[jira] [Commented] (HADOOP-14313) Replace/improve Hadoop's byte[] comparator

2018-06-05 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16501417#comment-16501417
 ] 

Akira Ajisaka commented on HADOOP-14313:


LGTM, +1

> Replace/improve Hadoop's byte[] comparator
> --
>
> Key: HADOOP-14313
> URL: https://issues.apache.org/jira/browse/HADOOP-14313
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 2.7.4
>Reporter: Vikas Vishwakarma
>Priority: Major
> Attachments: HADOOP-14313.001.patch, 
> HADOOP-14313.branch-2.7.001.patch, HADOOP-14313.branch-2.7.002.patch
>
>
> Hi,
> Recently we were looking at the Lexicographic byte array comparison in HBase. 
> We did microbenchmark for the byte array comparator of HADOOP ( 
> https://github.com/hanborq/hadoop/blob/master/src/core/org/apache/hadoop/io/FastByteComparisons.java#L161
>  ) , HBase Vs the latest byte array comparator from guava  ( 
> https://github.com/google/guava/blob/master/guava/src/com/google/common/primitives/UnsignedBytes.java#L362
>  ) and observed that the guava main branch version is much faster. 
> Specifically we see very good improvement when the byteArraySize%8 != 0 and 
> also for large byte arrays. I will update the benchmark results using JMH for 
> Hadoop vs Guava. For the jira on HBase, please refer HBASE-17877. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15217) org.apache.hadoop.fs.FsUrlConnection does not handle paths with spaces

2018-06-05 Thread Zsolt Venczel (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zsolt Venczel updated HADOOP-15217:
---
Attachment: HADOOP-15217.06.patch

> org.apache.hadoop.fs.FsUrlConnection does not handle paths with spaces
> --
>
> Key: HADOOP-15217
> URL: https://issues.apache.org/jira/browse/HADOOP-15217
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.0
>Reporter: Joseph Fourny
>Assignee: Zsolt Venczel
>Priority: Major
> Attachments: HADOOP-15217.01.patch, HADOOP-15217.02.patch, 
> HADOOP-15217.03.patch, HADOOP-15217.04.patch, HADOOP-15217.05.patch, 
> HADOOP-15217.06.patch, TestCase.java
>
>
> When _FsUrlStreamHandlerFactory_ is registered with _java.net.URL_ (ex: when 
> Spark is initialized), it breaks URLs with spaces (even though they are 
> properly URI-encoded). I traced the problem down to 
> _FSUrlConnection.connect()_ method. It naively gets the path from the URL, 
> which contains encoded spaces, and pases it to 
> _org.apache.hadoop.fs.Path(String)_ constructor. This is not correct, because 
> the docs clearly say that the string must NOT be encoded. Doing so causes 
> double encoding within the Path class (ie: %20 becomes %2520). 
> See attached JUnit test. 
> This test case mimics an issue I ran into when trying to use Commons 
> Configuration 1.9 AFTER initializing Spark. Commons Configuration uses URL 
> class to load configuration files, but Spark installs 
> _FsUrlStreamHandlerFactory_, which hits this issue. For now, we are using an 
> AspectJ aspect to "patch" the bytecode at load time to work-around the issue. 
> The real fix is quite simple. All you need to do is replace this line in 
> _org.apache.hadoop.fs.FsUrlConnection.connect()_:
>         is = fs.open(new Path(url.getPath()));
> with this line:
>      is = fs.open(new Path(url.*toUri()*.getPath()));
> URI.getPath() will correctly decode the path, which is what is expected by 
> _org.apache.hadoop.fs.Path(String)_ constructor.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15217) org.apache.hadoop.fs.FsUrlConnection does not handle paths with spaces

2018-06-05 Thread Zsolt Venczel (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16501444#comment-16501444
 ] 

Zsolt Venczel commented on HADOOP-15217:


Thank you [~xiaochen] for the discussion and the summary.
I've uploaded the patch based on your suggestions.

> org.apache.hadoop.fs.FsUrlConnection does not handle paths with spaces
> --
>
> Key: HADOOP-15217
> URL: https://issues.apache.org/jira/browse/HADOOP-15217
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.0
>Reporter: Joseph Fourny
>Assignee: Zsolt Venczel
>Priority: Major
> Attachments: HADOOP-15217.01.patch, HADOOP-15217.02.patch, 
> HADOOP-15217.03.patch, HADOOP-15217.04.patch, HADOOP-15217.05.patch, 
> HADOOP-15217.06.patch, TestCase.java
>
>
> When _FsUrlStreamHandlerFactory_ is registered with _java.net.URL_ (ex: when 
> Spark is initialized), it breaks URLs with spaces (even though they are 
> properly URI-encoded). I traced the problem down to 
> _FSUrlConnection.connect()_ method. It naively gets the path from the URL, 
> which contains encoded spaces, and pases it to 
> _org.apache.hadoop.fs.Path(String)_ constructor. This is not correct, because 
> the docs clearly say that the string must NOT be encoded. Doing so causes 
> double encoding within the Path class (ie: %20 becomes %2520). 
> See attached JUnit test. 
> This test case mimics an issue I ran into when trying to use Commons 
> Configuration 1.9 AFTER initializing Spark. Commons Configuration uses URL 
> class to load configuration files, but Spark installs 
> _FsUrlStreamHandlerFactory_, which hits this issue. For now, we are using an 
> AspectJ aspect to "patch" the bytecode at load time to work-around the issue. 
> The real fix is quite simple. All you need to do is replace this line in 
> _org.apache.hadoop.fs.FsUrlConnection.connect()_:
>         is = fs.open(new Path(url.getPath()));
> with this line:
>      is = fs.open(new Path(url.*toUri()*.getPath()));
> URI.getPath() will correctly decode the path, which is what is expected by 
> _org.apache.hadoop.fs.Path(String)_ constructor.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15217) org.apache.hadoop.fs.FsUrlConnection does not handle paths with spaces

2018-06-05 Thread Zsolt Venczel (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zsolt Venczel updated HADOOP-15217:
---
Attachment: (was: HADOOP-15217.06.patch)

> org.apache.hadoop.fs.FsUrlConnection does not handle paths with spaces
> --
>
> Key: HADOOP-15217
> URL: https://issues.apache.org/jira/browse/HADOOP-15217
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.0
>Reporter: Joseph Fourny
>Assignee: Zsolt Venczel
>Priority: Major
> Attachments: HADOOP-15217.01.patch, HADOOP-15217.02.patch, 
> HADOOP-15217.03.patch, HADOOP-15217.04.patch, HADOOP-15217.05.patch, 
> TestCase.java
>
>
> When _FsUrlStreamHandlerFactory_ is registered with _java.net.URL_ (ex: when 
> Spark is initialized), it breaks URLs with spaces (even though they are 
> properly URI-encoded). I traced the problem down to 
> _FSUrlConnection.connect()_ method. It naively gets the path from the URL, 
> which contains encoded spaces, and pases it to 
> _org.apache.hadoop.fs.Path(String)_ constructor. This is not correct, because 
> the docs clearly say that the string must NOT be encoded. Doing so causes 
> double encoding within the Path class (ie: %20 becomes %2520). 
> See attached JUnit test. 
> This test case mimics an issue I ran into when trying to use Commons 
> Configuration 1.9 AFTER initializing Spark. Commons Configuration uses URL 
> class to load configuration files, but Spark installs 
> _FsUrlStreamHandlerFactory_, which hits this issue. For now, we are using an 
> AspectJ aspect to "patch" the bytecode at load time to work-around the issue. 
> The real fix is quite simple. All you need to do is replace this line in 
> _org.apache.hadoop.fs.FsUrlConnection.connect()_:
>         is = fs.open(new Path(url.getPath()));
> with this line:
>      is = fs.open(new Path(url.*toUri()*.getPath()));
> URI.getPath() will correctly decode the path, which is what is expected by 
> _org.apache.hadoop.fs.Path(String)_ constructor.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15217) org.apache.hadoop.fs.FsUrlConnection does not handle paths with spaces

2018-06-05 Thread Zsolt Venczel (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zsolt Venczel updated HADOOP-15217:
---
Attachment: HADOOP-15217.06.patch

> org.apache.hadoop.fs.FsUrlConnection does not handle paths with spaces
> --
>
> Key: HADOOP-15217
> URL: https://issues.apache.org/jira/browse/HADOOP-15217
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.0
>Reporter: Joseph Fourny
>Assignee: Zsolt Venczel
>Priority: Major
> Attachments: HADOOP-15217.01.patch, HADOOP-15217.02.patch, 
> HADOOP-15217.03.patch, HADOOP-15217.04.patch, HADOOP-15217.05.patch, 
> HADOOP-15217.06.patch, TestCase.java
>
>
> When _FsUrlStreamHandlerFactory_ is registered with _java.net.URL_ (ex: when 
> Spark is initialized), it breaks URLs with spaces (even though they are 
> properly URI-encoded). I traced the problem down to 
> _FSUrlConnection.connect()_ method. It naively gets the path from the URL, 
> which contains encoded spaces, and pases it to 
> _org.apache.hadoop.fs.Path(String)_ constructor. This is not correct, because 
> the docs clearly say that the string must NOT be encoded. Doing so causes 
> double encoding within the Path class (ie: %20 becomes %2520). 
> See attached JUnit test. 
> This test case mimics an issue I ran into when trying to use Commons 
> Configuration 1.9 AFTER initializing Spark. Commons Configuration uses URL 
> class to load configuration files, but Spark installs 
> _FsUrlStreamHandlerFactory_, which hits this issue. For now, we are using an 
> AspectJ aspect to "patch" the bytecode at load time to work-around the issue. 
> The real fix is quite simple. All you need to do is replace this line in 
> _org.apache.hadoop.fs.FsUrlConnection.connect()_:
>         is = fs.open(new Path(url.getPath()));
> with this line:
>      is = fs.open(new Path(url.*toUri()*.getPath()));
> URI.getPath() will correctly decode the path, which is what is expected by 
> _org.apache.hadoop.fs.Path(String)_ constructor.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15514) NoClassDefFoundError of TimelineCollectorManager when starting MiniCluster

2018-06-05 Thread Rohith Sharma K S (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated HADOOP-15514:
---
Attachment: HADOOP-15514.01.patch

> NoClassDefFoundError of TimelineCollectorManager when starting MiniCluster
> --
>
> Key: HADOOP-15514
> URL: https://issues.apache.org/jira/browse/HADOOP-15514
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Jeff Zhang
>Assignee: Rohith Sharma K S
>Priority: Major
> Attachments: HADOOP-15514.01.patch
>
>
> {code:java}
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException: 
> java.lang.NoClassDefFoundError: 
> org/apache/hadoop/yarn/server/timelineservice/collector/TimelineCollectorManager
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   at java.lang.ClassLoader.defineClass1(Native Method)
>   at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
>   at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
>   at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
>   at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   at java.lang.Class.getDeclaredMethods0(Native Method)
>   at java.lang.Class.privateGetDeclaredMethods(Class.java:2701)
>   at java.lang.Class.getDeclaredMethods(Class.java:1975){code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15514) NoClassDefFoundError of TimelineCollectorManager when starting MiniCluster

2018-06-05 Thread Rohith Sharma K S (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated HADOOP-15514:
---
Status: Patch Available  (was: Open)

Updated the patch fixing MiniYARNcluster start issue. There are change I did
# Timeline service jar was excluded in hadoop-client-minicluster jar. This 
patch includes timeline-service jar classes. 
# After above change, started getting NoClassDefFoundError error for zookeeper 
package. Looking to hadoop-client-minicluster.jar, zookeeper package is 
excluded assuming that  hadoop-client-runtime.jar includes it. But zookeeper 
package was not shaded anywhere which leading this issue. I removed zookeeper 
package from exclude list as well. 
[~sunilg] [~vinodkv] kindly review this change. 

> NoClassDefFoundError of TimelineCollectorManager when starting MiniCluster
> --
>
> Key: HADOOP-15514
> URL: https://issues.apache.org/jira/browse/HADOOP-15514
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Jeff Zhang
>Assignee: Rohith Sharma K S
>Priority: Major
> Attachments: HADOOP-15514.01.patch
>
>
> {code:java}
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException: 
> java.lang.NoClassDefFoundError: 
> org/apache/hadoop/yarn/server/timelineservice/collector/TimelineCollectorManager
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   at java.lang.ClassLoader.defineClass1(Native Method)
>   at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
>   at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
>   at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
>   at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   at java.lang.Class.getDeclaredMethods0(Native Method)
>   at java.lang.Class.privateGetDeclaredMethods(Class.java:2701)
>   at java.lang.Class.getDeclaredMethods(Class.java:1975){code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12649) Improve Kerberos diagnostics and failure handling

2018-06-05 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-12649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16501535#comment-16501535
 ] 

Steve Loughran commented on HADOOP-12649:
-

[~m.benalla]: just add a comment on the JIRA saying you are working on it; 
submit a patch when you have one as a .patch file, then hit the "submit patch" 
button, so jenkins will do a review. I can assign the JIRA to you, but I'd 
rather wait until the first patch is in...

> Improve Kerberos diagnostics and failure handling
> -
>
> Key: HADOOP-12649
> URL: https://issues.apache.org/jira/browse/HADOOP-12649
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.7.1
> Environment: Kerberos
>Reporter: Steve Loughran
>Priority: Major
>
> Sometimes —apparently— some people cannot get kerberos to work.
> The ability to diagnose problems here is hampered by some aspects of UGI
> # the only way to turn on JAAS debug information is through an env var, not 
> within the JVM
> # failures are potentially underlogged
> # exceptions raised are generic IOEs, so can't be trapped and filtered
> # failure handling on the TGT renewer thread is nonexistent
> # the code is barely-readable, underdocumented mess.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-12649) Improve Kerberos diagnostics and failure handling

2018-06-05 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-12649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16501535#comment-16501535
 ] 

Steve Loughran edited comment on HADOOP-12649 at 6/5/18 9:42 AM:
-

[~m.benalla]: just add a comment on the JIRA saying you are working on it; 
submit a patch when you have one as a .patch file, then hit the "submit patch" 
button, so jenkins will do a review. I can assign the JIRA to you, but I'd 
rather wait until the first patch is up on the JIRA


was (Author: ste...@apache.org):
[~m.benalla]: just add a comment on the JIRA saying you are working on it; 
submit a patch when you have one as a .patch file, then hit the "submit patch" 
button, so jenkins will do a review. I can assign the JIRA to you, but I'd 
rather wait until the first patch is in...

> Improve Kerberos diagnostics and failure handling
> -
>
> Key: HADOOP-12649
> URL: https://issues.apache.org/jira/browse/HADOOP-12649
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.7.1
> Environment: Kerberos
>Reporter: Steve Loughran
>Priority: Major
>
> Sometimes —apparently— some people cannot get kerberos to work.
> The ability to diagnose problems here is hampered by some aspects of UGI
> # the only way to turn on JAAS debug information is through an env var, not 
> within the JVM
> # failures are potentially underlogged
> # exceptions raised are generic IOEs, so can't be trapped and filtered
> # failure handling on the TGT renewer thread is nonexistent
> # the code is barely-readable, underdocumented mess.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15217) org.apache.hadoop.fs.FsUrlConnection does not handle paths with spaces

2018-06-05 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16501576#comment-16501576
 ] 

genericqa commented on HADOOP-15217:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
40s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 29s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
28s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 28m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 52s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
44s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
37s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}143m 16s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HADOOP-15217 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12926524/HADOOP-15217.06.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 11588e347e64 3.13.0-137-generic #186-Ubuntu SMP Mon Dec 4 
19:09:19 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 6d5e87a |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14724/testReport/ |
| Max. process+thread count | 1360 (vs. ulimit of 1)

[jira] [Commented] (HADOOP-15514) NoClassDefFoundError of TimelineCollectorManager when starting MiniCluster

2018-06-05 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16501577#comment-16501577
 ] 

genericqa commented on HADOOP-15514:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 31m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
44m  6s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 16s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
22s{color} | {color:green} hadoop-client-minicluster in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 64m 49s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HADOOP-15514 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12926536/HADOOP-15514.01.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  xml  |
| uname | Linux 10ce68c564a9 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 0e3c315 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14725/testReport/ |
| Max. process+thread count | 325 (vs. ulimit of 1) |
| modules | C: hadoop-client-modules/hadoop-client-minicluster U: 
hadoop-client-modules/hadoop-client-minicluster |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14725/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> NoClassDefFoundError of TimelineCollectorManager when starting MiniCluster
> --
>
> Key: HADOOP-15514
> URL: https://issues.apache.org/jira/browse/HADOOP-15514
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Jeff Zhang
> 

[jira] [Comment Edited] (HADOOP-15514) NoClassDefFoundError of TimelineCollectorManager when starting MiniCluster

2018-06-05 Thread Rohith Sharma K S (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16501521#comment-16501521
 ] 

Rohith Sharma K S edited comment on HADOOP-15514 at 6/5/18 11:01 AM:
-

Updated the patch fixing MiniYARNcluster start issue. Following are the change 
in patch
# Timeline service jar was excluded in hadoop-client-minicluster jar. This 
patch includes timeline-service jar classes. 
# After above change, started getting NoClassDefFoundError error for zookeeper 
package. Looking to hadoop-client-minicluster.jar, zookeeper package is 
excluded assuming that  hadoop-client-runtime.jar includes it. But zookeeper 
package was not shaded anywhere which leading this issue. I removed zookeeper 
package from exclude list as well. 
[~sunilg] [~vinodkv] kindly review this change. 


was (Author: rohithsharma):
Updated the patch fixing MiniYARNcluster start issue. There are change I did
# Timeline service jar was excluded in hadoop-client-minicluster jar. This 
patch includes timeline-service jar classes. 
# After above change, started getting NoClassDefFoundError error for zookeeper 
package. Looking to hadoop-client-minicluster.jar, zookeeper package is 
excluded assuming that  hadoop-client-runtime.jar includes it. But zookeeper 
package was not shaded anywhere which leading this issue. I removed zookeeper 
package from exclude list as well. 
[~sunilg] [~vinodkv] kindly review this change. 

> NoClassDefFoundError of TimelineCollectorManager when starting MiniCluster
> --
>
> Key: HADOOP-15514
> URL: https://issues.apache.org/jira/browse/HADOOP-15514
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Jeff Zhang
>Assignee: Rohith Sharma K S
>Priority: Major
> Attachments: HADOOP-15514.01.patch
>
>
> {code:java}
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException: 
> java.lang.NoClassDefFoundError: 
> org/apache/hadoop/yarn/server/timelineservice/collector/TimelineCollectorManager
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   at java.lang.ClassLoader.defineClass1(Native Method)
>   at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
>   at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
>   at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
>   at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   at java.lang.Class.getDeclaredMethods0(Native Method)
>   at java.lang.Class.privateGetDeclaredMethods(Class.java:2701)
>   at java.lang.Class.getDeclaredMethods(Class.java:1975){code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12649) Improve Kerberos diagnostics and failure handling

2018-06-05 Thread Marouane BENALLA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-12649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16501618#comment-16501618
 ] 

Marouane BENALLA commented on HADOOP-12649:
---

Good, thank you. I'll try to do it in the next coming days !

> Improve Kerberos diagnostics and failure handling
> -
>
> Key: HADOOP-12649
> URL: https://issues.apache.org/jira/browse/HADOOP-12649
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.7.1
> Environment: Kerberos
>Reporter: Steve Loughran
>Priority: Major
>
> Sometimes —apparently— some people cannot get kerberos to work.
> The ability to diagnose problems here is hampered by some aspects of UGI
> # the only way to turn on JAAS debug information is through an env var, not 
> within the JVM
> # failures are potentially underlogged
> # exceptions raised are generic IOEs, so can't be trapped and filtered
> # failure handling on the TGT renewer thread is nonexistent
> # the code is barely-readable, underdocumented mess.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14178) Move Mockito up to version 2.x

2018-06-05 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-14178:
---
Attachment: HADOOP-14178.017.patch

> Move Mockito up to version 2.x
> --
>
> Key: HADOOP-14178
> URL: https://issues.apache.org/jira/browse/HADOOP-14178
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-14178.001.patch, HADOOP-14178.002.patch, 
> HADOOP-14178.003.patch, HADOOP-14178.004.patch, HADOOP-14178.005-wip.patch, 
> HADOOP-14178.005-wip2.patch, HADOOP-14178.005-wip3.patch, 
> HADOOP-14178.005-wip4.patch, HADOOP-14178.005-wip5.patch, 
> HADOOP-14178.005-wip6.patch, HADOOP-14178.005.patch, HADOOP-14178.006.patch, 
> HADOOP-14178.007.patch, HADOOP-14178.008.patch, HADOOP-14178.009.patch, 
> HADOOP-14178.010.patch, HADOOP-14178.011.patch, HADOOP-14178.012.patch, 
> HADOOP-14178.013.patch, HADOOP-14178.014.patch, HADOOP-14178.015.patch, 
> HADOOP-14178.016.patch, HADOOP-14178.017.patch
>
>
> I don't know when Hadoop picked up Mockito, but it has been frozen at 1.8.5 
> since the switch to maven in 2011. 
> Mockito is now at version 2.1, [with lots of Java 8 
> support|https://github.com/mockito/mockito/wiki/What%27s-new-in-Mockito-2]. 
> That' s not just defining actions as closures, but in supporting Optional 
> types, mocking methods in interfaces, etc. 
> It's only used for testing, and, *provided there aren't regressions*, cost of 
> upgrade is low. The good news: test tools usually come with good test 
> coverage. The bad: mockito does go deep into java bytecodes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14178) Move Mockito up to version 2.x

2018-06-05 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16501624#comment-16501624
 ] 

Akira Ajisaka commented on HADOOP-14178:


017 patch: rebased

> Move Mockito up to version 2.x
> --
>
> Key: HADOOP-14178
> URL: https://issues.apache.org/jira/browse/HADOOP-14178
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-14178.001.patch, HADOOP-14178.002.patch, 
> HADOOP-14178.003.patch, HADOOP-14178.004.patch, HADOOP-14178.005-wip.patch, 
> HADOOP-14178.005-wip2.patch, HADOOP-14178.005-wip3.patch, 
> HADOOP-14178.005-wip4.patch, HADOOP-14178.005-wip5.patch, 
> HADOOP-14178.005-wip6.patch, HADOOP-14178.005.patch, HADOOP-14178.006.patch, 
> HADOOP-14178.007.patch, HADOOP-14178.008.patch, HADOOP-14178.009.patch, 
> HADOOP-14178.010.patch, HADOOP-14178.011.patch, HADOOP-14178.012.patch, 
> HADOOP-14178.013.patch, HADOOP-14178.014.patch, HADOOP-14178.015.patch, 
> HADOOP-14178.016.patch, HADOOP-14178.017.patch
>
>
> I don't know when Hadoop picked up Mockito, but it has been frozen at 1.8.5 
> since the switch to maven in 2011. 
> Mockito is now at version 2.1, [with lots of Java 8 
> support|https://github.com/mockito/mockito/wiki/What%27s-new-in-Mockito-2]. 
> That' s not just defining actions as closures, but in supporting Optional 
> types, mocking methods in interfaces, etc. 
> It's only used for testing, and, *provided there aren't regressions*, cost of 
> upgrade is low. The good news: test tools usually come with good test 
> coverage. The bad: mockito does go deep into java bytecodes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13363) Upgrade protobuf from 2.5.0 to something newer

2018-06-05 Thread Marcin Juszkiewicz (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-13363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16501650#comment-16501650
 ] 

Marcin Juszkiewicz commented on HADOOP-13363:
-

Any update?

> Upgrade protobuf from 2.5.0 to something newer
> --
>
> Key: HADOOP-13363
> URL: https://issues.apache.org/jira/browse/HADOOP-13363
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Allen Wittenauer
>Assignee: Tsuyoshi Ozawa
>Priority: Major
> Attachments: HADOOP-13363.001.patch, HADOOP-13363.002.patch, 
> HADOOP-13363.003.patch, HADOOP-13363.004.patch, HADOOP-13363.005.patch
>
>
> Standard protobuf 2.5.0 does not work properly on many platforms.  (See, for 
> example, https://gist.github.com/BennettSmith/7111094 ).  In order for us to 
> avoid crazy work arounds in the build environment and the fact that 2.5.0 is 
> starting to slowly disappear as a standard install-able package for even 
> Linux/x86, we need to either upgrade or self bundle or something else.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12741) UserGroupInformation.loginUserFromKeytab() creates background thread which is not getting killed even after application exited

2018-06-05 Thread Marouane BENALLA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-12741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16501813#comment-16501813
 ] 

Marouane BENALLA commented on HADOOP-12741:
---

We've done the same trick (we have updated the kinit configuration to point to 
a local script that does nothing) because we're having some race condition on 
the cache file between the kinit deamon started by hadoop and k5start (used for 
automatic renewal). So It's better if we could move with an option to 
deactivate the deamon thread.

> UserGroupInformation.loginUserFromKeytab() creates background thread which is 
> not getting killed even after application exited
> --
>
> Key: HADOOP-12741
> URL: https://issues.apache.org/jira/browse/HADOOP-12741
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: security
>Affects Versions: 2.6.3
>Reporter: Umesh K
>Priority: Major
>
> Hi UserGroupInformation.loginUserFromKeytab() method creates one background 
> thread for keytab refresh after every 10 hours I guess. One of my application 
> is using UserGroupInformation.loginUserFromKeytab() but at the end of my 
> application the background thread created by it does not get killed it keeps 
> on running. How do I kill/stop thead started by 
> UserGroupInformation.loginUserFromKeytab()? Please guide or please provide 
> method inside UserGroupInformation so that we can kill it or stop it. Thanks 
> in advance. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-12741) UserGroupInformation.loginUserFromKeytab() creates background thread which is not getting killed even after application exited

2018-06-05 Thread Marouane BENALLA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-12741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16501813#comment-16501813
 ] 

Marouane BENALLA edited comment on HADOOP-12741 at 6/5/18 1:48 PM:
---

We've done the same trick (we have updated the kinit configuration to point to 
a local script that does nothing) because we're having some race condition on 
the cache file between the kinit daemon started by Hadoop and k5start (used for 
automatic renewal). So It's better if we could move with an option to 
deactivate the daemon thread.


was (Author: m.benalla):
We've done the same trick (we have updated the kinit configuration to point to 
a local script that does nothing) because we're having some race condition on 
the cache file between the kinit deamon started by hadoop and k5start (used for 
automatic renewal). So It's better if we could move with an option to 
deactivate the deamon thread.

> UserGroupInformation.loginUserFromKeytab() creates background thread which is 
> not getting killed even after application exited
> --
>
> Key: HADOOP-12741
> URL: https://issues.apache.org/jira/browse/HADOOP-12741
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: security
>Affects Versions: 2.6.3
>Reporter: Umesh K
>Priority: Major
>
> Hi UserGroupInformation.loginUserFromKeytab() method creates one background 
> thread for keytab refresh after every 10 hours I guess. One of my application 
> is using UserGroupInformation.loginUserFromKeytab() but at the end of my 
> application the background thread created by it does not get killed it keeps 
> on running. How do I kill/stop thead started by 
> UserGroupInformation.loginUserFromKeytab()? Please guide or please provide 
> method inside UserGroupInformation so that we can kill it or stop it. Thanks 
> in advance. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-8980) TestRPC and TestSaslRPC fail on Windows

2018-06-05 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-8980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reassigned HADOOP-8980:
--

Assignee: (was: Chris Nauroth)

> TestRPC and TestSaslRPC fail on Windows
> ---
>
> Key: HADOOP-8980
> URL: https://issues.apache.org/jira/browse/HADOOP-8980
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: trunk-win
>Reporter: Chris Nauroth
>Priority: Major
>
> This failure may indicate a difference in socket handling on Windows.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-12550) NativeIO#renameTo on Windows cannot replace an existing file at the destination.

2018-06-05 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-12550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reassigned HADOOP-12550:
---

Assignee: (was: Chris Nauroth)

> NativeIO#renameTo on Windows cannot replace an existing file at the 
> destination.
> 
>
> Key: HADOOP-12550
> URL: https://issues.apache.org/jira/browse/HADOOP-12550
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
> Environment: Windows
>Reporter: Chris Nauroth
>Priority: Major
> Attachments: HADOOP-12550.001.patch, HADOOP-12550.002.patch
>
>
> {{NativeIO#renameTo}} currently has different semantics on Linux vs. Windows 
> if a file already exists at the destination.  On Linux, it's a passthrough to 
> the [rename|http://linux.die.net/man/2/rename] syscall, which will replace an 
> existing file at the destination.  On Windows, it's a passthrough to 
> [MoveFile|https://msdn.microsoft.com/en-us/library/windows/desktop/aa365239%28v=vs.85%29.aspx?f=255&MSPPError=-2147217396],
>  which cannot replace an existing file at the destination and instead 
> triggers an error.  The easiest way to observe this difference is to run the 
> HDFS test {{TestRollingUpgrade#testRollback}}.  This fails on Windows due to 
> a block recovery after truncate trying to replace a block at an existing 
> destination path.  This issue proposes to use 
> [MoveFileEx|https://msdn.microsoft.com/en-us/library/windows/desktop/aa365240(v=vs.85).aspx]
>  on Windows with the {{MOVEFILE_REPLACE_EXISTING}} flag.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12550) NativeIO#renameTo on Windows cannot replace an existing file at the destination.

2018-06-05 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-12550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12550:

Status: Patch Available  (was: Open)

this is a very old patch, but submitting it to see what happens. With the work 
on better windows native stuff, maybe it can be merged in with the ongoing work

> NativeIO#renameTo on Windows cannot replace an existing file at the 
> destination.
> 
>
> Key: HADOOP-12550
> URL: https://issues.apache.org/jira/browse/HADOOP-12550
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
> Environment: Windows
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Major
> Attachments: HADOOP-12550.001.patch, HADOOP-12550.002.patch
>
>
> {{NativeIO#renameTo}} currently has different semantics on Linux vs. Windows 
> if a file already exists at the destination.  On Linux, it's a passthrough to 
> the [rename|http://linux.die.net/man/2/rename] syscall, which will replace an 
> existing file at the destination.  On Windows, it's a passthrough to 
> [MoveFile|https://msdn.microsoft.com/en-us/library/windows/desktop/aa365239%28v=vs.85%29.aspx?f=255&MSPPError=-2147217396],
>  which cannot replace an existing file at the destination and instead 
> triggers an error.  The easiest way to observe this difference is to run the 
> HDFS test {{TestRollingUpgrade#testRollback}}.  This fails on Windows due to 
> a block recovery after truncate trying to replace a block at an existing 
> destination path.  This issue proposes to use 
> [MoveFileEx|https://msdn.microsoft.com/en-us/library/windows/desktop/aa365240(v=vs.85).aspx]
>  on Windows with the {{MOVEFILE_REPLACE_EXISTING}} flag.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-12550) NativeIO#renameTo on Windows cannot replace an existing file at the destination.

2018-06-05 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-12550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reassigned HADOOP-12550:
---

Assignee: Chris Nauroth

> NativeIO#renameTo on Windows cannot replace an existing file at the 
> destination.
> 
>
> Key: HADOOP-12550
> URL: https://issues.apache.org/jira/browse/HADOOP-12550
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
> Environment: Windows
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Major
> Attachments: HADOOP-12550.001.patch, HADOOP-12550.002.patch
>
>
> {{NativeIO#renameTo}} currently has different semantics on Linux vs. Windows 
> if a file already exists at the destination.  On Linux, it's a passthrough to 
> the [rename|http://linux.die.net/man/2/rename] syscall, which will replace an 
> existing file at the destination.  On Windows, it's a passthrough to 
> [MoveFile|https://msdn.microsoft.com/en-us/library/windows/desktop/aa365239%28v=vs.85%29.aspx?f=255&MSPPError=-2147217396],
>  which cannot replace an existing file at the destination and instead 
> triggers an error.  The easiest way to observe this difference is to run the 
> HDFS test {{TestRollingUpgrade#testRollback}}.  This fails on Windows due to 
> a block recovery after truncate trying to replace a block at an existing 
> destination path.  This issue proposes to use 
> [MoveFileEx|https://msdn.microsoft.com/en-us/library/windows/desktop/aa365240(v=vs.85).aspx]
>  on Windows with the {{MOVEFILE_REPLACE_EXISTING}} flag.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12550) NativeIO#renameTo on Windows cannot replace an existing file at the destination.

2018-06-05 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-12550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16501928#comment-16501928
 ] 

genericqa commented on HADOOP-12550:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HADOOP-12550 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-12550 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12770710/HADOOP-12550.002.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14727/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> NativeIO#renameTo on Windows cannot replace an existing file at the 
> destination.
> 
>
> Key: HADOOP-12550
> URL: https://issues.apache.org/jira/browse/HADOOP-12550
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
> Environment: Windows
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Major
> Attachments: HADOOP-12550.001.patch, HADOOP-12550.002.patch
>
>
> {{NativeIO#renameTo}} currently has different semantics on Linux vs. Windows 
> if a file already exists at the destination.  On Linux, it's a passthrough to 
> the [rename|http://linux.die.net/man/2/rename] syscall, which will replace an 
> existing file at the destination.  On Windows, it's a passthrough to 
> [MoveFile|https://msdn.microsoft.com/en-us/library/windows/desktop/aa365239%28v=vs.85%29.aspx?f=255&MSPPError=-2147217396],
>  which cannot replace an existing file at the destination and instead 
> triggers an error.  The easiest way to observe this difference is to run the 
> HDFS test {{TestRollingUpgrade#testRollback}}.  This fails on Windows due to 
> a block recovery after truncate trying to replace a block at an existing 
> destination path.  This issue proposes to use 
> [MoveFileEx|https://msdn.microsoft.com/en-us/library/windows/desktop/aa365240(v=vs.85).aspx]
>  on Windows with the {{MOVEFILE_REPLACE_EXISTING}} flag.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14201) Some 2.8.0 unit tests are failing on windows

2018-06-05 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16501929#comment-16501929
 ] 

Steve Loughran commented on HADOOP-14201:
-

I had completely forgotten about this. 

# looking at it, it's adding more diagnostics when tests fail than fixing 
things: printing exceptions, logging stdout. The things you need to identify 
failures
# and better test diags can only be good

If anyone can update this, it'd be great. I'll move under HADOOP-15475 so it 
can be dealt with there


> Some 2.8.0 unit tests are failing on windows
> 
>
> Key: HADOOP-14201
> URL: https://issues.apache.org/jira/browse/HADOOP-14201
> Project: Hadoop Common
>  Issue Type: Task
>  Components: test
>Affects Versions: 2.8.0
> Environment: Windows Server 2012.
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-14201-001.patch
>
>
> Some of the 2.8.0 tests are failing locally, without much in the way of 
> diagnostics. They may be false alarms related to system, VM setup, 
> performance, or they may be a sign of a problem.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14201) Some 2.8.0 unit tests are failing on windows

2018-06-05 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14201:

Issue Type: Sub-task  (was: Task)
Parent: HADOOP-15475

> Some 2.8.0 unit tests are failing on windows
> 
>
> Key: HADOOP-14201
> URL: https://issues.apache.org/jira/browse/HADOOP-14201
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 2.8.0
> Environment: Windows Server 2012.
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-14201-001.patch
>
>
> Some of the 2.8.0 tests are failing locally, without much in the way of 
> diagnostics. They may be false alarms related to system, VM setup, 
> performance, or they may be a sign of a problem.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-8980) TestRPC and TestSaslRPC fail on Windows

2018-06-05 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-8980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-8980:
---
Issue Type: Sub-task  (was: Bug)
Parent: HADOOP-15475

> TestRPC and TestSaslRPC fail on Windows
> ---
>
> Key: HADOOP-8980
> URL: https://issues.apache.org/jira/browse/HADOOP-8980
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Affects Versions: trunk-win
>Reporter: Chris Nauroth
>Priority: Major
>
> This failure may indicate a difference in socket handling on Windows.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12463) TestShell.testGetSignalKillCommand failing on windows

2018-06-05 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-12463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12463:

Issue Type: Sub-task  (was: Bug)
Parent: HADOOP-15475

> TestShell.testGetSignalKillCommand failing on windows
> -
>
> Key: HADOOP-12463
> URL: https://issues.apache.org/jira/browse/HADOOP-12463
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 2.8.0, 3.0.0-alpha1
> Environment: windows
>Reporter: Steve Loughran
>Priority: Critical
> Attachments: HADOOP-12463-001.patch, HADOOP-12463.1.branch-2.patch
>
>
> TestShell.testGetSignalKillCommand is failing on windows; the command to 
> query a process isn't that which the test expects.
> Maybe we need to have some policy that nothing goes into Shell without being 
> tested on Windows first: its where things meet.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-8980) TestRPC and TestSaslRPC fail on Windows

2018-06-05 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-8980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16501932#comment-16501932
 ] 

Steve Loughran commented on HADOOP-8980:


This is a really old issue/patch, but it falls under HADOOP-15475, so making a 
subtask. 

> TestRPC and TestSaslRPC fail on Windows
> ---
>
> Key: HADOOP-8980
> URL: https://issues.apache.org/jira/browse/HADOOP-8980
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Affects Versions: trunk-win
>Reporter: Chris Nauroth
>Priority: Major
>
> This failure may indicate a difference in socket handling on Windows.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12463) TestShell.testGetSignalKillCommand failing on windows

2018-06-05 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-12463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16501936#comment-16501936
 ] 

Steve Loughran commented on HADOOP-12463:
-

As discussed, this is probably a DONE or CANNOT REPRODUCE; it just needs people 
to play with to be sure that's the case

> TestShell.testGetSignalKillCommand failing on windows
> -
>
> Key: HADOOP-12463
> URL: https://issues.apache.org/jira/browse/HADOOP-12463
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 2.8.0, 3.0.0-alpha1
> Environment: windows
>Reporter: Steve Loughran
>Priority: Critical
> Attachments: HADOOP-12463-001.patch, HADOOP-12463.1.branch-2.patch
>
>
> TestShell.testGetSignalKillCommand is failing on windows; the command to 
> query a process isn't that which the test expects.
> Maybe we need to have some policy that nothing goes into Shell without being 
> tested on Windows first: its where things meet.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-11285) FileUtil operations don't check for native lib loaded on windows

2018-06-05 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-11285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-11285:

Issue Type: Sub-task  (was: Bug)
Parent: HADOOP-15461

> FileUtil operations don't check for native lib loaded on windows
> 
>
> Key: HADOOP-11285
> URL: https://issues.apache.org/jira/browse/HADOOP-11285
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: util
>Affects Versions: 2.6.0
> Environment: windows
>Reporter: Steve Loughran
>Priority: Major
>
> On windows {{FileUtil.canRead()}} and the like requires the native APIs (at 
> least until a migration to java 7 APIs). The methods do not, however, call  
> {{NativeIO.isAvailable()}} to verify the native libs are there. As a result, 
> the calls fail with less useful stack traces. 
> if Java 7 allows all of these calls to be replaced, then this is a non-issue
> If not, I propose some {{verifyAvailableOnWindows()}} method which triggers 
> an exception; one which includes some hints about the problem —perhaps a URL 
> to a wiki page on the topic



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12550) NativeIO#renameTo on Windows cannot replace an existing file at the destination.

2018-06-05 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-12550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12550:

Issue Type: Sub-task  (was: Bug)
Parent: HADOOP-15461

> NativeIO#renameTo on Windows cannot replace an existing file at the 
> destination.
> 
>
> Key: HADOOP-12550
> URL: https://issues.apache.org/jira/browse/HADOOP-12550
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: native
> Environment: Windows
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Major
> Attachments: HADOOP-12550.001.patch, HADOOP-12550.002.patch
>
>
> {{NativeIO#renameTo}} currently has different semantics on Linux vs. Windows 
> if a file already exists at the destination.  On Linux, it's a passthrough to 
> the [rename|http://linux.die.net/man/2/rename] syscall, which will replace an 
> existing file at the destination.  On Windows, it's a passthrough to 
> [MoveFile|https://msdn.microsoft.com/en-us/library/windows/desktop/aa365239%28v=vs.85%29.aspx?f=255&MSPPError=-2147217396],
>  which cannot replace an existing file at the destination and instead 
> triggers an error.  The easiest way to observe this difference is to run the 
> HDFS test {{TestRollingUpgrade#testRollback}}.  This fails on Windows due to 
> a block recovery after truncate trying to replace a block at an existing 
> destination path.  This issue proposes to use 
> [MoveFileEx|https://msdn.microsoft.com/en-us/library/windows/desktop/aa365240(v=vs.85).aspx]
>  on Windows with the {{MOVEFILE_REPLACE_EXISTING}} flag.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14199) TestFsShellList.testList fails on windows: illegal filenames

2018-06-05 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14199:

Attachment: HADOOP-15496.000.patch

> TestFsShellList.testList fails on windows: illegal filenames
> 
>
> Key: HADOOP-14199
> URL: https://issues.apache.org/jira/browse/HADOOP-14199
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.8.0
> Environment: win64
>Reporter: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-15496.000.patch
>
>
> {{TestFsShellList.testList}} fails setting up the files to test against
> {code}
> org.apache.hadoop.io.nativeio.NativeIOException: The filename, directory 
> name, or volume label syntax is incorrect.
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-14199) TestFsShellList.testList fails on windows: illegal filenames

2018-06-05 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reassigned HADOOP-14199:
---

Assignee: Anbang Hu

> TestFsShellList.testList fails on windows: illegal filenames
> 
>
> Key: HADOOP-14199
> URL: https://issues.apache.org/jira/browse/HADOOP-14199
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.8.0
> Environment: win64
>Reporter: Steve Loughran
>Assignee: Anbang Hu
>Priority: Minor
> Attachments: HADOOP-15496.000.patch
>
>
> {{TestFsShellList.testList}} fails setting up the files to test against
> {code}
> org.apache.hadoop.io.nativeio.NativeIOException: The filename, directory 
> name, or volume label syntax is incorrect.
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14199) TestFsShellList.testList fails on windows: illegal filenames

2018-06-05 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14199:

Status: Patch Available  (was: Open)

> TestFsShellList.testList fails on windows: illegal filenames
> 
>
> Key: HADOOP-14199
> URL: https://issues.apache.org/jira/browse/HADOOP-14199
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.8.0
> Environment: win64
>Reporter: Steve Loughran
>Assignee: Anbang Hu
>Priority: Minor
> Attachments: HADOOP-15496.000.patch
>
>
> {{TestFsShellList.testList}} fails setting up the files to test against
> {code}
> org.apache.hadoop.io.nativeio.NativeIOException: The filename, directory 
> name, or volume label syntax is incorrect.
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15496) TestFsShellList#testList fails on Windows

2018-06-05 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16501944#comment-16501944
 ] 

Steve Loughran commented on HADOOP-15496:
-

closed as a duplicate; attached patch to existing JIRA...we like to assign to 
the older ones as they have more watchers. 

patch makes sense though: "\" isn't a legal entry in a windows path, and 
*every* FS is allowed to have its own set of invalid characters

> TestFsShellList#testList fails on Windows
> -
>
> Key: HADOOP-15496
> URL: https://issues.apache.org/jira/browse/HADOOP-15496
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
>  Labels: Windows
> Attachments: HADOOP-15496.000.patch
>
>
> [TestFsShellList#testList|https://builds.apache.org/job/hadoop-trunk-win/478/testReport/org.apache.hadoop.fs/TestFsShellList/testList/]
>  fails on Windows because Windows filename does not accept "\", while in the 
> test
> {code:java}
> createFile(new Path(testRootDir, "abc\bd\tef"));
> ...
> createFile(new Path(testRootDir, "qq\r123"));
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14199) TestFsShellList.testList fails on windows: illegal filenames

2018-06-05 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14199:

Issue Type: Sub-task  (was: Bug)
Parent: HADOOP-15475

> TestFsShellList.testList fails on windows: illegal filenames
> 
>
> Key: HADOOP-14199
> URL: https://issues.apache.org/jira/browse/HADOOP-14199
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 2.8.0
> Environment: win64
>Reporter: Steve Loughran
>Assignee: Anbang Hu
>Priority: Minor
> Attachments: HADOOP-15496.000.patch
>
>
> {{TestFsShellList.testList}} fails setting up the files to test against
> {code}
> org.apache.hadoop.io.nativeio.NativeIOException: The filename, directory 
> name, or volume label syntax is incorrect.
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15475) Fix broken unit tests on Windows

2018-06-05 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15475:

Affects Version/s: 3.1.0
   2.9.1

> Fix broken unit tests on Windows
> 
>
> Key: HADOOP-15475
> URL: https://issues.apache.org/jira/browse/HADOOP-15475
> Project: Hadoop Common
>  Issue Type: Test
>Affects Versions: 3.1.0, 2.9.1
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
>  Labels: Windows
>
> There are hundreds of unit tests that fail on Windows. This JIRA tracks the 
> effort to fix them.
> The main reasons for unit test failures on Windows are:
> * Windows/Linux path formats (e.g., HDFS-10256).
> * Line separator.
> * Locked files: Windows locks files when opening them.
> ** The typical trigger is not cleaning MiniDFSCluster leaves files locked 
> when a test times out; they need to be cleaned using After.
> * Memory lock size.
> * Slow DNS resolution (e.g., HDFS-13569).
> * Locked ports (e.g., HDFS-11700)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15475) Fix broken unit tests on Windows

2018-06-05 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16501946#comment-16501946
 ] 

Steve Loughran commented on HADOOP-15475:
-

the graph is going in the right direction. 

One thing which would be good would be for some official ASF windows libs to 
ship. I do the set on [github|https://github.com/steveloughran/winutils], but 
that's not quite the same as the ASF blessing. Even without all the tests 
passing, we can at least include the native binaries as signed artifacts

> Fix broken unit tests on Windows
> 
>
> Key: HADOOP-15475
> URL: https://issues.apache.org/jira/browse/HADOOP-15475
> Project: Hadoop Common
>  Issue Type: Test
>Affects Versions: 3.1.0, 2.9.1
>Reporter: Anbang Hu
>Assignee: Anbang Hu
>Priority: Minor
>  Labels: Windows
>
> There are hundreds of unit tests that fail on Windows. This JIRA tracks the 
> effort to fix them.
> The main reasons for unit test failures on Windows are:
> * Windows/Linux path formats (e.g., HDFS-10256).
> * Line separator.
> * Locked files: Windows locks files when opening them.
> ** The typical trigger is not cleaning MiniDFSCluster leaves files locked 
> when a test times out; they need to be cleaned using After.
> * Memory lock size.
> * Slow DNS resolution (e.g., HDFS-13569).
> * Locked ports (e.g., HDFS-11700)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13223) winutils.exe is a bug nexus and should be killed with an axe.

2018-06-05 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-13223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13223:

Issue Type: Sub-task  (was: Improvement)
Parent: HADOOP-15461

> winutils.exe is a bug nexus and should be killed with an axe.
> -
>
> Key: HADOOP-13223
> URL: https://issues.apache.org/jira/browse/HADOOP-13223
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: bin
>Affects Versions: 2.6.0
> Environment: Microsoft Windows, all versions
>Reporter: john lilley
>Priority: Major
>
> winutils.exe was apparently created as a stopgap measure to allow Hadoop to 
> "work" on Windows platforms, because the NativeIO libraries aren't 
> implemented there (edit: even NativeIO probably doesn't cover the operations 
> that winutils.exe is used for).  Rather than building a DLL that makes native 
> OS calls, the creators of winutils.exe must have decided that it would be 
> more expedient to create an EXE to carry out file system operations in a 
> linux-like fashion.  Unfortunately, like many stopgap measures in software, 
> this one has persisted well beyond its expected lifetime and usefulness.  My 
> team creates software that runs on Windows and Linux, and winutils.exe is 
> probably responsible for 20% of all issues we encounter, both during 
> development and in the field.
> Problem #1 with winutils.exe is that it is simply missing from many popular 
> distros and/or the client-side software installation for said distros, when 
> supplied, fails to install winutils.exe.  Thus, as software developers, we 
> are forced to pick one version and distribute and install it with our 
> software.
> Which leads to problem #2: winutils.exe are not always compatible.  In 
> particular, MapR MUST have its winutils.exe in the system path, but doing so 
> breaks the Hadoop distro for every other Hadoop vendor.  This makes creating 
> and maintaining test environments that work with all of the Hadoop distros we 
> want to test unnecessarily tedious and error-prone.
> Problem #3 is that the mechanism by which you inform the Hadoop client 
> software where to find winutils.exe is poorly documented and fragile.  First, 
> it can be in the PATH.  If it is in the PATH, that is where it is found.  
> However, the documentation, such as it is, makes no mention of this, and 
> instead says that you should set the HADOOP_HOME environment variable, which 
> does NOT override the winutils.exe found in your system PATH.
> Which leads to problem #4: There is no logging that says where winutils.exe 
> was actually found and loaded.  Because of this, fixing problems of finding 
> the wrong winutils.exe are extremely difficult.
> Problem #5 is that most of the time, such as when accessing straight up HDFS 
> and YARN, one does not *need* winutils.exe.  But if it is missing, the log 
> messages complain about its absence.  When we are trying to diagnose an 
> obscure issue in Hadoop (of which there are many), the presence of this red 
> herring leads to all sorts of time wasted until someone on the team points 
> out that winutils.exe is not the problem, at least not this time.
> Problem #6 is that errors and stack traces from issues involving winutils.exe 
> are not helpful.  The Java stack trace ends at the ProcessBuilder call.  Only 
> through bitter experience is one able to connect the dots from 
> "ProcessBuilder is the last thing on the stack" to "something is wrong with 
> winutils.exe".
> Note that none of these involve running Hadoop on Windows.  They are only 
> encountered when using Hadoop client libraries to access a cluster from 
> Windows.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15307) Improve NFS error handling: Unsupported verifier flavorAUTH_SYS

2018-06-05 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15307:

Labels:   (was: newbie)

> Improve NFS error handling: Unsupported verifier flavorAUTH_SYS
> ---
>
> Key: HADOOP-15307
> URL: https://issues.apache.org/jira/browse/HADOOP-15307
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: nfs
> Environment: CentOS 7.4, CDH5.13.1, Kerberized Hadoop cluster
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-15307.001.patch, HADOOP-15307.002.patch
>
>
> When NFS gateway starts and if the portmapper request is denied by rpcbind 
> for any reason (in our case, /etc/hosts.allow did not have the localhost), 
> NFS gateway fails with the following obscure exception:
> {noformat}
> 2018-03-05 12:49:31,976 INFO org.apache.hadoop.oncrpc.SimpleUdpServer: 
> Started listening to UDP requests at port 4242 for Rpc program: mountd at 
> localhost:4242 with workerCount 1
> 2018-03-05 12:49:31,988 INFO org.apache.hadoop.oncrpc.SimpleTcpServer: 
> Started listening to TCP requests at port 4242 for Rpc program: mountd at 
> localhost:4242 with workerCount 1
> 2018-03-05 12:49:31,993 TRACE org.apache.hadoop.oncrpc.RpcCall: 
> Xid:692394656, messageType:RPC_CALL, rpcVersion:2, program:10, version:2, 
> procedure:1, credential:(AuthFlavor:AUTH_NONE), 
> verifier:(AuthFlavor:AUTH_NONE)
> 2018-03-05 12:49:31,998 FATAL org.apache.hadoop.mount.MountdBase: Failed to 
> start the server. Cause:
> java.lang.UnsupportedOperationException: Unsupported verifier flavorAUTH_SYS
> at 
> org.apache.hadoop.oncrpc.security.Verifier.readFlavorAndVerifier(Verifier.java:45)
> at org.apache.hadoop.oncrpc.RpcDeniedReply.read(RpcDeniedReply.java:50)
> at org.apache.hadoop.oncrpc.RpcReply.read(RpcReply.java:67)
> at org.apache.hadoop.oncrpc.SimpleUdpClient.run(SimpleUdpClient.java:71)
> at org.apache.hadoop.oncrpc.RpcProgram.register(RpcProgram.java:130)
> at org.apache.hadoop.oncrpc.RpcProgram.register(RpcProgram.java:101)
> at org.apache.hadoop.mount.MountdBase.start(MountdBase.java:83)
> at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.startServiceInternal(Nfs3.java:56)
> at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.startService(Nfs3.java:69)
> at 
> org.apache.hadoop.hdfs.nfs.nfs3.PrivilegedNfsGatewayStarter.start(PrivilegedNfsGatewayStarter.java:60)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.apache.commons.daemon.support.DaemonLoader.start(DaemonLoader.java:243)
> 2018-03-05 12:49:32,007 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1{noformat}
>  Reading the code comment for class Verifier, I think this bug existed since 
> its inception
> {code:java}
> /**
>  * Base class for verifier. Currently our authentication only supports 3 types
>  * of auth flavors: {@link RpcAuthInfo.AuthFlavor#AUTH_NONE}, {@link 
> RpcAuthInfo.AuthFlavor#AUTH_SYS},
>  * and {@link RpcAuthInfo.AuthFlavor#RPCSEC_GSS}. Thus for verifier we only 
> need to handle
>  * AUTH_NONE and RPCSEC_GSS
>  */
> public abstract class Verifier extends RpcAuthInfo {{code}
> The verifier should also handle AUTH_SYS too.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15493) DiskChecker should handle disk full situation

2018-06-05 Thread Arpit Agarwal (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16502139#comment-16502139
 ] 

Arpit Agarwal commented on HADOOP-15493:


Yeah checking for the messages is terribly ugly. However I don't know any other 
filesystem agnostic way of checking for disk full situation. Threshold-based 
checks are inaccurate:
# Space may be freed up between the failure and running {{getUsableSpace}}.
# Filesystems may not report zero free space when the disk is full (especially 
xfs) so we can't compare with zero. We'll need an inaccurate threshold.


> DiskChecker should handle disk full situation
> -
>
> Key: HADOOP-15493
> URL: https://issues.apache.org/jira/browse/HADOOP-15493
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Critical
> Attachments: HADOOP-15493.01.patch, HADOOP-15493.02.patch
>
>
> DiskChecker#checkDirWithDiskIo creates a file to verify that the disk is 
> writable.
> However check should not fail when file creation fails due to disk being 
> full. This avoids marking full disks as _failed_.
> Reported by [~kihwal] and [~daryn] in HADOOP-15450. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-10768) Optimize Hadoop RPC encryption performance

2018-06-05 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-10768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16502150#comment-16502150
 ] 

Wei-Chiu Chuang commented on HADOOP-10768:
--

Hi Daryn,

Sounds impressive that you're able to get down to ~5% penalty. Please file a 
new Jira when you have a patch available.

> Optimize Hadoop RPC encryption performance
> --
>
> Key: HADOOP-10768
> URL: https://issues.apache.org/jira/browse/HADOOP-10768
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: performance, security
>Affects Versions: 3.0.0-alpha1
>Reporter: Yi Liu
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Attachments: HADOOP-10768.001.patch, HADOOP-10768.002.patch, 
> HADOOP-10768.003.patch, HADOOP-10768.004.patch, HADOOP-10768.005.patch, 
> HADOOP-10768.006.patch, HADOOP-10768.007.patch, HADOOP-10768.008.patch, 
> HADOOP-10768.009.patch, HADOOP-10768.010.patch, HADOOP-10768.011.patch, 
> Optimize Hadoop RPC encryption performance.pdf, 
> cpu_profile_RPC_encryption_AES.png, 
> cpu_profile_rpc_encryption_optimize_calculateHMAC.png
>
>
> Hadoop RPC encryption is enabled by setting {{hadoop.rpc.protection}} to 
> "privacy". It utilized SASL {{GSSAPI}} and {{DIGEST-MD5}} mechanisms for 
> secure authentication and data protection. Even {{GSSAPI}} supports using 
> AES, but without AES-NI support by default, so the encryption is slow and 
> will become bottleneck.
> After discuss with [~atm], [~tucu00] and [~umamaheswararao], we can do the 
> same optimization as in HDFS-6606. Use AES-NI with more than *20x* speedup.
> On the other hand, RPC message is small, but RPC is frequent and there may be 
> lots of RPC calls in one connection, we needs to setup benchmark to see real 
> improvement and then make a trade-off. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14199) TestFsShellList.testList fails on windows: illegal filenames

2018-06-05 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16502167#comment-16502167
 ] 

genericqa commented on HADOOP-14199:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 56s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 26m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 26m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 45s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
13s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}125m 34s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HADOOP-14199 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12926580/HADOOP-15496.000.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 748ba35f080e 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 745f3a2 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14728/testReport/ |
| Max. process+thread count | 1508 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14728/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> TestFsShellList.testList fails on windows: illegal filenames
> 
>
>  

[jira] [Commented] (HADOOP-15421) Stabilise/formalise the JSON _SUCCESS format used in the S3A committers

2018-06-05 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16502200#comment-16502200
 ] 

Steve Loughran commented on HADOOP-15421:
-

specific contents

* version marker; deser fails if missing/wrong. 
* timestamp for people and for machines
* who did the commit, where, what
* the committer name.  Maybe we should also add a full classname
* the list of files created, looks like a path with the URL schema missing. 
That's probably a bug; fix it and my spark cloud tests will fail until I update 
that code, presumably.

* metrics grabbed from the job committer. I've tried to aggregate there. For MR 
jobs, the aggregation works for all metrics where adding makes sense. For Spark 
it doesn't, because worker threads all share metrics. Really we want FS 
statistics on a thread-by-thread basis. Bear in mind, however, that this stats 
gathering was primarily because neither spark nor MR collect these things; if 
they did then anything done in the success file is irrelevant



> Stabilise/formalise the JSON _SUCCESS format used in the S3A committers
> ---
>
> Key: HADOOP-15421
> URL: https://issues.apache.org/jira/browse/HADOOP-15421
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Priority: Major
>
> the S3A committers rely on an atomic PUT to save a JSON summary of the job to 
> the dest FS, containing files, statistics, etc. This is for internal testing, 
> but it turns out to be useful for spark integration testing, Hive, etc.
> IBM's stocator also generated a manifest.
> Proposed: come up with (an extensible) design that we are happy with as a 
> long lived format.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14199) TestFsShellList.testList fails on windows: illegal filenames

2018-06-05 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HADOOP-14199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16502214#comment-16502214
 ] 

Íñigo Goiri commented on HADOOP-14199:
--

In [^HADOOP-15496.000.patch], by making bs="" in Windows we are just not 
testing anything.
This would be equivalent to just test a regular path (no special characters) in 
Windows.
Given that, I would just skip the creation of the first and third file in 
Windows:
{code}
createFile(new Path(testRootDir, "ghi"));
if (!Shell.WINDOWS) {
  createFile(new Path(testRootDir, "abc\bd\tef"));
  createFile(new Path(testRootDir, "qq\r123"));
}
{code}
Would be equivalent to what we are doing in  [^HADOOP-15496.000.patch].
We can add a comment and say that Windows does not support such characters.

Ideally we would do a test that tests the Windows special characters but I'm 
fine with this.

> TestFsShellList.testList fails on windows: illegal filenames
> 
>
> Key: HADOOP-14199
> URL: https://issues.apache.org/jira/browse/HADOOP-14199
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 2.8.0
> Environment: win64
>Reporter: Steve Loughran
>Assignee: Anbang Hu
>Priority: Minor
> Attachments: HADOOP-15496.000.patch
>
>
> {{TestFsShellList.testList}} fails setting up the files to test against
> {code}
> org.apache.hadoop.io.nativeio.NativeIOException: The filename, directory 
> name, or volume label syntax is incorrect.
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15513) Add additional test cases to cover some corner cases for FileUtil#symlink

2018-06-05 Thread Giovanni Matteo Fumarola (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated HADOOP-15513:
--
Attachment: HADOOP-15513.v2.patch

> Add additional test cases to cover some corner cases for FileUtil#symlink
> -
>
> Key: HADOOP-15513
> URL: https://issues.apache.org/jira/browse/HADOOP-15513
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
>Priority: Major
> Attachments: HADOOP-15513.v1.patch, HADOOP-15513.v2.patch
>
>
> Add additional test cases to cover some corner cases for FileUtil#symlink.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15513) Add additional test cases to cover some corner cases for FileUtil#symlink

2018-06-05 Thread Giovanni Matteo Fumarola (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16502227#comment-16502227
 ] 

Giovanni Matteo Fumarola commented on HADOOP-15513:
---

Thanks [~elgoiri] for the feedback.
I added a np check in FileUtil#Symlink since the method is a public static.

> Add additional test cases to cover some corner cases for FileUtil#symlink
> -
>
> Key: HADOOP-15513
> URL: https://issues.apache.org/jira/browse/HADOOP-15513
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
>Priority: Major
> Attachments: HADOOP-15513.v1.patch, HADOOP-15513.v2.patch
>
>
> Add additional test cases to cover some corner cases for FileUtil#symlink.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Moved] (HADOOP-15515) adl.AdlFilesystem.close() doesn't release locks on open files

2018-06-05 Thread Chris Douglas (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas moved HDFS-13344 to HADOOP-15515:
---

Affects Version/s: (was: 2.7.3)
   2.7.3
  Component/s: (was: fs/adl)
   fs/adl
  Key: HADOOP-15515  (was: HDFS-13344)
  Project: Hadoop Common  (was: Hadoop HDFS)

> adl.AdlFilesystem.close() doesn't release locks on open files
> -
>
> Key: HADOOP-15515
> URL: https://issues.apache.org/jira/browse/HADOOP-15515
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/adl
>Affects Versions: 2.7.3
> Environment: HDInsight on MS Azure:
>  
> Hadoop 2.7.3.2.6.2.25-1
> Subversion g...@github.com:hortonworks/hadoop.git -r 
> 1ceeb58bb3bb5904df0cbb7983389bcaf2ffd0b6
> Compiled by jenkins on 2017-11-29T15:28Z
> Compiled with protoc 2.5.0
> From source with checksum 90b73c4c185645c1f47b61f942230
> This command was run using 
> /usr/hdp/2.6.2.25-1/hadoop/hadoop-common-2.7.3.2.6.2.25-1.jar
>Reporter: Jay Hankinson
>Assignee: Vishwajeet Dusane
>Priority: Major
> Attachments: HDFS-13344-001.patch, HDFS-13344-002.patch
>
>
> If you write to a file on and Azure ADL filesystem and close the file system 
> but not the file before the process exits, the next time you try open the 
> file for append it fails with:
> Exception in thread "main" java.io.IOException: APPEND failed with error 
> 0x83090a16 (Failed to perform the requested operation because the file is 
> currently open in write mode by another user or process.). 
> [a67c6b32-e78b-4852-9fac-142a3e2ba963][2018-03-22T20:54:08.3520940-07:00]
>  The following moves local file to HDFS if it doesn't exist or appends it's 
> contents if it does:
>  
> {code:java}
> public void addFile(String source, String dest, Configuration conf) throws 
> IOException {
> FileSystem fileSystem = FileSystem.get(conf);
> // Get the filename out of the file path
> String filename = source.substring(source.lastIndexOf('/') + 
> 1,source.length());
> // Create the destination path including the filename.
> if (dest.charAt(dest.length() - 1) != '/')
> { dest = dest + "/" + filename; }
> else {
> dest = dest + filename;
> }
> // Check if the file already exists
> Path path = new Path(dest);
> FSDataOutputStream out;
> if (fileSystem.exists(path)) {
> System.out.println("File " + dest + " already exists appending");
> out = fileSystem.append(path);
> } else {
> out = fileSystem.create(path);
> }
> // Create a new file and write data to it.
> InputStream in = new BufferedInputStream(new FileInputStream(new File(
> source)));
> byte[] b = new byte[1024];
> int numBytes = 0;
> while ((numBytes = in.read(b)) > 0) {
> out.write(b, 0, numBytes);
> }
> // Close the file system not the file
> in.close();
> //out.close();
> fileSystem.close();
> }
> {code}
>  If "dest" is an adl:// location, invoking the function a second time (after 
> the process has exited) it raises the error. If it's a regular hdfs:// file 
> system, it doesn't as all the locks are released. The same exception is also 
> raised if a subsequent append is done using: hdfs dfs  -appendToFile.
> As I can't see a way to force lease recovery in this situation, this seems 
> like a bug. org.apache.hadoop.fs.adl.AdlFileSystem inherits close() from 
> org.apache.hadoop.fs.FileSystem
> [https://hadoop.apache.org/docs/r3.0.0/api/org/apache/hadoop/fs/adl/AdlFileSystem.html]
> Which states:
> Close this FileSystem instance. Will release any held locks. This does not 
> seem to be the case



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15515) adl.AdlFilesystem.close() doesn't release locks on open files

2018-06-05 Thread Chris Douglas (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16502310#comment-16502310
 ] 

Chris Douglas commented on HADOOP-15515:


Moving to common, as this doesn't affect HDFS. v002 seems to implement a 
pattern similar to 
[DFSClient|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java#L626].

bq. As I can't see a way to force lease recovery in this situation
As long as this is only an optimization- and a client failing to release the 
lock won't wedge the system- this looks OK. I checked the [client 
docs|https://azure.github.io/azure-data-lake-store-java/javadoc/], and it looks 
like the ADL client doesn't have a {{close}} method, so presumably it's not 
tracking open streams and this is correct.

If there's no other feedback I'll commit this soon.

> adl.AdlFilesystem.close() doesn't release locks on open files
> -
>
> Key: HADOOP-15515
> URL: https://issues.apache.org/jira/browse/HADOOP-15515
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/adl
>Affects Versions: 2.7.3
> Environment: HDInsight on MS Azure:
>  
> Hadoop 2.7.3.2.6.2.25-1
> Subversion g...@github.com:hortonworks/hadoop.git -r 
> 1ceeb58bb3bb5904df0cbb7983389bcaf2ffd0b6
> Compiled by jenkins on 2017-11-29T15:28Z
> Compiled with protoc 2.5.0
> From source with checksum 90b73c4c185645c1f47b61f942230
> This command was run using 
> /usr/hdp/2.6.2.25-1/hadoop/hadoop-common-2.7.3.2.6.2.25-1.jar
>Reporter: Jay Hankinson
>Assignee: Vishwajeet Dusane
>Priority: Major
> Attachments: HDFS-13344-001.patch, HDFS-13344-002.patch
>
>
> If you write to a file on and Azure ADL filesystem and close the file system 
> but not the file before the process exits, the next time you try open the 
> file for append it fails with:
> Exception in thread "main" java.io.IOException: APPEND failed with error 
> 0x83090a16 (Failed to perform the requested operation because the file is 
> currently open in write mode by another user or process.). 
> [a67c6b32-e78b-4852-9fac-142a3e2ba963][2018-03-22T20:54:08.3520940-07:00]
>  The following moves local file to HDFS if it doesn't exist or appends it's 
> contents if it does:
>  
> {code:java}
> public void addFile(String source, String dest, Configuration conf) throws 
> IOException {
> FileSystem fileSystem = FileSystem.get(conf);
> // Get the filename out of the file path
> String filename = source.substring(source.lastIndexOf('/') + 
> 1,source.length());
> // Create the destination path including the filename.
> if (dest.charAt(dest.length() - 1) != '/')
> { dest = dest + "/" + filename; }
> else {
> dest = dest + filename;
> }
> // Check if the file already exists
> Path path = new Path(dest);
> FSDataOutputStream out;
> if (fileSystem.exists(path)) {
> System.out.println("File " + dest + " already exists appending");
> out = fileSystem.append(path);
> } else {
> out = fileSystem.create(path);
> }
> // Create a new file and write data to it.
> InputStream in = new BufferedInputStream(new FileInputStream(new File(
> source)));
> byte[] b = new byte[1024];
> int numBytes = 0;
> while ((numBytes = in.read(b)) > 0) {
> out.write(b, 0, numBytes);
> }
> // Close the file system not the file
> in.close();
> //out.close();
> fileSystem.close();
> }
> {code}
>  If "dest" is an adl:// location, invoking the function a second time (after 
> the process has exited) it raises the error. If it's a regular hdfs:// file 
> system, it doesn't as all the locks are released. The same exception is also 
> raised if a subsequent append is done using: hdfs dfs  -appendToFile.
> As I can't see a way to force lease recovery in this situation, this seems 
> like a bug. org.apache.hadoop.fs.adl.AdlFileSystem inherits close() from 
> org.apache.hadoop.fs.FileSystem
> [https://hadoop.apache.org/docs/r3.0.0/api/org/apache/hadoop/fs/adl/AdlFileSystem.html]
> Which states:
> Close this FileSystem instance. Will release any held locks. This does not 
> seem to be the case



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14178) Move Mockito up to version 2.x

2018-06-05 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16502346#comment-16502346
 ] 

genericqa commented on HADOOP-14178:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 258 new or modified 
test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  4m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 19m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
35m 19s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-hdfs-project/hadoop-hdfs-native-client 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests 
hadoop-mapreduce-project/hadoop-mapreduce-client hadoop-mapreduce-project 
hadoop-client-modules/hadoop-client-minicluster . {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
8s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 in trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
16s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 61m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 30m 
40s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 30m 40s{color} 
| {color:red} root generated 15 new + 1487 unchanged - 0 fixed = 1502 total 
(was 1487) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
4m 47s{color} | {color:orange} root: The patch generated 1 new + 6843 unchanged 
- 90 fixed = 6844 total (was 6933) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 19m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m 
48s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 42s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-hdfs-project/hadoop-hdfs-native-client 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests 
hadoop-mapreduce-project/hadoop-mapreduce-client hadoop-mapreduce-project 
hadoop-client-modules/hadoop-client-minicluster . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 33m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}155m 24s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
41s{color} | {

[jira] [Commented] (HADOOP-15515) adl.AdlFilesystem.close() doesn't release locks on open files

2018-06-05 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16502413#comment-16502413
 ] 

genericqa commented on HADOOP-15515:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
45s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 32m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 54s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 34s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
53s{color} | {color:green} hadoop-azure-datalake in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 66m 33s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HADOOP-15515 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12924488/HDFS-13344-002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 7dd39511eab4 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 1b0d4f4 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14730/testReport/ |
| Max. process+thread count | 302 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-azure-datalake U: 
hadoop-tools/hadoop-azure-datalake |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14730/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> adl.AdlFilesystem.close() doesn't release locks on open files
> -
>
>  

[jira] [Commented] (HADOOP-15513) Add additional test cases to cover some corner cases for FileUtil#symlink

2018-06-05 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16502450#comment-16502450
 ] 

genericqa commented on HADOOP-15513:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 1s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
21s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 23m 
10s{color} | {color:red} root in trunk failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 39s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 36m 
34s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 36m 34s{color} 
| {color:red} root generated 188 new + 1299 unchanged - 0 fixed = 1487 total 
(was 1299) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 38s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
23s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
50s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}144m 37s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HADOOP-15513 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12926610/HADOOP-15513.v2.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 7d07572d0ae3 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / baebe4d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| compile | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14729/artifact/out/branch-compile-root.txt
 |
| findbugs | v3.1.0-RC1 |
| javac | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14729/artifact/out/diff-compile-javac-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14729/testReport/ |
| Max. process+thread count | 1486 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build

[jira] [Commented] (HADOOP-12550) NativeIO#renameTo on Windows cannot replace an existing file at the destination.

2018-06-05 Thread Giovanni Matteo Fumarola (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-12550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16502458#comment-16502458
 ] 

Giovanni Matteo Fumarola commented on HADOOP-12550:
---

Thanks [~ste...@apache.org] , I think this issue is already fixed in 
HADOOP-14434. We can close this Jira as duplicate.

> NativeIO#renameTo on Windows cannot replace an existing file at the 
> destination.
> 
>
> Key: HADOOP-12550
> URL: https://issues.apache.org/jira/browse/HADOOP-12550
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: native
> Environment: Windows
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Major
> Attachments: HADOOP-12550.001.patch, HADOOP-12550.002.patch
>
>
> {{NativeIO#renameTo}} currently has different semantics on Linux vs. Windows 
> if a file already exists at the destination.  On Linux, it's a passthrough to 
> the [rename|http://linux.die.net/man/2/rename] syscall, which will replace an 
> existing file at the destination.  On Windows, it's a passthrough to 
> [MoveFile|https://msdn.microsoft.com/en-us/library/windows/desktop/aa365239%28v=vs.85%29.aspx?f=255&MSPPError=-2147217396],
>  which cannot replace an existing file at the destination and instead 
> triggers an error.  The easiest way to observe this difference is to run the 
> HDFS test {{TestRollingUpgrade#testRollback}}.  This fails on Windows due to 
> a block recovery after truncate trying to replace a block at an existing 
> destination path.  This issue proposes to use 
> [MoveFileEx|https://msdn.microsoft.com/en-us/library/windows/desktop/aa365240(v=vs.85).aspx]
>  on Windows with the {{MOVEFILE_REPLACE_EXISTING}} flag.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12550) NativeIO#renameTo on Windows cannot replace an existing file at the destination.

2018-06-05 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HADOOP-12550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16502457#comment-16502457
 ] 

Íñigo Goiri commented on HADOOP-12550:
--

I think this is pretty much what we did in HADOOP-14434.

> NativeIO#renameTo on Windows cannot replace an existing file at the 
> destination.
> 
>
> Key: HADOOP-12550
> URL: https://issues.apache.org/jira/browse/HADOOP-12550
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: native
> Environment: Windows
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Major
> Attachments: HADOOP-12550.001.patch, HADOOP-12550.002.patch
>
>
> {{NativeIO#renameTo}} currently has different semantics on Linux vs. Windows 
> if a file already exists at the destination.  On Linux, it's a passthrough to 
> the [rename|http://linux.die.net/man/2/rename] syscall, which will replace an 
> existing file at the destination.  On Windows, it's a passthrough to 
> [MoveFile|https://msdn.microsoft.com/en-us/library/windows/desktop/aa365239%28v=vs.85%29.aspx?f=255&MSPPError=-2147217396],
>  which cannot replace an existing file at the destination and instead 
> triggers an error.  The easiest way to observe this difference is to run the 
> HDFS test {{TestRollingUpgrade#testRollback}}.  This fails on Windows due to 
> a block recovery after truncate trying to replace a block at an existing 
> destination path.  This issue proposes to use 
> [MoveFileEx|https://msdn.microsoft.com/en-us/library/windows/desktop/aa365240(v=vs.85).aspx]
>  on Windows with the {{MOVEFILE_REPLACE_EXISTING}} flag.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12550) NativeIO#renameTo on Windows cannot replace an existing file at the destination.

2018-06-05 Thread Giovanni Matteo Fumarola (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-12550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated HADOOP-12550:
--
Resolution: Duplicate
Status: Resolved  (was: Patch Available)

> NativeIO#renameTo on Windows cannot replace an existing file at the 
> destination.
> 
>
> Key: HADOOP-12550
> URL: https://issues.apache.org/jira/browse/HADOOP-12550
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: native
> Environment: Windows
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Major
> Attachments: HADOOP-12550.001.patch, HADOOP-12550.002.patch
>
>
> {{NativeIO#renameTo}} currently has different semantics on Linux vs. Windows 
> if a file already exists at the destination.  On Linux, it's a passthrough to 
> the [rename|http://linux.die.net/man/2/rename] syscall, which will replace an 
> existing file at the destination.  On Windows, it's a passthrough to 
> [MoveFile|https://msdn.microsoft.com/en-us/library/windows/desktop/aa365239%28v=vs.85%29.aspx?f=255&MSPPError=-2147217396],
>  which cannot replace an existing file at the destination and instead 
> triggers an error.  The easiest way to observe this difference is to run the 
> HDFS test {{TestRollingUpgrade#testRollback}}.  This fails on Windows due to 
> a block recovery after truncate trying to replace a block at an existing 
> destination path.  This issue proposes to use 
> [MoveFileEx|https://msdn.microsoft.com/en-us/library/windows/desktop/aa365240(v=vs.85).aspx]
>  on Windows with the {{MOVEFILE_REPLACE_EXISTING}} flag.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (HADOOP-12550) NativeIO#renameTo on Windows cannot replace an existing file at the destination.

2018-06-05 Thread Giovanni Matteo Fumarola (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-12550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated HADOOP-12550:
--
Comment: was deleted

(was: Thanks [~ste...@apache.org] , I think this issue is already fixed in 
HADOOP-14434. We can close this Jira as duplicate.)

> NativeIO#renameTo on Windows cannot replace an existing file at the 
> destination.
> 
>
> Key: HADOOP-12550
> URL: https://issues.apache.org/jira/browse/HADOOP-12550
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: native
> Environment: Windows
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Major
> Attachments: HADOOP-12550.001.patch, HADOOP-12550.002.patch
>
>
> {{NativeIO#renameTo}} currently has different semantics on Linux vs. Windows 
> if a file already exists at the destination.  On Linux, it's a passthrough to 
> the [rename|http://linux.die.net/man/2/rename] syscall, which will replace an 
> existing file at the destination.  On Windows, it's a passthrough to 
> [MoveFile|https://msdn.microsoft.com/en-us/library/windows/desktop/aa365239%28v=vs.85%29.aspx?f=255&MSPPError=-2147217396],
>  which cannot replace an existing file at the destination and instead 
> triggers an error.  The easiest way to observe this difference is to run the 
> HDFS test {{TestRollingUpgrade#testRollback}}.  This fails on Windows due to 
> a block recovery after truncate trying to replace a block at an existing 
> destination path.  This issue proposes to use 
> [MoveFileEx|https://msdn.microsoft.com/en-us/library/windows/desktop/aa365240(v=vs.85).aspx]
>  on Windows with the {{MOVEFILE_REPLACE_EXISTING}} flag.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14199) TestFsShellList.testList fails on windows: illegal filenames

2018-06-05 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16502475#comment-16502475
 ] 

Steve Loughran commented on HADOOP-14199:
-

bq, Ideally we would do a test that tests the Windows special characters but 
I'm fine with this.

could do, depends on how much it was felt to matter.

if there is something in paths which does matter a lot is: you can't have a 
colon in a filename, e.g "gs://store/path/logs-2018-06-05-13:48.csv". which is 
pretty much the kind of the path which google cloud storage uses when 
generating logs. But its more fundamental than the local FS, it's in Path.Now, 
windows must handle this, mustn't it, file:///C:/something, but that's probably 
the special case

> TestFsShellList.testList fails on windows: illegal filenames
> 
>
> Key: HADOOP-14199
> URL: https://issues.apache.org/jira/browse/HADOOP-14199
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 2.8.0
> Environment: win64
>Reporter: Steve Loughran
>Assignee: Anbang Hu
>Priority: Minor
> Attachments: HADOOP-15496.000.patch
>
>
> {{TestFsShellList.testList}} fails setting up the files to test against
> {code}
> org.apache.hadoop.io.nativeio.NativeIOException: The filename, directory 
> name, or volume label syntax is incorrect.
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15513) Add additional test cases to cover some corner cases for FileUtil#symlink

2018-06-05 Thread Giovanni Matteo Fumarola (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated HADOOP-15513:
--
Attachment: HADOOP-15513.v2.patch

> Add additional test cases to cover some corner cases for FileUtil#symlink
> -
>
> Key: HADOOP-15513
> URL: https://issues.apache.org/jira/browse/HADOOP-15513
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
>Priority: Major
> Attachments: HADOOP-15513.v1.patch, HADOOP-15513.v2.patch
>
>
> Add additional test cases to cover some corner cases for FileUtil#symlink.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15513) Add additional test cases to cover some corner cases for FileUtil#symlink

2018-06-05 Thread Giovanni Matteo Fumarola (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated HADOOP-15513:
--
Attachment: (was: HADOOP-15513.v2.patch)

> Add additional test cases to cover some corner cases for FileUtil#symlink
> -
>
> Key: HADOOP-15513
> URL: https://issues.apache.org/jira/browse/HADOOP-15513
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
>Priority: Major
> Attachments: HADOOP-15513.v1.patch, HADOOP-15513.v2.patch
>
>
> Add additional test cases to cover some corner cases for FileUtil#symlink.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15513) Add additional test cases to cover some corner cases for FileUtil#symlink

2018-06-05 Thread Giovanni Matteo Fumarola (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16502478#comment-16502478
 ] 

Giovanni Matteo Fumarola commented on HADOOP-15513:
---

Javac and compile do not seem related. I queued another build.

> Add additional test cases to cover some corner cases for FileUtil#symlink
> -
>
> Key: HADOOP-15513
> URL: https://issues.apache.org/jira/browse/HADOOP-15513
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
>Priority: Major
> Attachments: HADOOP-15513.v1.patch, HADOOP-15513.v2.patch
>
>
> Add additional test cases to cover some corner cases for FileUtil#symlink.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15461) Improvements over the Hadoop support with Windows

2018-06-05 Thread Giovanni Matteo Fumarola (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated HADOOP-15461:
--
Attachment: WinUtils-Functions.pdf
WinUtils.CSV

> Improvements over the Hadoop support with Windows
> -
>
> Key: HADOOP-15461
> URL: https://issues.apache.org/jira/browse/HADOOP-15461
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
>Priority: Major
> Attachments: WinUtils-Functions.pdf, WinUtils.CSV
>
>
> This Jira tracks the effort to improve the interaction between Hadoop and 
> Windows Server.
>  * Move away from an external process (winutils.exe) for native code:
>  ** Replace by native Java APIs (e.g., symlinks);
>  ** Replace by something like JNI or so;
>  * Fix the build system to fully leverage cmake instead of msbuild;
>  * Possible other improvements;
>  * Memory and handle leaks.
>  
> I did a quick investigation of the performance of WinUtils in YARN. In 
> average NM calls 4.76 times per second and 65.51 per container.
>  
> | |Requests|Requests/sec|Requests/min|Requests/container|
> |*Sum [WinUtils]*|*135354*|*4.761*|*286.160*|*65.51*|
> |[WinUtils] Execute -help|4148|0.145|8.769|2.007|
> |[WinUtils] Execute -ls|2842|0.0999|6.008|1.37|
> |[WinUtils] Execute -systeminfo|9153|0.321|19.35|4.43|
> |[WinUtils] Execute -symlink|115096|4.048|243.33|57.37|
> |[WinUtils] Execute -task isAlive|4115|0.144|8.699|2.05|
>  Interval: 7 hours, 53 minutes and 48 seconds
> Each execution of WinUtils does around *140 IO ops*, of which 130 are DDL ops.
> This means *666.58* IO ops/second due to WinUtils.
> We should start considering to remove WinUtils from Hadoop and creating a JNI 
> interface.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15461) Improvements over the Hadoop support with Windows

2018-06-05 Thread Giovanni Matteo Fumarola (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated HADOOP-15461:
--
Description: 
This Jira tracks the effort to improve the interaction between Hadoop and 
Windows Server.
 * Move away from an external process (winutils.exe) for native code:
 ** Replace by native Java APIs (e.g., symlinks);
 ** Replace by something like JNI or so;
 * Fix the build system to fully leverage cmake instead of msbuild;
 * Possible other improvements;
 * Memory and handle leaks.

 

I did a quick investigation of the performance of WinUtils in YARN. In average 
NM calls 4.76 times per second and 65.51 per container.

 
| |Requests|Requests/sec|Requests/min|Requests/container|
|*Sum [WinUtils]*|*135354*|*4.761*|*286.160*|*65.51*|
|[WinUtils] Execute -help|4148|0.145|8.769|2.007|
|[WinUtils] Execute -ls|2842|0.0999|6.008|1.37|
|[WinUtils] Execute -systeminfo|9153|0.321|19.35|4.43|
|[WinUtils] Execute -symlink|115096|4.048|243.33|57.37|
|[WinUtils] Execute -task isAlive|4115|0.144|8.699|2.05|

 Interval: 7 hours, 53 minutes and 48 seconds

Each execution of WinUtils does around *140 IO ops*, of which 130 are DDL ops.

This means *666.58* IO ops/second due to WinUtils.

We should start considering to remove WinUtils from Hadoop and creating a JNI 
interface.

  was:
This Jira tracks the effort to improve the interaction between Hadoop and 
Windows Server.
 * Move away from an external process (winutils.exe) for native code:
 ** Replace by native Java APIs (e.g., symlinks);
 ** Replace by something like JNI or so;
 * Fix the build system to fully leverage cmake instead of msbuild;
 * Possible other improvements;
 * Memory and handle leaks.


> Improvements over the Hadoop support with Windows
> -
>
> Key: HADOOP-15461
> URL: https://issues.apache.org/jira/browse/HADOOP-15461
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
>Priority: Major
> Attachments: WinUtils-Functions.pdf, WinUtils.CSV
>
>
> This Jira tracks the effort to improve the interaction between Hadoop and 
> Windows Server.
>  * Move away from an external process (winutils.exe) for native code:
>  ** Replace by native Java APIs (e.g., symlinks);
>  ** Replace by something like JNI or so;
>  * Fix the build system to fully leverage cmake instead of msbuild;
>  * Possible other improvements;
>  * Memory and handle leaks.
>  
> I did a quick investigation of the performance of WinUtils in YARN. In 
> average NM calls 4.76 times per second and 65.51 per container.
>  
> | |Requests|Requests/sec|Requests/min|Requests/container|
> |*Sum [WinUtils]*|*135354*|*4.761*|*286.160*|*65.51*|
> |[WinUtils] Execute -help|4148|0.145|8.769|2.007|
> |[WinUtils] Execute -ls|2842|0.0999|6.008|1.37|
> |[WinUtils] Execute -systeminfo|9153|0.321|19.35|4.43|
> |[WinUtils] Execute -symlink|115096|4.048|243.33|57.37|
> |[WinUtils] Execute -task isAlive|4115|0.144|8.699|2.05|
>  Interval: 7 hours, 53 minutes and 48 seconds
> Each execution of WinUtils does around *140 IO ops*, of which 130 are DDL ops.
> This means *666.58* IO ops/second due to WinUtils.
> We should start considering to remove WinUtils from Hadoop and creating a JNI 
> interface.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15513) Add additional test cases to cover some corner cases for FileUtil#symlink

2018-06-05 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16502644#comment-16502644
 ] 

genericqa commented on HADOOP-15513:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 16m 
46s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 32m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 56s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 31m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 31m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  2s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
10s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}150m 57s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HADOOP-15513 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12926637/HADOOP-15513.v2.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 77ca3db20ba5 3.13.0-137-generic #186-Ubuntu SMP Mon Dec 4 
19:09:19 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 0afc036 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14731/testReport/ |
| Max. process+thread count | 1360 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14731/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add additional test cases to cover some corner cases for FileUtil#symlink
> --

[jira] [Commented] (HADOOP-15513) Add additional test cases to cover some corner cases for FileUtil#symlink

2018-06-05 Thread Giovanni Matteo Fumarola (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16502651#comment-16502651
 ] 

Giovanni Matteo Fumarola commented on HADOOP-15513:
---

[~ste...@apache.org], [~elgoiri] can you take a look at this patch?

I tested successfully in Windows and Linux with the current code.
This patch will help to figure out if HADOOP-15465 will not break compatibility.

> Add additional test cases to cover some corner cases for FileUtil#symlink
> -
>
> Key: HADOOP-15513
> URL: https://issues.apache.org/jira/browse/HADOOP-15513
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
>Priority: Major
> Attachments: HADOOP-15513.v1.patch, HADOOP-15513.v2.patch
>
>
> Add additional test cases to cover some corner cases for FileUtil#symlink.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15513) Add additional test cases to cover some corner cases for FileUtil#symlink

2018-06-05 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HADOOP-15513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16502692#comment-16502692
 ] 

Íñigo Goiri commented on HADOOP-15513:
--

[^HADOOP-15513.v2.patch] LGTM.
We don't check for the privileged operation but not sure how to trigger that 
one from the unit test.
+1

> Add additional test cases to cover some corner cases for FileUtil#symlink
> -
>
> Key: HADOOP-15513
> URL: https://issues.apache.org/jira/browse/HADOOP-15513
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
>Priority: Major
> Attachments: HADOOP-15513.v1.patch, HADOOP-15513.v2.patch
>
>
> Add additional test cases to cover some corner cases for FileUtil#symlink.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15499) Performance severe drop when running RawErasureCoderBenchmark with NativeRSRawErasureCoder

2018-06-05 Thread SammiChen (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16493287#comment-16493287
 ] 

SammiChen edited comment on HADOOP-15499 at 6/6/18 3:07 AM:


Performance data before the patch,

bin/hadoop jar ./share/hadoop/common/hadoop-common-3.2.0-SNAPSHOT-tests.jar  
org.apache.hadoop.io.erasurecode.rawcoder.RawErasureCoderBenchmark encode 3 50 
1024 64
 Using 126MB buffer.
 ISA-L coder encode 50400MB data, with chunk size 64KB
Total time: 9.24 s.
Total throughput: 5455.73 MB/s
Threads statistics:
50 threads in total.
Min: 1.79 s, Max: 9.19 s, Avg: 6.58 s, 90th Percentile: 8.94 s.

 

Performance data after the patch,

bin/hadoop jar ./share/hadoop/common/hadoop-common-3.2.0-SNAPSHOT-tests.jar 
org.apache.hadoop.io.erasurecode.rawcoder.RawErasureCoderBenchmark encode 3 72 
10240 4096
 Using 120MB buffer.
 ISA-L coder encode 734400MB data, with chunk size 4096KB
 Total time: 8.11 s.
 Total throughput: 90521.39 MB/s
 Threads statistics:
 72 threads in total.
 Min: 6.78 s, Max: 7.93 s, Avg: 7.36 s, 90th Percentile: 7.66 s.

 

I also compared the performance data of two scenarios, one is remove all the 
synchronized key words, another is the current ReentrantReadWriteLock solution. 

The performance of ReentrantReadWriteLock solution is like less than 5% degrade 
than the remove synchronized key words case. It's acceptable for me. 

 


was (Author: sammi):
Performance data before the patch,

bin/hadoop jar ./share/hadoop/common/hadoop-common-3.2.0-SNAPSHOT-tests.jar  
org.apache.hadoop.io.erasurecode.rawcoder.RawErasureCoderBenchmark encode 3 50 
1024 64
 Using 126MB buffer.
 ISA-L coder encode 50400MB data, with chunk size 64KB
 Total time: 0.98 s.
 Total throughput: 51639.34 MB/s
 Threads statistics:
 50 threads in total.

 

Performance data after the patch,

bin/hadoop jar ./share/hadoop/common/hadoop-common-3.2.0-SNAPSHOT-tests.jar 
org.apache.hadoop.io.erasurecode.rawcoder.RawErasureCoderBenchmark encode 3 72 
10240 4096
 Using 120MB buffer.
 ISA-L coder encode 734400MB data, with chunk size 4096KB
 Total time: 8.11 s.
 Total throughput: 90521.39 MB/s
 Threads statistics:
 72 threads in total.
 Min: 6.78 s, Max: 7.93 s, Avg: 7.36 s, 90th Percentile: 7.66 s.

 

I also compared the performance data of two scenarios, one is remove all the 
synchronized key words, another is the current ReentrantReadWriteLock solution. 

The performance of ReentrantReadWriteLock solution is like less than 5% degrade 
than the remove synchronized key words case. It's acceptable for me. 

 

> Performance severe drop when running RawErasureCoderBenchmark with 
> NativeRSRawErasureCoder
> --
>
> Key: HADOOP-15499
> URL: https://issues.apache.org/jira/browse/HADOOP-15499
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 3.0.1, 3.0.2, 3.1.1
>Reporter: SammiChen
>Assignee: SammiChen
>Priority: Major
> Attachments: HADOOP-15499.001.patch
>
>
> Run RawErasureCoderBenchmark  which is a micro-benchmark to test EC codec 
> encoding/decoding performance. 
> 50 concurrency Native ISA-L coder has the less throughput than 1 concurrency 
> Native ISA-L case. It's abnormal. 
>  
> bin/hadoop jar ./share/hadoop/common/hadoop-common-3.2.0-SNAPSHOT-tests.jar 
> org.apache.hadoop.io.erasurecode.rawcoder.RawErasureCoderBenchmark encode 3 1 
> 1024 1024
> Using 126MB buffer.
> ISA-L coder encode 1008MB data, with chunk size 1024KB
> Total time: 0.19 s.
> Total throughput: 5390.37 MB/s
> Threads statistics:
> 1 threads in total.
> Min: 0.18 s, Max: 0.18 s, Avg: 0.18 s, 90th Percentile: 0.18 s.
>  
> bin/hadoop jar ./share/hadoop/common/hadoop-common-3.2.0-SNAPSHOT-tests.jar 
> org.apache.hadoop.io.erasurecode.rawcoder.RawErasureCoderBenchmark encode 3 
> 50 1024 10240
> Using 120MB buffer.
> ISA-L coder encode 54000MB data, with chunk size 10240KB
> Total time: 11.58 s.
> Total throughput: 4662 MB/s
> Threads statistics:
> 50 threads in total.
> Min: 0.55 s, Max: 11.5 s, Avg: 6.32 s, 90th Percentile: 10.45 s.
>  
> RawErasureCoderBenchmark shares a single coder between all concurrent 
> threads. While 
> NativeRSRawEncoder and NativeRSRawDecoder has synchronized key work on 
> doDecode and doEncode function. So 50 concurrent threads are forced to use 
> the shared coder encode/decode function one by one. 
>  
> To resolve the issue, there are two approaches. 
>  # Refactor RawErasureCoderBenchmark  to use dedicated coder for each 
> concurrent thread.
>  # Refactor NativeRSRawEncoder  and NativeRSRawDecoder  to get better 
> concurrency.  Since the synchronized key work is to try to protect the 
> private variable nativeCoder from being checked in doEncode/doDecode and  
> being modified in

[jira] [Updated] (HADOOP-15499) Performance severe drop when running RawErasureCoderBenchmark with NativeRSRawErasureCoder

2018-06-05 Thread SammiChen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

SammiChen updated HADOOP-15499:
---
Attachment: HADOOP-15499.002.patch

> Performance severe drop when running RawErasureCoderBenchmark with 
> NativeRSRawErasureCoder
> --
>
> Key: HADOOP-15499
> URL: https://issues.apache.org/jira/browse/HADOOP-15499
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 3.0.1, 3.0.2, 3.1.1
>Reporter: SammiChen
>Assignee: SammiChen
>Priority: Major
> Attachments: HADOOP-15499.001.patch, HADOOP-15499.002.patch
>
>
> Run RawErasureCoderBenchmark  which is a micro-benchmark to test EC codec 
> encoding/decoding performance. 
> 50 concurrency Native ISA-L coder has the less throughput than 1 concurrency 
> Native ISA-L case. It's abnormal. 
>  
> bin/hadoop jar ./share/hadoop/common/hadoop-common-3.2.0-SNAPSHOT-tests.jar 
> org.apache.hadoop.io.erasurecode.rawcoder.RawErasureCoderBenchmark encode 3 1 
> 1024 1024
> Using 126MB buffer.
> ISA-L coder encode 1008MB data, with chunk size 1024KB
> Total time: 0.19 s.
> Total throughput: 5390.37 MB/s
> Threads statistics:
> 1 threads in total.
> Min: 0.18 s, Max: 0.18 s, Avg: 0.18 s, 90th Percentile: 0.18 s.
>  
> bin/hadoop jar ./share/hadoop/common/hadoop-common-3.2.0-SNAPSHOT-tests.jar 
> org.apache.hadoop.io.erasurecode.rawcoder.RawErasureCoderBenchmark encode 3 
> 50 1024 10240
> Using 120MB buffer.
> ISA-L coder encode 54000MB data, with chunk size 10240KB
> Total time: 11.58 s.
> Total throughput: 4662 MB/s
> Threads statistics:
> 50 threads in total.
> Min: 0.55 s, Max: 11.5 s, Avg: 6.32 s, 90th Percentile: 10.45 s.
>  
> RawErasureCoderBenchmark shares a single coder between all concurrent 
> threads. While 
> NativeRSRawEncoder and NativeRSRawDecoder has synchronized key work on 
> doDecode and doEncode function. So 50 concurrent threads are forced to use 
> the shared coder encode/decode function one by one. 
>  
> To resolve the issue, there are two approaches. 
>  # Refactor RawErasureCoderBenchmark  to use dedicated coder for each 
> concurrent thread.
>  # Refactor NativeRSRawEncoder  and NativeRSRawDecoder  to get better 
> concurrency.  Since the synchronized key work is to try to protect the 
> private variable nativeCoder from being checked in doEncode/doDecode and  
> being modified in release.  We can use reentrantReadWriteLock to increase the 
> concurrency since doEncode/doDecode can be called multiple times without 
> change the nativeCoder state.
>  I prefer approach 2 and will upload a patch later. 
>  
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15499) Performance severe drop when running RawErasureCoderBenchmark with NativeRSRawErasureCoder

2018-06-05 Thread SammiChen (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16502796#comment-16502796
 ] 

SammiChen commented on HADOOP-15499:


Thanks [~xiaochen] for the review and comments.  A new patch is uploaded after 
addressed all issues. 

> Performance severe drop when running RawErasureCoderBenchmark with 
> NativeRSRawErasureCoder
> --
>
> Key: HADOOP-15499
> URL: https://issues.apache.org/jira/browse/HADOOP-15499
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 3.0.1, 3.0.2, 3.1.1
>Reporter: SammiChen
>Assignee: SammiChen
>Priority: Major
> Attachments: HADOOP-15499.001.patch, HADOOP-15499.002.patch
>
>
> Run RawErasureCoderBenchmark  which is a micro-benchmark to test EC codec 
> encoding/decoding performance. 
> 50 concurrency Native ISA-L coder has the less throughput than 1 concurrency 
> Native ISA-L case. It's abnormal. 
>  
> bin/hadoop jar ./share/hadoop/common/hadoop-common-3.2.0-SNAPSHOT-tests.jar 
> org.apache.hadoop.io.erasurecode.rawcoder.RawErasureCoderBenchmark encode 3 1 
> 1024 1024
> Using 126MB buffer.
> ISA-L coder encode 1008MB data, with chunk size 1024KB
> Total time: 0.19 s.
> Total throughput: 5390.37 MB/s
> Threads statistics:
> 1 threads in total.
> Min: 0.18 s, Max: 0.18 s, Avg: 0.18 s, 90th Percentile: 0.18 s.
>  
> bin/hadoop jar ./share/hadoop/common/hadoop-common-3.2.0-SNAPSHOT-tests.jar 
> org.apache.hadoop.io.erasurecode.rawcoder.RawErasureCoderBenchmark encode 3 
> 50 1024 10240
> Using 120MB buffer.
> ISA-L coder encode 54000MB data, with chunk size 10240KB
> Total time: 11.58 s.
> Total throughput: 4662 MB/s
> Threads statistics:
> 50 threads in total.
> Min: 0.55 s, Max: 11.5 s, Avg: 6.32 s, 90th Percentile: 10.45 s.
>  
> RawErasureCoderBenchmark shares a single coder between all concurrent 
> threads. While 
> NativeRSRawEncoder and NativeRSRawDecoder has synchronized key work on 
> doDecode and doEncode function. So 50 concurrent threads are forced to use 
> the shared coder encode/decode function one by one. 
>  
> To resolve the issue, there are two approaches. 
>  # Refactor RawErasureCoderBenchmark  to use dedicated coder for each 
> concurrent thread.
>  # Refactor NativeRSRawEncoder  and NativeRSRawDecoder  to get better 
> concurrency.  Since the synchronized key work is to try to protect the 
> private variable nativeCoder from being checked in doEncode/doDecode and  
> being modified in release.  We can use reentrantReadWriteLock to increase the 
> concurrency since doEncode/doDecode can be called multiple times without 
> change the nativeCoder state.
>  I prefer approach 2 and will upload a patch later. 
>  
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15217) FsUrlConnection does not handle paths with spaces

2018-06-05 Thread Xiao Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-15217:
---
Summary: FsUrlConnection does not handle paths with spaces  (was: 
org.apache.hadoop.fs.FsUrlConnection does not handle paths with spaces)

> FsUrlConnection does not handle paths with spaces
> -
>
> Key: HADOOP-15217
> URL: https://issues.apache.org/jira/browse/HADOOP-15217
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.0
>Reporter: Joseph Fourny
>Assignee: Zsolt Venczel
>Priority: Major
> Attachments: HADOOP-15217.01.patch, HADOOP-15217.02.patch, 
> HADOOP-15217.03.patch, HADOOP-15217.04.patch, HADOOP-15217.05.patch, 
> HADOOP-15217.06.patch, TestCase.java
>
>
> When _FsUrlStreamHandlerFactory_ is registered with _java.net.URL_ (ex: when 
> Spark is initialized), it breaks URLs with spaces (even though they are 
> properly URI-encoded). I traced the problem down to 
> _FSUrlConnection.connect()_ method. It naively gets the path from the URL, 
> which contains encoded spaces, and pases it to 
> _org.apache.hadoop.fs.Path(String)_ constructor. This is not correct, because 
> the docs clearly say that the string must NOT be encoded. Doing so causes 
> double encoding within the Path class (ie: %20 becomes %2520). 
> See attached JUnit test. 
> This test case mimics an issue I ran into when trying to use Commons 
> Configuration 1.9 AFTER initializing Spark. Commons Configuration uses URL 
> class to load configuration files, but Spark installs 
> _FsUrlStreamHandlerFactory_, which hits this issue. For now, we are using an 
> AspectJ aspect to "patch" the bytecode at load time to work-around the issue. 
> The real fix is quite simple. All you need to do is replace this line in 
> _org.apache.hadoop.fs.FsUrlConnection.connect()_:
>         is = fs.open(new Path(url.getPath()));
> with this line:
>      is = fs.open(new Path(url.*toUri()*.getPath()));
> URI.getPath() will correctly decode the path, which is what is expected by 
> _org.apache.hadoop.fs.Path(String)_ constructor.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15217) FsUrlConnection does not handle paths with spaces

2018-06-05 Thread Xiao Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16502807#comment-16502807
 ] 

Xiao Chen commented on HADOOP-15217:


+1

> FsUrlConnection does not handle paths with spaces
> -
>
> Key: HADOOP-15217
> URL: https://issues.apache.org/jira/browse/HADOOP-15217
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.0
>Reporter: Joseph Fourny
>Assignee: Zsolt Venczel
>Priority: Major
> Attachments: HADOOP-15217.01.patch, HADOOP-15217.02.patch, 
> HADOOP-15217.03.patch, HADOOP-15217.04.patch, HADOOP-15217.05.patch, 
> HADOOP-15217.06.patch, TestCase.java
>
>
> When _FsUrlStreamHandlerFactory_ is registered with _java.net.URL_ (ex: when 
> Spark is initialized), it breaks URLs with spaces (even though they are 
> properly URI-encoded). I traced the problem down to 
> _FSUrlConnection.connect()_ method. It naively gets the path from the URL, 
> which contains encoded spaces, and pases it to 
> _org.apache.hadoop.fs.Path(String)_ constructor. This is not correct, because 
> the docs clearly say that the string must NOT be encoded. Doing so causes 
> double encoding within the Path class (ie: %20 becomes %2520). 
> See attached JUnit test. 
> This test case mimics an issue I ran into when trying to use Commons 
> Configuration 1.9 AFTER initializing Spark. Commons Configuration uses URL 
> class to load configuration files, but Spark installs 
> _FsUrlStreamHandlerFactory_, which hits this issue. For now, we are using an 
> AspectJ aspect to "patch" the bytecode at load time to work-around the issue. 
> The real fix is quite simple. All you need to do is replace this line in 
> _org.apache.hadoop.fs.FsUrlConnection.connect()_:
>         is = fs.open(new Path(url.getPath()));
> with this line:
>      is = fs.open(new Path(url.*toUri()*.getPath()));
> URI.getPath() will correctly decode the path, which is what is expected by 
> _org.apache.hadoop.fs.Path(String)_ constructor.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15217) FsUrlConnection does not handle paths with spaces

2018-06-05 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16502808#comment-16502808
 ] 

Hudson commented on HADOOP-15217:
-

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #14369 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14369/])
HADOOP-15217. FsUrlConnection does not handle paths with spaces. (xiao: rev 
ba4011d64fadef3bee5920ccedbcdac01794cc23)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/test/java/org/apache/hadoop/fs/TestUrlStreamHandlerFactory.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FsUrlConnection.java


> FsUrlConnection does not handle paths with spaces
> -
>
> Key: HADOOP-15217
> URL: https://issues.apache.org/jira/browse/HADOOP-15217
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.0
>Reporter: Joseph Fourny
>Assignee: Zsolt Venczel
>Priority: Major
> Attachments: HADOOP-15217.01.patch, HADOOP-15217.02.patch, 
> HADOOP-15217.03.patch, HADOOP-15217.04.patch, HADOOP-15217.05.patch, 
> HADOOP-15217.06.patch, TestCase.java
>
>
> When _FsUrlStreamHandlerFactory_ is registered with _java.net.URL_ (ex: when 
> Spark is initialized), it breaks URLs with spaces (even though they are 
> properly URI-encoded). I traced the problem down to 
> _FSUrlConnection.connect()_ method. It naively gets the path from the URL, 
> which contains encoded spaces, and pases it to 
> _org.apache.hadoop.fs.Path(String)_ constructor. This is not correct, because 
> the docs clearly say that the string must NOT be encoded. Doing so causes 
> double encoding within the Path class (ie: %20 becomes %2520). 
> See attached JUnit test. 
> This test case mimics an issue I ran into when trying to use Commons 
> Configuration 1.9 AFTER initializing Spark. Commons Configuration uses URL 
> class to load configuration files, but Spark installs 
> _FsUrlStreamHandlerFactory_, which hits this issue. For now, we are using an 
> AspectJ aspect to "patch" the bytecode at load time to work-around the issue. 
> The real fix is quite simple. All you need to do is replace this line in 
> _org.apache.hadoop.fs.FsUrlConnection.connect()_:
>         is = fs.open(new Path(url.getPath()));
> with this line:
>      is = fs.open(new Path(url.*toUri()*.getPath()));
> URI.getPath() will correctly decode the path, which is what is expected by 
> _org.apache.hadoop.fs.Path(String)_ constructor.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15217) FsUrlConnection does not handle paths with spaces

2018-06-05 Thread Xiao Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-15217:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.1.1
   3.2.0
   3.0.4
   Status: Resolved  (was: Patch Available)

Committed this to trunk and branch-3.[0-1]. Thanks [~josephfourny] for the 
report and initial code, and [~zvenczel] for pushing this through the finish 
line!

> FsUrlConnection does not handle paths with spaces
> -
>
> Key: HADOOP-15217
> URL: https://issues.apache.org/jira/browse/HADOOP-15217
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.0
>Reporter: Joseph Fourny
>Assignee: Zsolt Venczel
>Priority: Major
> Fix For: 3.0.4, 3.2.0, 3.1.1
>
> Attachments: HADOOP-15217.01.patch, HADOOP-15217.02.patch, 
> HADOOP-15217.03.patch, HADOOP-15217.04.patch, HADOOP-15217.05.patch, 
> HADOOP-15217.06.patch, TestCase.java
>
>
> When _FsUrlStreamHandlerFactory_ is registered with _java.net.URL_ (ex: when 
> Spark is initialized), it breaks URLs with spaces (even though they are 
> properly URI-encoded). I traced the problem down to 
> _FSUrlConnection.connect()_ method. It naively gets the path from the URL, 
> which contains encoded spaces, and pases it to 
> _org.apache.hadoop.fs.Path(String)_ constructor. This is not correct, because 
> the docs clearly say that the string must NOT be encoded. Doing so causes 
> double encoding within the Path class (ie: %20 becomes %2520). 
> See attached JUnit test. 
> This test case mimics an issue I ran into when trying to use Commons 
> Configuration 1.9 AFTER initializing Spark. Commons Configuration uses URL 
> class to load configuration files, but Spark installs 
> _FsUrlStreamHandlerFactory_, which hits this issue. For now, we are using an 
> AspectJ aspect to "patch" the bytecode at load time to work-around the issue. 
> The real fix is quite simple. All you need to do is replace this line in 
> _org.apache.hadoop.fs.FsUrlConnection.connect()_:
>         is = fs.open(new Path(url.getPath()));
> with this line:
>      is = fs.open(new Path(url.*toUri()*.getPath()));
> URI.getPath() will correctly decode the path, which is what is expected by 
> _org.apache.hadoop.fs.Path(String)_ constructor.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15514) NoClassDefFoundError of TimelineCollectorManager when starting MiniCluster

2018-06-05 Thread Jeff Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16502827#comment-16502827
 ] 

Jeff Zhang commented on HADOOP-15514:
-

This patch works for zeppelin which use hadoop for integration test. Thanks 
[~rohithsharma]

> NoClassDefFoundError of TimelineCollectorManager when starting MiniCluster
> --
>
> Key: HADOOP-15514
> URL: https://issues.apache.org/jira/browse/HADOOP-15514
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Jeff Zhang
>Assignee: Rohith Sharma K S
>Priority: Major
> Attachments: HADOOP-15514.01.patch
>
>
> {code:java}
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException: 
> java.lang.NoClassDefFoundError: 
> org/apache/hadoop/yarn/server/timelineservice/collector/TimelineCollectorManager
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   at java.lang.ClassLoader.defineClass1(Native Method)
>   at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
>   at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
>   at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
>   at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   at java.lang.Class.getDeclaredMethods0(Native Method)
>   at java.lang.Class.privateGetDeclaredMethods(Class.java:2701)
>   at java.lang.Class.getDeclaredMethods(Class.java:1975){code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15499) Performance severe drop when running RawErasureCoderBenchmark with NativeRSRawErasureCoder

2018-06-05 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16502843#comment-16502843
 ] 

genericqa commented on HADOOP-15499:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 30s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m  0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 31m 28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 32m 21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 49s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 56s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 26m 57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 26m 57s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  0m 49s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch generated 2 new + 8 unchanged - 0 fixed = 10 total (was 8) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m  0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 33s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 45s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 36s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}133m 14s{color} | {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HADOOP-15499 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12926675/HADOOP-15499.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 57b78540a312 3.13.0-137-generic #186-Ubuntu SMP Mon Dec 4 19:09:19 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 0afc036 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
| checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/14732/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt |
|  Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/14732/testReport/ |
| Max. process+thread count | 1464 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/14732/console |
| Powered by | Apache Yetus http://yetus.apache.org |
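For context on what this issue's benchmark measures: it times repeated encode calls over fixed-size byte chunks, which makes JIT warm-up a real factor in any Java-versus-native comparison. Below is a bare-bones sketch of that timing pattern; performEncode is a hypothetical stand-in workload, not the Hadoop RawErasureEncoder API that the real RawErasureCoderBenchmark drives.

{code:java}
// A minimal timing-harness sketch, assuming a stand-in workload.
import java.util.concurrent.ThreadLocalRandom;

public class EncodeTimingSketch {

  // Hypothetical placeholder for one encode call over a data chunk.
  static long performEncode(byte[] chunk) {
    long acc = 0;
    for (byte b : chunk) {
      acc += b;
    }
    return acc;
  }

  public static void main(String[] args) {
    byte[] chunk = new byte[64 * 1024];
    ThreadLocalRandom.current().nextBytes(chunk);

    long sink = 0;
    // Warm up so the JIT has compiled the hot path before we measure.
    for (int i = 0; i < 5_000; i++) {
      sink += performEncode(chunk);
    }

    int rounds = 20_000;
    long start = System.nanoTime();
    for (int i = 0; i < rounds; i++) {
      sink += performEncode(chunk);
    }
    double elapsedSec = (System.nanoTime() - start) / 1e9;

    double mb = rounds * (chunk.length / (1024.0 * 1024.0));
    // Print the sink so the JIT cannot dead-code-eliminate the loops.
    System.out.printf("sink=%d: %.0f MB in %.3f s (%.1f MB/s)%n",
        sink, mb, elapsedSec, mb / elapsedSec);
  }
}
{code}

Separating warm-up from the measured loop matters here, since JIT effects are exactly the kind of noise that can distort a native-versus-pure-Java coder comparison; a real run should also hold chunk size and client count fixed, as coder throughput typically varies with both.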

[jira] [Commented] (HADOOP-15483) Upgrade jquery to version 3.3.1

2018-06-05 Thread Rohith Sharma K S (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16502873#comment-16502873
 ] 

Rohith Sharma K S commented on HADOOP-15483:


I did basic testing with this patch and found that it breaks the YARN scheduler queue page. The patch needs to be revisited for the YARN UI.

> Upgrade jquery to version 3.3.1
> ---
>
> Key: HADOOP-15483
> URL: https://issues.apache.org/jira/browse/HADOOP-15483
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HADOOP-15483.001.patch, HADOOP-15483.002.patch, 
> HADOOP-15483.003.patch, HADOOP-15483.004.patch, HADOOP-15483.005.patch, 
> HADOOP-15483.006.patch
>
>
> This Jira aims to upgrade jquery to version 3.3.1.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org