[jira] [Commented] (HADOOP-12424) Add a function to build unique cache key for Token.

2015-09-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14876823#comment-14876823
 ] 

Hadoop QA commented on HADOOP-12424:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  17m 41s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   8m  5s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 25s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 24s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m 13s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 33s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 35s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   1m 54s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | common tests |  23m 41s | Tests passed in 
hadoop-common. |
| | |  65m 35s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12761228/HADOOP-12424.001.patch 
|
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 94dec5a |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7679/artifact/patchprocess/testrun_hadoop-common.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7679/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf908.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7679/console |


This message was automatically generated.

> Add a function to build unique cache key for Token.
> ---
>
> Key: HADOOP-12424
> URL: https://issues.apache.org/jira/browse/HADOOP-12424
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>Priority: Critical
> Attachments: HADOOP-12424.001.patch
>
>
> HDFS-8855 needs a facility function from Token to build a unique cache key.
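
As a minimal sketch, such a helper might combine the token's kind, service, and encoded identifier so that distinct tokens map to distinct keys. The method name and encoding below are chosen for illustration only (assuming {{org.apache.hadoop.security.token.Token}} and {{java.util.Base64}} are imported); the actual patch may differ:

{code}
// Hypothetical helper, not the patch itself: derive a cache key that is
// unique per (kind, service, identifier) triple of the token.
public static String buildCacheKey(Token<?> token) {
  return token.getKind() + "_" + token.getService() + "_"
      + Base64.getEncoder().encodeToString(token.getIdentifier());
}
{code}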



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11918) Listing an empty s3a root directory throws FileNotFound.

2015-09-18 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14876732#comment-14876732
 ] 

Chris Nauroth commented on HADOOP-11918:


Thank you, [~eddyxu] and [~Thomas Demoor].  Patch v003 looks good to me aside 
from a few minor nit-picks.

{code}
for (FileStatus status : statuses) {
 assertEquals("Could not remove files from root", true,
 fs.delete(status.getPath(), true));
}
{code}

Indentation is slightly off for the {{assertEquals}} line inside the loop.

{code}
if (LOG.isDebugEnabled()) {
  LOG.debug("Found root directory");
}
{code}

There is no need to check {{isDebugEnabled}}, because there is no expensive 
string concatenation logic to build the log message.  It's just a string 
literal.
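
For contrast, a small generic sketch (not code from the patch; it assumes a {{statuses}} array in scope and {{java.util.Arrays}} imported) of when the guard is worth keeping:

{code}
// A string literal needs no guard; the debug() call is cheap when disabled.
LOG.debug("Found root directory");

// A guard pays off only when building the message itself is expensive.
if (LOG.isDebugEnabled()) {
  LOG.debug("Root listing: " + Arrays.toString(statuses));
}
{code}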

I ran {{TestS3AContractRootDir}} against my own testing S3 bucket, and it 
passed.  It appears earlier feedback has been addressed too.

I'll be +1 after the nit-picks are fixed.  [~ste...@apache.org], please let us 
know if you have any other thoughts on the patch.
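
For readers following along, the shape of the fix might resemble the special case below. This is a sketch under assumed names ({{newDirectoryStatus}} is a hypothetical helper), not the actual patch; {{Path#isRoot}} is the real Hadoop API:

{code}
// Hypothetical sketch: a bucket root always exists as a directory, even when
// the bucket contains no keys, so return a directory status instead of
// probing S3 and throwing FileNotFoundException.
if (path.isRoot()) {
  return newDirectoryStatus(path); // assumed helper, for illustration
}
{code}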

> Listing an empty s3a root directory throws FileNotFound.
> 
>
> Key: HADOOP-11918
> URL: https://issues.apache.org/jira/browse/HADOOP-11918
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 2.7.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Minor
>  Labels: BB2015-05-TBR, s3
> Attachments: HADOOP-11918-002.patch, HADOOP-11918-003.patch, 
> HADOOP-11918.000.patch, HADOOP-11918.001.patch
>
>
> With an empty s3 bucket, run:
> {code}
> $ hadoop fs -D... -ls s3a://hdfs-s3a-test/
> 15/05/04 15:21:34 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> ls: `s3a://hdfs-s3a-test/': No such file or directory
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12404) Disable caching for JarURLConnection to avoid sharing JarFile with other users when loading resource from URL in Configuration class.

2015-09-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14876714#comment-14876714
 ] 

Hudson commented on HADOOP-12404:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #393 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/393/])
HADOOP-12404. Disable caching for JarURLConnection to avoid sharing JarFile 
with other users when loading resource from URL in Configuration class. 
Contributed by Zhihai Xu (zxu: rev 88d89267ff6b66e144bfcceb09532191975f2a4a)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
* hadoop-common-project/hadoop-common/CHANGES.txt
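
The change described in the summary amounts to opting out of the JDK's shared JarFile cache when reading a jar-backed resource. A minimal sketch of that idea follows (assumed shape, not the literal diff; it uses only standard {{java.net}}/{{java.io}} APIs):

{code}
// Sketch: open the connection explicitly so caching can be disabled before
// the stream is created; a jar: URL would otherwise hand back a JarFile
// shared with every other reader of the same URL.
URLConnection connection = url.openConnection();
if (connection instanceof JarURLConnection) {
  connection.setUseCaches(false);
}
InputStream in = connection.getInputStream();
{code}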


> Disable caching for JarURLConnection to avoid sharing JarFile with other 
> users when loading resource from URL in Configuration class.
> -
>
> Key: HADOOP-12404
> URL: https://issues.apache.org/jira/browse/HADOOP-12404
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Reporter: zhihai xu
>Assignee: zhihai xu
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-12404.000.patch
>
>
> Disable caching for JarURLConnection to avoid sharing JarFile with other 
> users when loading resource from URL in Configuration class.
> Currently {{Configuration#parse}} will call {{url.openStream}} to get the 
> InputStream for {{DocumentBuilder}} to parse.
> Based on the JDK source code, the calling sequence is:
> url.openStream =>
> [handler.openConnection.getInputStream|http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/6-b14/sun/net/www/protocol/jar/Handler.java] =>
> [new JarURLConnection|http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/6-b14/sun/net/www/protocol/jar/JarURLConnection.java#JarURLConnection] =>
> JarURLConnection.connect =>
> [factory.get(getJarFileURL(), getUseCaches())|http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/6-b14/sun/net/www/protocol/jar/JarFileFactory.java] =>
> [URLJarFile.getInputStream|http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/6-b14/sun/net/www/protocol/jar/URLJarFile.java#URLJarFile.getJarFile%28java.net.URL%2Csun.net.www.protocol.jar.URLJarFile.URLJarFileCloseController%29] =>
> [JarFile.getInputStream|http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/6-b14/java/util/jar/JarFile.java#JarFile.getInputStream%28java.util.zip.ZipEntry%29] =>
> ZipFile.getInputStream
> If {{URLConnection#getUseCaches}} is true (the default), the URLJarFile will be 
> shared for the same URL. If the shared URLJarFile is closed by another user, 
> all the InputStreams returned by URLJarFile#getInputStream will be closed, per the 
> [documentation|http://docs.oracle.com/javase/7/docs/api/java/util/zip/ZipFile.html#getInputStream(java.util.zip.ZipEntry)].
> So, in a heavily loaded system, we occasionally saw the following exception, 
> which caused a Hive job to fail:
> {code}
> 2014-10-21 23:44:41,856 ERROR org.apache.hadoop.hive.ql.exec.Task: Ended 
> Job = job_1413909398487_3696 with exception 
> 'java.lang.RuntimeException(java.io.IOException: Stream closed)' 
> java.lang.RuntimeException: java.io.IOException: Stream closed 
> at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2484) 
> at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2337) 
> at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2254) 
> at org.apache.hadoop.conf.Configuration.get(Configuration.java:861) 
> at org.apache.hadoop.mapred.JobConf.checkAndWarnDeprecation(JobConf.java:2030) 
> at org.apache.hadoop.mapred.JobConf.<init>(JobConf.java:479) 
> at org.apache.hadoop.mapred.JobConf.<init>(JobConf.java:469) 
> at org.apache.hadoop.mapreduce.Cluster.getJob(Cluster.java:187) 
> at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:582) 
> at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:580) 
> at java.security.AccessController.doPrivileged(Native Method) 
> at javax.security.auth.Subject.doAs(Subject.java:415) 
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614) 
> at org.apache.hadoop.mapred.JobClient.getJobUsingCluster(JobClient.java:580) 
> at org.apache.hadoop.mapred.JobClient.getJob(JobClient.java:598) 
> at org.apache.hadoop.hive.ql.exec.mr.HadoopJobExecHelper.progress(HadoopJobExecHelper.java:288) 
> at org.apache.hadoop.hive.ql.exec.mr.HadoopJobExecHelper.progress(HadoopJobExecHelper.java:547) 
> at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:426) 
> at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:136) 
> at org.apache.hadoop.hive.ql.exec.

[jira] [Updated] (HADOOP-12424) Add a function to build unique cache key for Token.

2015-09-18 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HADOOP-12424:
---
Status: Patch Available  (was: Open)

> Add a function to build unique cache key for Token.
> ---
>
> Key: HADOOP-12424
> URL: https://issues.apache.org/jira/browse/HADOOP-12424
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>Priority: Critical
> Attachments: HADOOP-12424.001.patch
>
>
> HDFS-8855 needs a facility function from Token to build a unique cache key.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12424) Add a function to build unique cache key for Token.

2015-09-18 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14876701#comment-14876701
 ] 

Xiaobing Zhou commented on HADOOP-12424:


Made patch V1. [~jnp] / [~vinodkv], could you review it? Thanks.

> Add a function to build unique cache key for Token.
> ---
>
> Key: HADOOP-12424
> URL: https://issues.apache.org/jira/browse/HADOOP-12424
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>Priority: Critical
> Attachments: HADOOP-12424.001.patch
>
>
> HDFS-8855 needs a facility function from Token to build a unique cache key.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12424) Add a function to build unique cache key for Token.

2015-09-18 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HADOOP-12424:
---
Attachment: HADOOP-12424.001.patch

> Add a function to build unique cache key for Token.
> ---
>
> Key: HADOOP-12424
> URL: https://issues.apache.org/jira/browse/HADOOP-12424
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>Priority: Critical
> Attachments: HADOOP-12424.001.patch
>
>
> HDFS-8855 needs a facility function from Token to build a unique cache key.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12424) Add a function to build unique cache key for Token.

2015-09-18 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HADOOP-12424:
---
Priority: Critical  (was: Major)

> Add a function to build unique cache key for Token.
> ---
>
> Key: HADOOP-12424
> URL: https://issues.apache.org/jira/browse/HADOOP-12424
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>Priority: Critical
>
> HDFS-8855 needs a facility function from Token to build a unique cache key.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12424) Add a function to build unique cache key for Token.

2015-09-18 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HADOOP-12424:
---
Summary: Add a function to build unique cache key for Token.  (was: Add a 
function to build unique cache key for Token)

> Add a function to build unique cache key for Token.
> ---
>
> Key: HADOOP-12424
> URL: https://issues.apache.org/jira/browse/HADOOP-12424
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>
> HDFS-8855 needs a facility function from Token to build a unique cache key.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12424) Add a function to build unique cache key for Token

2015-09-18 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HADOOP-12424:
---
Description: HDFS-8855 needs facility function from Token to build unique 
cache key.  (was: HDFS-8855 needs facility function from Token to b)

> Add a function to build unique cache key for Token
> --
>
> Key: HADOOP-12424
> URL: https://issues.apache.org/jira/browse/HADOOP-12424
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>
> HDFS-8855 needs facility function from Token to build unique cache key.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12424) Add a function to build unique cache key for Token

2015-09-18 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HADOOP-12424:
---
Description: HDFS-8855 needs facility function from Token to b

> Add a function to build unique cache key for Token
> --
>
> Key: HADOOP-12424
> URL: https://issues.apache.org/jira/browse/HADOOP-12424
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>
> HDFS-8855 needs facility function from Token to b



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12424) Add a function to build unique cache key for Token

2015-09-18 Thread Xiaobing Zhou (JIRA)
Xiaobing Zhou created HADOOP-12424:
--

 Summary: Add a function to build unique cache key for Token
 Key: HADOOP-12424
 URL: https://issues.apache.org/jira/browse/HADOOP-12424
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Xiaobing Zhou
Assignee: Xiaobing Zhou






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12404) Disable caching for JarURLConnection to avoid sharing JarFile with other users when loading resource from URL in Configuration class.

2015-09-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14876660#comment-14876660
 ] 

Hudson commented on HADOOP-12404:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #2331 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2331/])
HADOOP-12404. Disable caching for JarURLConnection to avoid sharing JarFile 
with other users when loading resource from URL in Configuration class. 
Contributed by Zhihai Xu (zxu: rev 88d89267ff6b66e144bfcceb09532191975f2a4a)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> Disable caching for JarURLConnection to avoid sharing JarFile with other 
> users when loading resource from URL in Configuration class.
> -
>
> Key: HADOOP-12404
> URL: https://issues.apache.org/jira/browse/HADOOP-12404
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Reporter: zhihai xu
>Assignee: zhihai xu
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-12404.000.patch
>

[jira] [Commented] (HADOOP-11628) SPNEGO auth does not work with CNAMEs in JDK8

2015-09-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14876229#comment-14876229
 ] 

Hadoop QA commented on HADOOP-11628:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  18m 36s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   9m 10s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  11m 32s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 25s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   0m 24s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 38s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 36s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   0m 47s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | common tests |  13m 40s | Tests passed in 
hadoop-auth. |
| | |  56m 52s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12700519/HADOOP-11628.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 88d89267 |
| hadoop-auth test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7678/artifact/patchprocess/testrun_hadoop-auth.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7678/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf905.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7678/console |


This message was automatically generated.

> SPNEGO auth does not work with CNAMEs in JDK8
> -
>
> Key: HADOOP-11628
> URL: https://issues.apache.org/jira/browse/HADOOP-11628
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Blocker
>  Labels: jdk8
> Attachments: HADOOP-11628.patch
>
>
> Pre-JDK8, GSSName auto-canonicalized the hostname when constructing the 
> principal for SPNEGO. JDK8 no longer does this, which breaks the use of 
> user-friendly CNAMEs for services.
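
A sketch of the canonicalization the report says JDK8 stopped doing (illustrative only; the attached patch may take a different route; {{hostname}} is an assumed variable, and {{java.net.InetAddress}} is the standard API used):

{code}
// Resolve a user-friendly CNAME to its canonical host before constructing
// the HTTP/<host> service principal used for SPNEGO.
String canonicalHost = InetAddress.getByName(hostname).getCanonicalHostName();
String principal = "HTTP/" + canonicalHost;
{code}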



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12404) Disable caching for JarURLConnection to avoid sharing JarFile with other users when loading resource from URL in Configuration class.

2015-09-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14876206#comment-14876206
 ] 

Hudson commented on HADOOP-12404:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk #1151 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1151/])
HADOOP-12404. Disable caching for JarURLConnection to avoid sharing JarFile 
with other users when loading resource from URL in Configuration class. 
Contributed by Zhihai Xu (zxu: rev 88d89267ff6b66e144bfcceb09532191975f2a4a)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java


> Disable caching for JarURLConnection to avoid sharing JarFile with other 
> users when loading resource from URL in Configuration class.
> -
>
> Key: HADOOP-12404
> URL: https://issues.apache.org/jira/browse/HADOOP-12404
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Reporter: zhihai xu
>Assignee: zhihai xu
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-12404.000.patch
>

[jira] [Commented] (HADOOP-12404) Disable caching for JarURLConnection to avoid sharing JarFile with other users when loading resource from URL in Configuration class.

2015-09-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14876193#comment-14876193
 ] 

Hudson commented on HADOOP-12404:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2357 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2357/])
HADOOP-12404. Disable caching for JarURLConnection to avoid sharing JarFile 
with other users when loading resource from URL in Configuration class. 
Contributed by Zhihai Xu (zxu: rev 88d89267ff6b66e144bfcceb09532191975f2a4a)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> Disable caching for JarURLConnection to avoid sharing JarFile with other 
> users when loading resource from URL in Configuration class.
> -
>
> Key: HADOOP-12404
> URL: https://issues.apache.org/jira/browse/HADOOP-12404
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Reporter: zhihai xu
>Assignee: zhihai xu
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-12404.000.patch
>

[jira] [Commented] (HADOOP-11364) [Java 8] Over usage of virtual memory

2015-09-18 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14876169#comment-14876169
 ] 

Steve Loughran commented on HADOOP-11364:
-

Set in yarn-site and read by the resource manager: it's a cluster-wide policy.
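
For reference, the cluster-wide knobs usually meant here are the two standard YARN properties below. The property names are real YARN configuration; whether to relax the ratio or disable the check is a per-cluster judgment call, not something this thread decides:

{code}
<!-- yarn-site.xml -->
<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>4.0</value> <!-- default 2.1, matching the "2.1 GB virtual" in the error -->
</property>
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value> <!-- disables the virtual-memory check entirely -->
</property>
{code}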

> [Java 8] Over usage of virtual memory
> -
>
> Key: HADOOP-11364
> URL: https://issues.apache.org/jira/browse/HADOOP-11364
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Mohammad Kamrul Islam
>Assignee: Mohammad Kamrul Islam
>
> In our Hadoop 2 + Java 8 effort, we found a few jobs being killed by Hadoop 
> due to excessive virtual memory allocation, although the physical memory 
> usage is low.
> The most common error message is "Container [pid=??,containerID=container_??] 
> is running beyond virtual memory limits. Current usage: 365.1 MB of 1 GB 
> physical memory used; 3.2 GB of 2.1 GB virtual memory used. Killing 
> container."
> We see this problem for MR jobs as well as in Spark drivers/executors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12404) Disable caching for JarURLConnection to avoid sharing JarFile with other users when loading resource from URL in Configuration class.

2015-09-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14876135#comment-14876135
 ] 

Hudson commented on HADOOP-12404:
-

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #419 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/419/])
HADOOP-12404. Disable caching for JarURLConnection to avoid sharing JarFile 
with other users when loading resource from URL in Configuration class. 
Contributed by Zhihai Xu (zxu: rev 88d89267ff6b66e144bfcceb09532191975f2a4a)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> Disable caching for JarURLConnection to avoid sharing JarFile with other 
> users when loading resource from URL in Configuration class.
> -
>
> Key: HADOOP-12404
> URL: https://issues.apache.org/jira/browse/HADOOP-12404
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Reporter: zhihai xu
>Assignee: zhihai xu
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-12404.000.patch
>

[jira] [Commented] (HADOOP-12404) Disable caching for JarURLConnection to avoid sharing JarFile with other users when loading resource from URL in Configuration class.

2015-09-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14876128#comment-14876128
 ] 

Hudson commented on HADOOP-12404:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #411 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/411/])
HADOOP-12404. Disable caching for JarURLConnection to avoid sharing JarFile 
with other users when loading resource from URL in Configuration class. 
Contributed by Zhihai Xu (zxu: rev 88d89267ff6b66e144bfcceb09532191975f2a4a)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> Disable caching for JarURLConnection to avoid sharing JarFile with other 
> users when loading resource from URL in Configuration class.
> -
>
> Key: HADOOP-12404
> URL: https://issues.apache.org/jira/browse/HADOOP-12404
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Reporter: zhihai xu
>Assignee: zhihai xu
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-12404.000.patch
>

[jira] [Updated] (HADOOP-11628) SPNEGO auth does not work with CNAMEs in JDK8

2015-09-18 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-11628:

Status: Patch Available  (was: Open)

> SPNEGO auth does not work with CNAMEs in JDK8
> -
>
> Key: HADOOP-11628
> URL: https://issues.apache.org/jira/browse/HADOOP-11628
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Blocker
>  Labels: jdk8
> Attachments: HADOOP-11628.patch
>
>
> Pre-JDK8, GSSName auto-canonicalized the hostname when constructing the 
> principal for SPNEGO. JDK8 no longer does this, which breaks the use of 
> user-friendly CNAMEs for services.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12404) Disable caching for JarURLConnection to avoid sharing JarFile with other users when loading resource from URL in Configuration class.

2015-09-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14876052#comment-14876052
 ] 

Hudson commented on HADOOP-12404:
-

FAILURE: Integrated in Hadoop-trunk-Commit #8485 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8485/])
HADOOP-12404. Disable caching for JarURLConnection to avoid sharing JarFile 
with other users when loading resource from URL in Configuration class. 
Contributed by Zhihai Xu (zxu: rev 88d89267ff6b66e144bfcceb09532191975f2a4a)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java


> Disable caching for JarURLConnection to avoid sharing JarFile with other 
> users when loading resource from URL in Configuration class.
> -
>
> Key: HADOOP-12404
> URL: https://issues.apache.org/jira/browse/HADOOP-12404
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Reporter: zhihai xu
>Assignee: zhihai xu
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-12404.000.patch
>

[jira] [Updated] (HADOOP-12404) Disable caching for JarURLConnection to avoid sharing JarFile with other users when loading resource from URL in Configuration class.

2015-09-18 Thread zhihai xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhihai xu updated HADOOP-12404:
---
   Resolution: Fixed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

> Disable caching for JarURLConnection to avoid sharing JarFile with other 
> users when loading resource from URL in Configuration class.
> -
>
> Key: HADOOP-12404
> URL: https://issues.apache.org/jira/browse/HADOOP-12404
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Reporter: zhihai xu
>Assignee: zhihai xu
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-12404.000.patch
>

[jira] [Commented] (HADOOP-12404) Disable caching for JarURLConnection to avoid sharing JarFile with other users when loading resource from URL in Configuration class.

2015-09-18 Thread zhihai xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14876031#comment-14876031
 ] 

zhihai xu commented on HADOOP-12404:


Committed it to branch-2 and trunk! Thanks [~asuresh] for the review!

> Disable caching for JarURLConnection to avoid sharing JarFile with other 
> users when loading resource from URL in Configuration class.
> -
>
> Key: HADOOP-12404
> URL: https://issues.apache.org/jira/browse/HADOOP-12404
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Reporter: zhihai xu
>Assignee: zhihai xu
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-12404.000.patch
>
>
> Disable caching for JarURLConnection to avoid sharing JarFile with other 
> users when loading resource from URL in Configuration class.
> Currently {{Configuration#parse}} will call {{url.openStream}} to get the 
> InputStream for {{DocumentBuilder}} to parse.
> Based on the JDK source code, the calling sequence is 
> url.openStream => 
> [handler.openConnection.getInputStream|http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/6-b14/sun/net/www/protocol/jar/Handler.java]
>  => [new 
> JarURLConnection|http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/6-b14/sun/net/www/protocol/jar/JarURLConnection.java#JarURLConnection]
>  => JarURLConnection.connect => [factory.get(getJarFileURL(), 
> getUseCaches())|http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/6-b14/sun/net/www/protocol/jar/JarFileFactory.java]
>  =>  
> [URLJarFile.getInputStream|http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/6-b14/sun/net/www/protocol/jar/URLJarFile.java#URLJarFile.getJarFile%28java.net.URL%2Csun.net.www.protocol.jar.URLJarFile.URLJarFileCloseController%29]=>[JarFile.getInputStream|http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/6-b14/java/util/jar/JarFile.java#JarFile.getInputStream%28java.util.zip.ZipEntry%29]=>ZipFile.getInputStream
> If {{URLConnection#getUseCaches}} is true (the default), the URLJarFile will 
> be shared for the same URL. If the shared URLJarFile is closed by another 
> user, all the InputStreams returned by URLJarFile#getInputStream will be 
> closed, per the 
> [documentation|http://docs.oracle.com/javase/7/docs/api/java/util/zip/ZipFile.html#getInputStream(java.util.zip.ZipEntry)].
> So we saw the following exception on a heavily loaded system, in rare 
> situations, which caused a Hive job to fail:
> {code}
> 2014-10-21 23:44:41,856 ERROR org.apache.hadoop.hive.ql.exec.Task: Ended Job = job_1413909398487_3696 with exception 'java.lang.RuntimeException(java.io.IOException: Stream closed)'
> java.lang.RuntimeException: java.io.IOException: Stream closed
> at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2484)
> at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2337)
> at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2254)
> at org.apache.hadoop.conf.Configuration.get(Configuration.java:861)
> at org.apache.hadoop.mapred.JobConf.checkAndWarnDeprecation(JobConf.java:2030)
> at org.apache.hadoop.mapred.JobConf.<init>(JobConf.java:479)
> at org.apache.hadoop.mapred.JobConf.<init>(JobConf.java:469)
> at org.apache.hadoop.mapreduce.Cluster.getJob(Cluster.java:187)
> at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:582)
> at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:580)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
> at org.apache.hadoop.mapred.JobClient.getJobUsingCluster(JobClient.java:580)
> at org.apache.hadoop.mapred.JobClient.getJob(JobClient.java:598)
> at org.apache.hadoop.hive.ql.exec.mr.HadoopJobExecHelper.progress(HadoopJobExecHelper.java:288)
> at org.apache.hadoop.hive.ql.exec.mr.HadoopJobExecHelper.progress(HadoopJobExecHelper.java:547)
> at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:426)
> at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:136)
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:153)
> at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:85)
> at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1516)
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1283)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1101)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:924)
> at org.apache.hadoop.hive.ql.Driver

[jira] [Commented] (HADOOP-12321) Make JvmPauseMonitor to AbstractService

2015-09-18 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14875875#comment-14875875
 ] 

Sunil G commented on HADOOP-12321:
--

I suspect that here, too, the hadoop-common jar being picked up is not the 
latest; hence JvmPauseMonitor.init() failed for the HDFS test cases. A clean 
build is needed; I will kick Jenkins later.

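For orientation, here is a minimal sketch of the proposed shape, assuming 
Hadoop's {{org.apache.hadoop.service.AbstractService}} lifecycle API; the 
class name and the monitoring body are illustrative, not the attached patch:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.service.AbstractService;

public class PauseMonitorService extends AbstractService {
  private Thread monitorThread;
  private volatile boolean running;

  public PauseMonitorService() {
    super("PauseMonitorService");
  }

  @Override
  protected void serviceInit(Configuration conf) throws Exception {
    // read warn/info pause thresholds from conf here
    super.serviceInit(conf);
  }

  @Override
  protected void serviceStart() throws Exception {
    running = true;
    monitorThread = new Thread(new Runnable() {
      @Override
      public void run() {
        while (running) {
          // sleep, then compare elapsed wall-clock time against the sleep
          // interval to detect GC/VM pauses (measurement omitted here)
          try {
            Thread.sleep(500);
          } catch (InterruptedException e) {
            return;
          }
        }
      }
    }, "JvmPauseMonitor");
    monitorThread.setDaemon(true);
    monitorThread.start();
    super.serviceStart();
  }

  @Override
  protected void serviceStop() throws Exception {
    running = false;
    if (monitorThread != null) {
      monitorThread.interrupt();
      monitorThread.join();
    }
    super.serviceStop();
  }
}
{code}

A {{CompositeService}} parent could then hook the monitor up simply by calling 
{{addService(new PauseMonitorService())}} in its constructor.
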
> Make JvmPauseMonitor to AbstractService
> ---
>
> Key: HADOOP-12321
> URL: https://issues.apache.org/jira/browse/HADOOP-12321
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Sunil G
> Attachments: 0001-HADOOP-12321.patch, 0002-HADOOP-12321.patch, 
> 0004-HADOOP-12321.patch, HADOOP-12321-003.patch, 
> HADOOP-12321-005-aggregated.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> The new JVM pause monitor has been written with its own start/stop lifecycle 
> which has already proven brittle to ordering of operations and which, even 
> after HADOOP-12313, is not thread safe (both start and stop are potentially 
> re-entrant).
> It also requires every class which supports the monitor to add another field 
> and perform the lifecycle operations in its own lifecycle, which, for all 
> YARN services, is the YARN app lifecycle (as implemented in Hadoop common).
> Making the monitor a subclass of {{AbstractService}} and moving the 
> init/start and stop operations into the {{serviceInit()}}, {{serviceStart()}} 
> and {{serviceStop()}} methods will fix the concurrency and state-model 
> issues, and will make it trivial to add the monitor as a child of any YARN 
> service that subclasses {{CompositeService}} (most of the NM and RM apps): 
> such a service can hook up the monitor simply by creating one in the 
> constructor and adding it as a child.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12360) Create StatsD metrics2 sink

2015-09-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14875800#comment-14875800
 ] 

Hadoop QA commented on HADOOP-12360:


\\
\\
| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  16m 51s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 42s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 55s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 24s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m  5s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 28s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   1m 52s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | common tests |  22m 35s | Tests passed in 
hadoop-common. |
| | |  62m 29s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12761119/HADOOP-12360.010.patch 
|
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 2ff6faf |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7677/artifact/patchprocess/testrun_hadoop-common.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7677/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7677/console |


This message was automatically generated.

> Create StatsD metrics2 sink
> ---
>
> Key: HADOOP-12360
> URL: https://issues.apache.org/jira/browse/HADOOP-12360
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: metrics
>Affects Versions: 2.7.1
>Reporter: Dave Marion
>Assignee: Dave Marion
>Priority: Minor
> Attachments: HADOOP-12360.001.patch, HADOOP-12360.002.patch, 
> HADOOP-12360.003.patch, HADOOP-12360.004.patch, HADOOP-12360.005.patch, 
> HADOOP-12360.006.patch, HADOOP-12360.007.patch, HADOOP-12360.008.patch, 
> HADOOP-12360.009.patch, HADOOP-12360.010.patch
>
>
> Create a metrics sink that pushes to a StatsD daemon.

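As background for HADOOP-12360: a metrics2 sink implements 
{{org.apache.hadoop.metrics2.MetricsSink}} ({{init}}, {{putMetrics}}, 
{{flush}}). An illustrative sketch of a StatsD gauge sink over UDP follows; 
the {{SimpleStatsDSink}} class and its "host"/"port" keys are assumptions made 
for this sketch, not the attached patch:

{code}
import java.io.IOException;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetSocketAddress;
import java.net.SocketException;
import java.nio.charset.StandardCharsets;

import org.apache.commons.configuration.SubsetConfiguration;
import org.apache.hadoop.metrics2.AbstractMetric;
import org.apache.hadoop.metrics2.MetricsRecord;
import org.apache.hadoop.metrics2.MetricsSink;

public class SimpleStatsDSink implements MetricsSink {
  private InetSocketAddress statsd;
  private DatagramSocket socket;

  @Override
  public void init(SubsetConfiguration conf) {
    // the "host" and "port" keys are assumptions made for this sketch
    statsd = new InetSocketAddress(
        conf.getString("host", "127.0.0.1"), conf.getInt("port", 8125));
    try {
      socket = new DatagramSocket();
    } catch (SocketException e) {
      throw new RuntimeException("Could not open UDP socket", e);
    }
  }

  @Override
  public void putMetrics(MetricsRecord record) {
    for (AbstractMetric metric : record.metrics()) {
      // one StatsD gauge line per metric: <record>.<metric>:<value>|g
      String line =
          record.name() + "." + metric.name() + ":" + metric.value() + "|g";
      byte[] payload = line.getBytes(StandardCharsets.UTF_8);
      try {
        socket.send(new DatagramPacket(payload, payload.length, statsd));
      } catch (IOException e) {
        // UDP is fire-and-forget; a real sink would count and log drops
      }
    }
  }

  @Override
  public void flush() {
    // nothing is buffered in this sketch
  }
}
{code}

Such a sink would then be wired in through hadoop-metrics2.properties (for 
example {{*.sink.statsd.class=SimpleStatsDSink}} plus the host/port keys 
above); the property-file mechanism is standard metrics2, though the class 
name here is made up.
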


--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12321) Make JvmPauseMonitor to AbstractService

2015-09-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14875795#comment-14875795
 ] 

Hadoop QA commented on HADOOP-12321:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  24m 46s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 3 new or modified test files. |
| {color:green}+1{color} | javac |   7m 57s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m  4s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 24s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   4m 21s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  1s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 38s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 32s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |  10m 19s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | common tests |  22m 47s | Tests passed in 
hadoop-common. |
| {color:red}-1{color} | mapreduce tests |   5m 49s | Tests failed in 
hadoop-mapreduce-client-hs. |
| {color:green}+1{color} | yarn tests |   3m  9s | Tests passed in 
hadoop-yarn-server-applicationhistoryservice. |
| {color:green}+1{color} | yarn tests |   7m 38s | Tests passed in 
hadoop-yarn-server-nodemanager. |
| {color:red}-1{color} | yarn tests |  54m 25s | Tests failed in 
hadoop-yarn-server-resourcemanager. |
| {color:green}+1{color} | yarn tests |   0m 23s | Tests passed in 
hadoop-yarn-server-web-proxy. |
| {color:red}-1{color} | hdfs tests |  75m  2s | Tests failed in hadoop-hdfs. |
| {color:red}-1{color} | hdfs tests |   0m 13s | Tests failed in 
hadoop-hdfs-nfs. |
| | | 229m 34s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.mapreduce.v2.hs.TestJobHistoryParsing |
|   | hadoop.mapreduce.v2.hs.webapp.dao.TestJobInfo |
|   | hadoop.mapreduce.v2.hs.TestJobHistoryEntities |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.fair.TestAllocationFileLoaderService
 |
|   | hadoop.hdfs.server.namenode.ha.TestXAttrsWithHA |
|   | hadoop.hdfs.server.namenode.ha.TestFailureOfSharedDir |
|   | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
|   | hadoop.hdfs.server.namenode.TestEditLogRace |
|   | hadoop.hdfs.TestDFSRename |
|   | hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithNodeGroup |
|   | hadoop.hdfs.TestDatanodeConfig |
|   | hadoop.hdfs.server.namenode.TestQuotaByStorageType |
|   | hadoop.hdfs.server.namenode.snapshot.TestSnapshotFileLength |
|   | hadoop.hdfs.server.namenode.TestBlockPlacementPolicyRackFaultTolerant |
|   | hadoop.hdfs.server.namenode.TestXAttrConfigFlag |
|   | hadoop.hdfs.TestBlockReaderFactory |
|   | hadoop.hdfs.TestDFSClientRetries |
|   | hadoop.hdfs.server.namenode.TestFSImage |
|   | hadoop.hdfs.TestFSOutputSummer |
|   | hadoop.hdfs.TestFileCreationDelete |
|   | hadoop.hdfs.TestSnapshotCommands |
|   | hadoop.hdfs.TestFileConcurrentReader |
|   | hadoop.hdfs.server.namenode.TestCheckpoint |
|   | hadoop.hdfs.TestFileAppend2 |
|   | hadoop.hdfs.tools.TestDFSAdmin |
|   | hadoop.hdfs.TestBlockStoragePolicy |
|   | hadoop.hdfs.web.TestWebHdfsWithAuthenticationFilter |
|   | hadoop.hdfs.server.namenode.TestNNStorageRetentionFunctional |
|   | hadoop.hdfs.server.namenode.TestMetaSave |
|   | hadoop.hdfs.TestDFSRemove |
|   | hadoop.hdfs.TestLeaseRecovery |
|   | hadoop.hdfs.TestClientProtocolForPipelineRecovery |
|   | hadoop.hdfs.tools.TestDelegationTokenFetcher |
|   | hadoop.tracing.TestTraceAdmin |
|   | hadoop.hdfs.server.namenode.TestGenericJournalConf |
|   | hadoop.hdfs.TestWriteConfigurationToDFS |
|   | hadoop.hdfs.tools.TestStoragePolicyCommands |
|   | hadoop.hdfs.server.namenode.TestNNThroughputBenchmark |
|   | hadoop.hdfs.TestReplication |
|   | hadoop.hdfs.server.namenode.snapshot.TestNestedSnapshots |
|   | hadoop.hdfs.server.blockmanagement.TestComputeInvalidateWork |
|   | hadoop.hdfs.server.namenode.TestAclConfigFlag |
|   | hadoop.hdfs.TestPipelines |
|   | hadoop.hdfs.tools.TestGetGroups |
|   | hadoop.hdfs.TestFileCreationClient |
|   | hadoop.hdfs.TestParallelShortCircuitRead |
|   | hadoop.hdfs.server.namenode.ha.TestHAFsck |
|   | hadoop.hdfs.server.blockmanagement.TestHeartbeatHandling |
|   | hadoop.hdfs.TestRenameWhileOpen |
|   | hadoop.hdfs.server.blockmanagement.TestSequentialBlockId |
|   | hadoop.hdfs.TestBlocksScheduledCounter |
|   | hadoop.hdfs.TestRollingUpgradeRollback |

[jira] [Commented] (HADOOP-11684) S3a to use thread pool that blocks clients

2015-09-18 Thread Thomas Demoor (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14875712#comment-14875712
 ] 

Thomas Demoor commented on HADOOP-11684:


As said previously, I agree with your points above. I propose using 
CallerRunsPolicy (I'll add a patch); we can always move to the current patch 
later if threaded writers come up.

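A minimal sketch of the CallerRunsPolicy direction, with illustrative pool and 
queue sizes rather than s3a's actual configuration:

{code}
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ThrottlingPoolDemo {
  public static void main(String[] args) throws InterruptedException {
    // Bounded queue + CallerRunsPolicy: once the queue is full, execute()
    // runs the task on the submitting thread, pausing the submitter until
    // capacity frees up, instead of throwing RejectedExecutionException.
    ThreadPoolExecutor pool = new ThreadPoolExecutor(
        4, 4, 60L, TimeUnit.SECONDS,
        new ArrayBlockingQueue<Runnable>(8),
        new ThreadPoolExecutor.CallerRunsPolicy());

    for (int i = 0; i < 100; i++) {
      final int part = i;
      pool.execute(new Runnable() {
        @Override
        public void run() {
          System.out.println("uploading part " + part);
        }
      });
    }
    pool.shutdown();
    pool.awaitTermination(1, TimeUnit.MINUTES);
  }
}
{code}
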


> S3a to use thread pool that blocks clients
> --
>
> Key: HADOOP-11684
> URL: https://issues.apache.org/jira/browse/HADOOP-11684
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.0
>Reporter: Thomas Demoor
>Assignee: Thomas Demoor
> Attachments: HADOOP-11684-001.patch, HADOOP-11684-002.patch
>
>
> Currently, if fs.s3a.max.total.tasks are queued and another (part)upload 
> wants to start, a RejectedExecutionException is thrown. 
> We should use a thread pool that blocks clients, nicely throttling them, 
> rather than throwing an exception. For instance, something similar to 
> https://github.com/apache/incubator-s4/blob/master/subprojects/s4-comm/src/main/java/org/apache/s4/comm/staging/BlockingThreadPoolExecutorService.java



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12111) [Umbrella] Split test-patch off into its own TLP

2015-09-18 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14875709#comment-14875709
 ] 

Sean Busbey commented on HADOOP-12111:
--

I considered that, but then the JIRA ID for things already in the repo won't 
match. The JIRA API might still give us the correct information given the old 
ID?

How about I move some open ones first, so that we can see what releasedocmaker 
says about them?

> [Umbrella] Split test-patch off into its own TLP
> 
>
> Key: HADOOP-12111
> URL: https://issues.apache.org/jira/browse/HADOOP-12111
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Allen Wittenauer
>
> Given test-patch's tendency to get forked into a variety of different 
> projects, it makes a lot of sense to make an Apache TLP so that everyone can 
> benefit from a common code base.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12360) Create StatsD metrics2 sink

2015-09-18 Thread Dave Marion (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dave Marion updated HADOOP-12360:
-
Status: Patch Available  (was: Open)

> Create StatsD metrics2 sink
> ---
>
> Key: HADOOP-12360
> URL: https://issues.apache.org/jira/browse/HADOOP-12360
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: metrics
>Affects Versions: 2.7.1
>Reporter: Dave Marion
>Assignee: Dave Marion
>Priority: Minor
> Attachments: HADOOP-12360.001.patch, HADOOP-12360.002.patch, 
> HADOOP-12360.003.patch, HADOOP-12360.004.patch, HADOOP-12360.005.patch, 
> HADOOP-12360.006.patch, HADOOP-12360.007.patch, HADOOP-12360.008.patch, 
> HADOOP-12360.009.patch, HADOOP-12360.010.patch
>
>
> Create a metrics sink that pushes to a StatsD daemon.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12360) Create StatsD metrics2 sink

2015-09-18 Thread Dave Marion (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dave Marion updated HADOOP-12360:
-
Attachment: HADOOP-12360.010.patch

fix checkstyle issues

> Create StatsD metrics2 sink
> ---
>
> Key: HADOOP-12360
> URL: https://issues.apache.org/jira/browse/HADOOP-12360
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: metrics
>Affects Versions: 2.7.1
>Reporter: Dave Marion
>Assignee: Dave Marion
>Priority: Minor
> Attachments: HADOOP-12360.001.patch, HADOOP-12360.002.patch, 
> HADOOP-12360.003.patch, HADOOP-12360.004.patch, HADOOP-12360.005.patch, 
> HADOOP-12360.006.patch, HADOOP-12360.007.patch, HADOOP-12360.008.patch, 
> HADOOP-12360.009.patch, HADOOP-12360.010.patch
>
>
> Create a metrics sink that pushes to a StatsD daemon.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12360) Create StatsD metrics2 sink

2015-09-18 Thread Dave Marion (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dave Marion updated HADOOP-12360:
-
Status: Open  (was: Patch Available)

checkstyle issues

> Create StatsD metrics2 sink
> ---
>
> Key: HADOOP-12360
> URL: https://issues.apache.org/jira/browse/HADOOP-12360
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: metrics
>Affects Versions: 2.7.1
>Reporter: Dave Marion
>Assignee: Dave Marion
>Priority: Minor
> Attachments: HADOOP-12360.001.patch, HADOOP-12360.002.patch, 
> HADOOP-12360.003.patch, HADOOP-12360.004.patch, HADOOP-12360.005.patch, 
> HADOOP-12360.006.patch, HADOOP-12360.007.patch, HADOOP-12360.008.patch, 
> HADOOP-12360.009.patch
>
>
> Create a metrics sink that pushes to a StatsD daemon.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12420) While trying to access Amazon S3 through hadoop-aws(Spark basically) I was getting Exception in thread "main" java.lang.NoSuchMethodError: com.amazonaws.services.s3.t

2015-09-18 Thread Thomas Demoor (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14875693#comment-14875693
 ] 

Thomas Demoor commented on HADOOP-12420:


OK, I see now that it was backported to branch-2 already; disregard my 
previous comment.

> While trying to access Amazon S3 through hadoop-aws(Spark basically) I was 
> getting Exception in thread "main" java.lang.NoSuchMethodError: 
> com.amazonaws.services.s3.transfer.TransferManagerConfiguration.setMultipartUploadThreshold(I)V
> --
>
> Key: HADOOP-12420
> URL: https://issues.apache.org/jira/browse/HADOOP-12420
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.7.1
>Reporter: Tariq Mohammad
>Assignee: Tariq Mohammad
>Priority: Minor
>
> While trying to access data stored in Amazon S3 through Apache Spark, which 
> internally uses the hadoop-aws jar, I was getting the following exception:
> Exception in thread "main" java.lang.NoSuchMethodError: 
> com.amazonaws.services.s3.transfer.TransferManagerConfiguration.setMultipartUploadThreshold(I)V
> The probable reason is that the AWS Java SDK expects a long parameter for the 
> setMultipartUploadThreshold(long multiPartThreshold) method, but hadoop-aws 
> was passing a parameter of type int (multiPartThreshold). 
> I tried both the downloaded hadoop-aws jar and the build through its Maven 
> dependency, but in both cases I encountered the same exception. Although I 
> can see private long multiPartThreshold; in the hadoop-aws GitHub repo, it is 
> not reflected in the downloaded jar or in the jar created from the Maven 
> dependency.
> The following lines in the S3AFileSystem class create this difference:
> Build from trunk: 
> private long multiPartThreshold;
> this.multiPartThreshold = conf.getLong("fs.s3a.multipart.threshold", 
> 2147483647L); => Line 267
> Build through Maven dependency: 
> private int multiPartThreshold;
> multiPartThreshold = conf.getInt(MIN_MULTIPART_THRESHOLD, 
> DEFAULT_MIN_MULTIPART_THRESHOLD); => Line 249



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12420) While trying to access Amazon S3 through hadoop-aws(Spark basically) I was getting Exception in thread "main" java.lang.NoSuchMethodError: com.amazonaws.services.s3.t

2015-09-18 Thread Thomas Demoor (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14875689#comment-14875689
 ] 

Thomas Demoor commented on HADOOP-12420:


I'm not sure I understand the issue completely. Spark with hadoop-2.7.1 and 
aws-java-sdk 1.7.4 should work. The upgrade to aws-sdk-s3 1.10.6 is only in 
hadoop-trunk. Are you building Spark against hadoop-trunk yourself? With 
hadoop-provided (http://spark.apache.org/docs/latest/building-spark.html)? From 
the error it seems you have the old Hadoop code but the updated aws-sdk. 

[~ste...@apache.org], backporting HADOOP-12269 fixes some bugs on the AWS side, 
such as MultipartThreshold->long, but also more serious ones (HADOOP-12267). 
These might be serious enough to justify the backport to branch-2.

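To make the mismatch concrete: as the description below indicates, hadoop-aws 
2.7.x calls an {{int}}-taking {{setMultipartUploadThreshold}}, while the newer 
AWS SDK declares only the {{long}} overload, so the stale call site fails at 
runtime with {{NoSuchMethodError ...(I)V}}. A sketch of the trunk-style 
long-valued read, mirroring the snippet quoted below ({{ThresholdConfig}} is 
an illustrative wrapper):

{code}
import org.apache.hadoop.conf.Configuration;

public class ThresholdConfig {
  // Long-valued read, matching the SDK's long parameter; the key and
  // default mirror the trunk snippet quoted in the issue description.
  public static long multipartThreshold(Configuration conf) {
    return conf.getLong("fs.s3a.multipart.threshold", 2147483647L);
  }
}
{code}
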
> While trying to access Amazon S3 through hadoop-aws(Spark basically) I was 
> getting Exception in thread "main" java.lang.NoSuchMethodError: 
> com.amazonaws.services.s3.transfer.TransferManagerConfiguration.setMultipartUploadThreshold(I)V
> --
>
> Key: HADOOP-12420
> URL: https://issues.apache.org/jira/browse/HADOOP-12420
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.7.1
>Reporter: Tariq Mohammad
>Assignee: Tariq Mohammad
>Priority: Minor
>
> While trying to access data stored in Amazon S3 through Apache Spark, which 
> internally uses the hadoop-aws jar, I was getting the following exception:
> Exception in thread "main" java.lang.NoSuchMethodError: 
> com.amazonaws.services.s3.transfer.TransferManagerConfiguration.setMultipartUploadThreshold(I)V
> The probable reason is that the AWS Java SDK expects a long parameter for the 
> setMultipartUploadThreshold(long multiPartThreshold) method, but hadoop-aws 
> was passing a parameter of type int (multiPartThreshold). 
> I tried both the downloaded hadoop-aws jar and the build through its Maven 
> dependency, but in both cases I encountered the same exception. Although I 
> can see private long multiPartThreshold; in the hadoop-aws GitHub repo, it is 
> not reflected in the downloaded jar or in the jar created from the Maven 
> dependency.
> The following lines in the S3AFileSystem class create this difference:
> Build from trunk: 
> private long multiPartThreshold;
> this.multiPartThreshold = conf.getLong("fs.s3a.multipart.threshold", 
> 2147483647L); => Line 267
> Build through Maven dependency: 
> private int multiPartThreshold;
> multiPartThreshold = conf.getInt(MIN_MULTIPART_THRESHOLD, 
> DEFAULT_MIN_MULTIPART_THRESHOLD); => Line 249



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12360) Create StatsD metrics2 sink

2015-09-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14875665#comment-14875665
 ] 

Hadoop QA commented on HADOOP-12360:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  16m 47s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 43s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 58s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m  4s | The applied patch generated  2 
new checkstyle issues (total was 0, now 2). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 27s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   1m 51s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | common tests |  23m  0s | Tests passed in 
hadoop-common. |
| | |  62m 51s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12759800/HADOOP-12360.009.patch 
|
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 92c1af1 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HADOOP-Build/7676/artifact/patchprocess/diffcheckstylehadoop-common.txt
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7676/artifact/patchprocess/testrun_hadoop-common.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7676/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7676/console |


This message was automatically generated.

> Create StatsD metrics2 sink
> ---
>
> Key: HADOOP-12360
> URL: https://issues.apache.org/jira/browse/HADOOP-12360
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: metrics
>Affects Versions: 2.7.1
>Reporter: Dave Marion
>Assignee: Dave Marion
>Priority: Minor
> Attachments: HADOOP-12360.001.patch, HADOOP-12360.002.patch, 
> HADOOP-12360.003.patch, HADOOP-12360.004.patch, HADOOP-12360.005.patch, 
> HADOOP-12360.006.patch, HADOOP-12360.007.patch, HADOOP-12360.008.patch, 
> HADOOP-12360.009.patch
>
>
> Create a metrics sink that pushes to a StatsD daemon.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-9657) NetUtils.wrapException to have special handling for 0.0.0.0 addresses and :0 ports

2015-09-18 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14875630#comment-14875630
 ] 

Varun Saxena commented on HADOOP-9657:
--

Thanks a lot Steve.
Will update.

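A minimal sketch of the proposed check, assuming it sits inside an 
exception-wrapping helper such as {{NetUtils.wrapException}}; the helper name 
and message text below are illustrative:

{code}
import java.net.InetAddress;
import java.net.UnknownHostException;

public class EndpointDiagnostics {
  /** @return a hint if the endpoint looks unconfigured, else null. */
  static String diagnose(String host, int port) {
    boolean wildcardHost;
    try {
      // isAnyLocalAddress() matches the 0.0.0.0 and :: wildcard addresses
      wildcardHost = InetAddress.getByName(host).isAnyLocalAddress();
    } catch (UnknownHostException e) {
      wildcardHost = false;
    }
    if (wildcardHost || port == 0) {
      return "Endpoint " + host + ":" + port + " looks unconfigured: the "
          + "address/port of the remote service was never set; check your "
          + "configuration.";
    }
    return null;
  }
}
{code}
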
> NetUtils.wrapException to have special handling for 0.0.0.0 addresses and :0 
> ports
> --
>
> Key: HADOOP-9657
> URL: https://issues.apache.org/jira/browse/HADOOP-9657
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: net
>Affects Versions: 2.7.0
>Reporter: Steve Loughran
>Assignee: Varun Saxena
>Priority: Minor
> Attachments: HADOOP-9657.01.patch
>
>
> When an exception is wrapped, it may look like {{0.0.0.0:0 failed on 
> connection exception: java.net.ConnectException: Connection refused; For more 
> details see:  http://wiki.apache.org/hadoop/ConnectionRefused}}
> We should recognise all-zero IP addresses and 0 ports and flag them as "your 
> configuration of the endpoint is wrong", as is clearly the case.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12111) [Umbrella] Split test-patch off into its own TLP

2015-09-18 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14864503#comment-14864503
 ] 

Allen Wittenauer commented on HADOOP-12111:
---

I'm tempted to say move them all, open and closed.

> [Umbrella] Split test-patch off into its own TLP
> 
>
> Key: HADOOP-12111
> URL: https://issues.apache.org/jira/browse/HADOOP-12111
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Allen Wittenauer
>
> Given test-patch's tendency to get forked into a variety of different 
> projects, it makes a lot of sense to make an Apache TLP so that everyone can 
> benefit from a common code base.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12360) Create StatsD metrics2 sink

2015-09-18 Thread Dave Marion (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dave Marion updated HADOOP-12360:
-
Attachment: HADOOP-12360.009.patch

Fix findbugs warnings

> Create StatsD metrics2 sink
> ---
>
> Key: HADOOP-12360
> URL: https://issues.apache.org/jira/browse/HADOOP-12360
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: metrics
>Affects Versions: 2.7.1
>Reporter: Dave Marion
>Assignee: Dave Marion
>Priority: Minor
> Attachments: HADOOP-12360.001.patch, HADOOP-12360.002.patch, 
> HADOOP-12360.003.patch, HADOOP-12360.004.patch, HADOOP-12360.005.patch, 
> HADOOP-12360.006.patch, HADOOP-12360.007.patch, HADOOP-12360.008.patch, 
> HADOOP-12360.009.patch
>
>
> Create a metrics sink that pushes to a StatsD daemon.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12360) Create StatsD metrics2 sink

2015-09-18 Thread Dave Marion (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dave Marion updated HADOOP-12360:
-
Status: Patch Available  (was: Open)

> Create StatsD metrics2 sink
> ---
>
> Key: HADOOP-12360
> URL: https://issues.apache.org/jira/browse/HADOOP-12360
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: metrics
>Affects Versions: 2.7.1
>Reporter: Dave Marion
>Assignee: Dave Marion
>Priority: Minor
> Attachments: HADOOP-12360.001.patch, HADOOP-12360.002.patch, 
> HADOOP-12360.003.patch, HADOOP-12360.004.patch, HADOOP-12360.005.patch, 
> HADOOP-12360.006.patch, HADOOP-12360.007.patch, HADOOP-12360.008.patch, 
> HADOOP-12360.009.patch
>
>
> Create a metrics sink that pushes to a StatsD daemon.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12360) Create StatsD metrics2 sink

2015-09-18 Thread Dave Marion (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dave Marion updated HADOOP-12360:
-
Status: Open  (was: Patch Available)

fix findbugs warnings

> Create StatsD metrics2 sink
> ---
>
> Key: HADOOP-12360
> URL: https://issues.apache.org/jira/browse/HADOOP-12360
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: metrics
>Affects Versions: 2.7.1
>Reporter: Dave Marion
>Assignee: Dave Marion
>Priority: Minor
> Attachments: HADOOP-12360.001.patch, HADOOP-12360.002.patch, 
> HADOOP-12360.003.patch, HADOOP-12360.004.patch, HADOOP-12360.005.patch, 
> HADOOP-12360.006.patch, HADOOP-12360.007.patch, HADOOP-12360.008.patch
>
>
> Create a metrics sink that pushes to a StatsD daemon.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12423) ShutdownHookManager throws exception if JVM is already being shut down

2015-09-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14847344#comment-14847344
 ] 

Hadoop QA commented on HADOOP-12423:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  17m 11s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 46s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m  7s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 24s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m  5s | The applied patch generated  1 
new checkstyle issues (total was 7, now 7). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 29s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   1m 58s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:red}-1{color} | common tests |  22m 45s | Tests failed in 
hadoop-common. |
| | |  63m 22s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.ipc.TestRPC |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12758382/HADOOP-12423.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 92c1af1 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HADOOP-Build/7674/artifact/patchprocess/diffcheckstylehadoop-common.txt
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7674/artifact/patchprocess/testrun_hadoop-common.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7674/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf901.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7674/console |


This message was automatically generated.

> ShutdownHookManager throws exception if JVM is already being shut down
> --
>
> Key: HADOOP-12423
> URL: https://issues.apache.org/jira/browse/HADOOP-12423
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Abhishek Agarwal
>Assignee: Abhishek Agarwal
>Priority: Minor
> Attachments: HADOOP-12423.patch
>
>
> If JVM is under shutdown, static method in ShutdownHookManager will throw an 
> IllegalStateException. This exception should be caught and ignored while 
> registering the hooks. 
> Stack trace: 
> {noformat}
> java.lang.NoClassDefFoundError: Could not initialize class 
> org.apache.hadoop.util.ShutdownHookManager
>at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2639) 
> ~[stormjar.jar:1.4.0-SNAPSHOT]
>at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2612) 
> ~[stormjar.jar:1.4.0-SNAPSHOT]
>at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370) 
> ~[stormjar.jar:1.4.0-SNAPSHOT]
>...
>...
>at 
> backtype.storm.daemon.executor$fn__6647$fn__6659.invoke(executor.clj:692) 
> ~[storm-core-0.9.5.jar:0.9.5]
>at backtype.storm.util$async_loop$fn__459.invoke(util.clj:461) 
> ~[storm-core-0.9.5.jar:0.9.5]
>at clojure.lang.AFn.run(AFn.java:24) [clojure-1.5.1.jar:na]
>at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60]
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12423) ShutdownHookManager throws exception if JVM is already being shut down

2015-09-18 Thread Abhishek Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Agarwal updated HADOOP-12423:
--
Description: 
If JVM is under shutdown, static method in ShutdownHookManager will throw an 
IllegalStateException. This exception should be caught and ignored while 
registering the hooks. 

Stack trace: 
{noformat}
java.lang.NoClassDefFoundError: Could not initialize class 
org.apache.hadoop.util.ShutdownHookManager
   at 
org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2639) 
~[stormjar.jar:1.4.0-SNAPSHOT]
   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2612) 
~[stormjar.jar:1.4.0-SNAPSHOT]
   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370) 
~[stormjar.jar:1.4.0-SNAPSHOT]
   ...
   ...
   at 
backtype.storm.daemon.executor$fn__6647$fn__6659.invoke(executor.clj:692) 
~[storm-core-0.9.5.jar:0.9.5]
   at backtype.storm.util$async_loop$fn__459.invoke(util.clj:461) 
~[storm-core-0.9.5.jar:0.9.5]
   at clojure.lang.AFn.run(AFn.java:24) [clojure-1.5.1.jar:na]
   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60]
{noformat}


  was:
If JVM is under shutdown, static method in ShutdownHookManager will throw an 
IllegalStateException. This exception should be caught and ignored while 
registering the hooks. 



> ShutdownHookManager throws exception if JVM is already being shut down
> --
>
> Key: HADOOP-12423
> URL: https://issues.apache.org/jira/browse/HADOOP-12423
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Abhishek Agarwal
>Assignee: Abhishek Agarwal
>Priority: Minor
> Attachments: HADOOP-12423.patch
>
>
> If JVM is under shutdown, static method in ShutdownHookManager will throw an 
> IllegalStateException. This exception should be caught and ignored while 
> registering the hooks. 
> Stack trace: 
> {noformat}
> java.lang.NoClassDefFoundError: Could not initialize class 
> org.apache.hadoop.util.ShutdownHookManager
>at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2639) 
> ~[stormjar.jar:1.4.0-SNAPSHOT]
>at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2612) 
> ~[stormjar.jar:1.4.0-SNAPSHOT]
>at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370) 
> ~[stormjar.jar:1.4.0-SNAPSHOT]
>...
>...
>at 
> backtype.storm.daemon.executor$fn__6647$fn__6659.invoke(executor.clj:692) 
> ~[storm-core-0.9.5.jar:0.9.5]
>at backtype.storm.util$async_loop$fn__459.invoke(util.clj:461) 
> ~[storm-core-0.9.5.jar:0.9.5]
>at clojure.lang.AFn.run(AFn.java:24) [clojure-1.5.1.jar:na]
>at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60]
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11918) Listing an empty s3a root directory throws FileNotFound.

2015-09-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14847309#comment-14847309
 ] 

Hadoop QA commented on HADOOP-11918:


\\
\\
| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  17m 39s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 44s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m  3s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 25s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m 27s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 29s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 37s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | common tests |  22m 57s | Tests passed in 
hadoop-common. |
| {color:green}+1{color} | tools/hadoop tests |   0m 14s | Tests passed in 
hadoop-aws. |
| | |  65m 11s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12757477/HADOOP-11918-003.patch 
|
| Optional Tests | javac unit findbugs checkstyle javadoc |
| git revision | trunk / a7201d6 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7673/artifact/patchprocess/testrun_hadoop-common.txt
 |
| hadoop-aws test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7673/artifact/patchprocess/testrun_hadoop-aws.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7673/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7673/console |


This message was automatically generated.

> Listing an empty s3a root directory throws FileNotFound.
> 
>
> Key: HADOOP-11918
> URL: https://issues.apache.org/jira/browse/HADOOP-11918
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 2.7.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Minor
>  Labels: BB2015-05-TBR, s3
> Attachments: HADOOP-11918-002.patch, HADOOP-11918-003.patch, 
> HADOOP-11918.000.patch, HADOOP-11918.001.patch
>
>
> With an empty S3 bucket, run
> {code}
> $ hadoop fs -D... -ls s3a://hdfs-s3a-test/
> 15/05/04 15:21:34 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> ls: `s3a://hdfs-s3a-test/': No such file or directory
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12423) ShutdownHookManager throws exception if JVM is already being shut down

2015-09-18 Thread Laxman (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14847286#comment-14847286
 ] 

Laxman commented on HADOOP-12423:
-

[~abhishek.agarwal], sorry, I misread it. After going through the patch, I 
understood the issue you are referring to.
In the scenario you mentioned, throwing an exception in a static block may 
lead to a class-initialization exception, and the resulting errors may be 
misleading.

+1 patch looks good to me (non-binding).

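A minimal sketch of the behavior the patch argues for: catch the 
{{IllegalStateException}} the JVM throws when a hook is registered during 
shutdown, instead of letting it escape from a static initializer. The helper 
below is illustrative, not {{ShutdownHookManager}}'s actual code:

{code}
public class SafeShutdownHooks {
  /** @return true if the hook was registered, false if shutdown had begun. */
  public static boolean tryAddShutdownHook(Runnable hook) {
    try {
      Runtime.getRuntime().addShutdownHook(new Thread(hook, "shutdown-hook"));
      return true;
    } catch (IllegalStateException e) {
      // "Shutdown in progress": nothing to clean up, the JVM is exiting
      return false;
    }
  }
}
{code}
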
> ShutdownHookManager throws exception if JVM is already being shut down
> --
>
> Key: HADOOP-12423
> URL: https://issues.apache.org/jira/browse/HADOOP-12423
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Abhishek Agarwal
>Assignee: Abhishek Agarwal
>Priority: Minor
> Attachments: HADOOP-12423.patch
>
>
> If JVM is under shutdown, static method in ShutdownHookManager will throw an 
> IllegalStateException. This exception should be caught and ignored while 
> registering the hooks. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12423) ShutdownHookManager throws exception if JVM is already being shut down

2015-09-18 Thread Laxman (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14847270#comment-14847270
 ] 

Laxman commented on HADOOP-12423:
-

[~abhishek.agarwal], that is expected behavior. A client should not be able to 
register a hook when shutdown is already in progress, and the client should be 
handling this exception. We should not be suppressing the exception in 
ShutdownHookManager.

> ShutdownHookManager throws exception if JVM is already being shut down
> --
>
> Key: HADOOP-12423
> URL: https://issues.apache.org/jira/browse/HADOOP-12423
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Abhishek Agarwal
>Assignee: Abhishek Agarwal
>Priority: Minor
> Attachments: HADOOP-12423.patch
>
>
> If JVM is under shutdown, static method in ShutdownHookManager will throw an 
> IllegalStateException. This exception should be caught and ignored while 
> registering the hooks. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12423) ShutdownHookManager throws exception if JVM is already being shut down

2015-09-18 Thread Abhishek Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Agarwal updated HADOOP-12423:
--
Status: Patch Available  (was: Open)

> ShutdownHookManager throws exception if JVM is already being shut down
> --
>
> Key: HADOOP-12423
> URL: https://issues.apache.org/jira/browse/HADOOP-12423
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Abhishek Agarwal
>Assignee: Abhishek Agarwal
>Priority: Minor
> Attachments: HADOOP-12423.patch
>
>
> If JVM is under shutdown, static method in ShutdownHookManager will throw an 
> IllegalStateException. This exception should be caught and ignored while 
> registering the hooks. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12423) ShutdownHookManager throws exception if JVM is already being shut down

2015-09-18 Thread Abhishek Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Agarwal updated HADOOP-12423:
--
Attachment: HADOOP-12423.patch

> ShutdownHookManager throws exception if JVM is already being shut down
> --
>
> Key: HADOOP-12423
> URL: https://issues.apache.org/jira/browse/HADOOP-12423
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Abhishek Agarwal
>Assignee: Abhishek Agarwal
>Priority: Minor
> Attachments: HADOOP-12423.patch
>
>
> If JVM is under shutdown, static method in ShutdownHookManager will throw an 
> IllegalStateException. This exception should be caught and ignored while 
> registering the hooks. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12423) ShutdownHookManager throws exception if JVM is already being shut down

2015-09-18 Thread Abhishek Agarwal (JIRA)
Abhishek Agarwal created HADOOP-12423:
-

 Summary: ShutdownHookManager throws exception if JVM is already 
being shut down
 Key: HADOOP-12423
 URL: https://issues.apache.org/jira/browse/HADOOP-12423
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Reporter: Abhishek Agarwal
Assignee: Abhishek Agarwal
Priority: Minor


If JVM is under shutdown, static method in ShutdownHookManager will throw an 
IllegalStateException. This exception should be caught and ignored while 
registering the hooks. 




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11918) Listing an empty s3a root directory throws FileNotFound.

2015-09-18 Thread Thomas Demoor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Demoor updated HADOOP-11918:
---
Attachment: HADOOP-11918-003.patch

I uploaded patch 003, which is what we are running. Feel free to rework it as 
you see fit.

In this patch [~PieterReuse] has already addressed Steve's request to make the 
test apply to all filesystems by adding it to 
{{AbstractContractRootDirectoryTest}}. The test deletes all files in the root 
dir to ensure it is empty before we test the listing code. Therefore, extra 
care was taken via {{skipIfUnsupported()}}, as in the other tests in this 
class.

> Listing an empty s3a root directory throws FileNotFound.
> 
>
> Key: HADOOP-11918
> URL: https://issues.apache.org/jira/browse/HADOOP-11918
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 2.7.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Minor
>  Labels: BB2015-05-TBR, s3
> Attachments: HADOOP-11918-002.patch, HADOOP-11918-003.patch, 
> HADOOP-11918.000.patch, HADOOP-11918.001.patch
>
>
> With an empty S3 bucket, run
> {code}
> $ hadoop fs -D... -ls s3a://hdfs-s3a-test/
> 15/05/04 15:21:34 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> ls: `s3a://hdfs-s3a-test/': No such file or directory
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)