[jira] [Comment Edited] (HADOOP-12827) WebHdfs socket timeouts should be configurable

2016-03-01 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15175208#comment-15175208
 ] 

Chris Douglas edited comment on HADOOP-12827 at 3/2/16 7:42 AM:


OK. Added a line to {{hdfs-default.xml}} explaining the default unit.


was (Author: chris.douglas):
OK. Added a line to the default explaining the default.

> WebHdfs socket timeouts should be configurable
> --
>
> Key: HADOOP-12827
> URL: https://issues.apache.org/jira/browse/HADOOP-12827
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
> Environment: all
>Reporter: Austin Donnelly
>Assignee: Austin Donnelly
>  Labels: easyfix, newbie
> Attachments: HADOOP-12827.001.patch, HADOOP-12827.002.patch, 
> HADOOP-12827.002.patch, HADOOP-12827.002.patch, HADOOP-12827.003.patch, 
> HADOOP-12827.004.patch
>
>   Original Estimate: 0h
>  Remaining Estimate: 0h
>
> WebHdfs client connections use sockets with fixed timeouts of 60 seconds to 
> connect, and 60 seconds for reads.
> This is a problem because I am trying to use WebHdfs to access an archive 
> storage system which can take minutes to hours to return the requested data 
> over WebHdfs.
> The fix is to add new configuration file options to allow these 60s defaults 
> to be customised in hdfs-site.xml.
> If the new configuration options are not present, the behavior is unchanged 
> from before.
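
As an aside for readers, here is a minimal hdfs-site.xml sketch of the two properties 
discussed later in this thread (dfs.webhdfs.socket.connect-timeout and 
dfs.webhdfs.socket.read-timeout); the values are illustrative, with explicit unit 
suffixes as the review below recommends:

{code:xml}
<!-- Illustrative only: raise the WebHDFS socket timeouts for a slow
     archive store. Omitting the unit suffix falls back to a default
     unit and draws a warning from Configuration.getTimeDuration. -->
<property>
  <name>dfs.webhdfs.socket.connect-timeout</name>
  <value>60s</value>
</property>
<property>
  <name>dfs.webhdfs.socket.read-timeout</name>
  <value>30m</value>
</property>
{code}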



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12827) WebHdfs socket timeouts should be configurable

2016-03-01 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HADOOP-12827:
---
Attachment: HADOOP-12827.004.patch

OK. Added a line to the default explaining the default.

> WebHdfs socket timeouts should be configurable
> --
>
> Key: HADOOP-12827
> URL: https://issues.apache.org/jira/browse/HADOOP-12827
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
> Environment: all
>Reporter: Austin Donnelly
>Assignee: Austin Donnelly
>  Labels: easyfix, newbie
> Attachments: HADOOP-12827.001.patch, HADOOP-12827.002.patch, 
> HADOOP-12827.002.patch, HADOOP-12827.002.patch, HADOOP-12827.003.patch, 
> HADOOP-12827.004.patch
>
>   Original Estimate: 0h
>  Remaining Estimate: 0h
>
> WebHdfs client connections use sockets with fixed timeouts of 60 seconds to 
> connect, and 60 seconds for reads.
> This is a problem because I am trying to use WebHdfs to access an archive 
> storage system which can take minutes to hours to return the requested data 
> over WebHdfs.
> The fix is to add new configuration file options to allow these 60s defaults 
> to be customised in hdfs-site.xml.
> If the new configuration options are not present, the behavior is unchanged 
> from before.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12717) NPE when trying to rename a directory in Windows Azure Storage FileSystem

2016-03-01 Thread madhumita chakraborty (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15175182#comment-15175182
 ] 

madhumita chakraborty commented on HADOOP-12717:


Looks good to me. Would it be possible to add one unit test for this scenario?

> NPE when trying to rename a directory in Windows Azure Storage FileSystem
> -
>
> Key: HADOOP-12717
> URL: https://issues.apache.org/jira/browse/HADOOP-12717
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Robert Yokota
>Assignee: Robert Yokota
> Attachments: HADOOP-12717.001.patch, diff.txt
>
>
> Encountered an NPE when trying to use the HBase utility ExportSnapshot with 
> Azure as the target.  
> It turns out verifyAndConvertToStandardFormat is returning null when 
> determining the hbaseRoot, and this is being added to the atomicRenameDirs 
> set.
> java.lang.NullPointerException
> at 
> org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.isKeyForDirectorySet(AzureNativeFileSystemStore.java:1059)
> at 
> org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.isAtomicRenameKey(AzureNativeFileSystemStore.java:1053)
> at 
> org.apache.hadoop.fs.azure.NativeAzureFileSystem.prepareAtomicFolderRename(NativeAzureFileSystem.java:2098)
> at 
> org.apache.hadoop.fs.azure.NativeAzureFileSystem.rename(NativeAzureFileSystem.java:1996)
> at 
> org.apache.hadoop.hbase.snapshot.ExportSnapshot.run(ExportSnapshot.java:944)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at 
> com.yammer.calmie.snapshot.AbstractSnapshotUtil.exportSnapshot(AbstractSnapshotUtil.java:210)
> at 
> com.yammer.calmie.snapshot.AbstractSnapshotUtil.run(AbstractSnapshotUtil.java:79)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at 
> com.yammer.calmie.snapshot.SnapshotAzureBlobUtil.main(SnapshotAzureBlobUtil.java:85)
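
For context, a minimal sketch of the kind of null guard the report implies (not 
the actual HADOOP-12717 change; the helper below is a hypothetical stand-in for 
the AzureNativeFileSystemStore method of the same name):

{code:java}
import java.util.HashSet;
import java.util.Set;

public class AtomicRenameDirsGuard {
  // Hypothetical stand-in: the real method returns null when a configured
  // directory (e.g. the hbase root) cannot be converted to a standard key.
  static String verifyAndConvertToStandardFormat(String dir) {
    return dir.startsWith("wasb://") ? dir.substring("wasb://".length()) : null;
  }

  public static void main(String[] args) {
    Set<String> atomicRenameDirs = new HashSet<String>();
    for (String dir : new String[] {"wasb://container@account/hbase", "/hbase"}) {
      String key = verifyAndConvertToStandardFormat(dir);
      if (key != null) { // skipping nulls keeps isKeyForDirectorySet NPE-free
        atomicRenameDirs.add(key);
      }
    }
    System.out.println(atomicRenameDirs);
  }
}
{code}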



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12827) WebHdfs socket timeouts should be configurable

2016-03-01 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15175181#comment-15175181
 ] 

Xiaoyu Yao commented on HADOOP-12827:
-

Thanks [~chris.douglas] for updating the patch. 

In hdfs-default.xml, can you rephrase the following into something like "Value 
is recommended to be followed by a unit specifier. If no unit specifier is 
given, the default will be milliseconds."
bq. The user should always provide a unit, as Configuration::getTimeDuration 
complains if it is not provided. Would you mind if the default unit remained 
unspecified?

I would suggest documenting the expected behavior when the time unit is 
unspecified. Otherwise, we should add a variant of 
Configuration::getTimeDuration() that does not take a default unit parameter 
and throws an exception instead of only logging a warning. 
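
For reference, a small sketch of the behavior under discussion, assuming the 
default unit stays milliseconds (the property names are the ones this patch 
adds):

{code:java}
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.conf.Configuration;

public class TimeDurationExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration(false);
    // An explicit unit suffix parses unambiguously.
    conf.set("dfs.webhdfs.socket.connect-timeout", "60s");
    // A bare number is interpreted in the default unit passed below, and
    // getTimeDuration warns about the missing unit suffix.
    conf.set("dfs.webhdfs.socket.read-timeout", "60000");

    long connectMs = conf.getTimeDuration(
        "dfs.webhdfs.socket.connect-timeout", 60000, TimeUnit.MILLISECONDS);
    long readMs = conf.getTimeDuration(
        "dfs.webhdfs.socket.read-timeout", 60000, TimeUnit.MILLISECONDS);
    System.out.println(connectMs + " ms connect, " + readMs + " ms read");
  }
}
{code}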



> WebHdfs socket timeouts should be configurable
> --
>
> Key: HADOOP-12827
> URL: https://issues.apache.org/jira/browse/HADOOP-12827
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
> Environment: all
>Reporter: Austin Donnelly
>Assignee: Austin Donnelly
>  Labels: easyfix, newbie
> Attachments: HADOOP-12827.001.patch, HADOOP-12827.002.patch, 
> HADOOP-12827.002.patch, HADOOP-12827.002.patch, HADOOP-12827.003.patch
>
>   Original Estimate: 0h
>  Remaining Estimate: 0h
>
> WebHdfs client connections use sockets with fixed timeouts of 60 seconds to 
> connect, and 60 seconds for reads.
> This is a problem because I am trying to use WebHdfs to access an archive 
> storage system which can take minutes to hours to return the requested data 
> over WebHdfs.
> The fix is to add new configuration file options to allow these 60s defaults 
> to be customised in hdfs-site.xml.
> If the new configuration options are not present, the behavior is unchanged 
> from before.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12470) In-page TOC of documentation should be automatically generated by doxia macro

2016-03-01 Thread Masatake Iwasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15175136#comment-15175136
 ] 

Masatake Iwasaki commented on HADOOP-12470:
---

I uploaded the built docs for reviewers' convenience:

trunk: https://iwasakims.github.io/trunk/hadoop-project/
patched: https://iwasakims.github.io/HADOOP-12470/hadoop-project/


> In-page TOC of documentation should be automatically generated by doxia macro
> -
>
> Key: HADOOP-12470
> URL: https://issues.apache.org/jira/browse/HADOOP-12470
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.7.1
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
> Attachments: HADOOP-12470.001.patch
>
>
> In-page TOC of each documentation page is maintained by hand now. It should 
> be automatically generated once the doxia macro is supported by 
> doxia-module-markdown.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12827) WebHdfs socket timeouts should be configurable

2016-03-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15175116#comment-15175116
 ] 

Hadoop QA commented on HADOOP-12827:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 19s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
46s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 16s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 22s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
25s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 27s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
27s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
30s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 31s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 14s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 15s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 15s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 21s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 21s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
25s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
24s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 0s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 22s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 10s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 49s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.8.0_72. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 67m 54s 
{color} | {color:green} hadoop-hdfs in the patch passed with JDK v1.8.0_72. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 58s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 71m 4s 
{color} | {color:green} hadoop-hdfs in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
21s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| 

[jira] [Updated] (HADOOP-12470) In-page TOC of documentation should be automatically generated by doxia macro

2016-03-01 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HADOOP-12470:
--
Target Version/s: 2.8.0
  Status: Patch Available  (was: Open)

> In-page TOC of documentation should be automatically generated by doxia macro
> -
>
> Key: HADOOP-12470
> URL: https://issues.apache.org/jira/browse/HADOOP-12470
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.7.1
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
> Attachments: HADOOP-12470.001.patch
>
>
> In-page TOC of each documentation page is maintained by hand now. It should 
> be automatically generated once the doxia macro is supported by 
> doxia-module-markdown.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12470) In-page TOC of documentation should be automatically generated by doxia macro

2016-03-01 Thread Masatake Iwasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15175047#comment-15175047
 ] 

Masatake Iwasaki commented on HADOOP-12470:
---

Since maven-site-plugin 3.5 was released, we can use the toc macro in Markdown. 
We need to bump the version of maven-stylus-skin along with maven-site-plugin, 
otherwise the build fails.
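
For context, this is the doxia macro form that maven-site-plugin 3.5 enables in 
a Markdown page (the depth parameters here are illustrative):

{noformat}
<!-- MACRO{toc|fromDepth=0|toDepth=3} -->
{noformat}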

> In-page TOC of documentation should be automatically generated by doxia macro
> -
>
> Key: HADOOP-12470
> URL: https://issues.apache.org/jira/browse/HADOOP-12470
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.7.1
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
> Attachments: HADOOP-12470.001.patch
>
>
> In-page TOC of each documentation page is maintained by hand now. It should 
> be automatically generated once the doxia macro is supported by 
> doxia-module-markdown.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12470) In-page TOC of documentation should be automatically generated by doxia macro

2016-03-01 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HADOOP-12470:
--
Attachment: HADOOP-12470.001.patch

> In-page TOC of documentation should be automatically generated by doxia macro
> -
>
> Key: HADOOP-12470
> URL: https://issues.apache.org/jira/browse/HADOOP-12470
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.7.1
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
> Attachments: HADOOP-12470.001.patch
>
>
> In-page TOC of each documentation page is maintained by hand now. It should 
> be automatically generated once the doxia macro is supported by 
> doxia-module-markdown.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12827) WebHdfs socket timeouts should be configurable

2016-03-01 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HADOOP-12827:
---
Attachment: HADOOP-12827.003.patch

bq. In hdfs-default.xml, can you rephrase the following into something like 
"Value is recommended to be followed by a unit specifier. If no unit specifier 
is given, the default will be milliseconds."

The user should always provide a unit, as {{Configuration::getTimeDuration}} 
complains if it is not provided. Would you mind if the default unit remained 
unspecified?

bq. In WebHDFS.md, I don't think we should put the 
dfs.webhdfs.socket.connect-timeout and dfs.webhdfs.socket.read-timeout in the 
oauth2 table. Can you add a new section with its own anchor for them?

Updated patch, grouped the timeouts with other WebHDFS properties.

The patch looks good to me. [~xyao], anything else before commit?

> WebHdfs socket timeouts should be configurable
> --
>
> Key: HADOOP-12827
> URL: https://issues.apache.org/jira/browse/HADOOP-12827
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
> Environment: all
>Reporter: Austin Donnelly
>Assignee: Austin Donnelly
>  Labels: easyfix, newbie
> Attachments: HADOOP-12827.001.patch, HADOOP-12827.002.patch, 
> HADOOP-12827.002.patch, HADOOP-12827.002.patch, HADOOP-12827.003.patch
>
>   Original Estimate: 0h
>  Remaining Estimate: 0h
>
> WebHdfs client connections use sockets with fixed timeouts of 60 seconds to 
> connect, and 60 seconds for reads.
> This is a problem because I am trying to use WebHdfs to access an archive 
> storage system which can take minutes to hours to return the requested data 
> over WebHdfs.
> The fix is to add new configuration file options to allow these 60s defaults 
> to be customised in hdfs-site.xml.
> If the new configuration options are not present, the behavior is unchanged 
> from before.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12793) Write a new group mapping service guide

2016-03-01 Thread Masatake Iwasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15174887#comment-15174887
 ] 

Masatake Iwasaki commented on HADOOP-12793:
---

Thanks for the work, [~jojochuang]. The patch looks good. Just some nits.

* HadoopGroupMapping.md should be renamed to GroupsMapping.md.
* s/addtional/additional/
* s/execept/except/


> Write a new group mapping service guide
> ---
>
> Key: HADOOP-12793
> URL: https://issues.apache.org/jira/browse/HADOOP-12793
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.7.2
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>  Labels: ldap, supportability
> Attachments: HADOOP-12791.001.patch, HADOOP-12793.002.patch, 
> HADOOP-12793.003.patch
>
>
> LdapGroupsMapping has lots of configurable properties and is thus fairly 
> complex in nature. _HDFS Permissions Guide_ has a minimal introduction to 
> LdapGroupsMapping, with reference to "More information on configuring the 
> group mapping service is available in the Javadocs."
> However, its Javadoc provides no information about how to configure it. 
> Core-default.xml has descriptions for each property, but still lacks a 
> comprehensive tutorial. Without a tutorial/guide, these configurable 
> properties would be buried under a sea of properties.
> Both Cloudera and HortonWorks have some information regarding LDAP group 
> mapping:
> http://www.cloudera.com/documentation/enterprise/latest/topics/cm_sg_ldap_grp_mappings.html
> http://hortonworks.com/blog/hadoop-groupmapping-ldap-integration/
> But neither covers all configurable features, such as using SSL with LDAP, or 
> POSIX group semantics.
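
As a taste of what such a guide would cover, a minimal core-site.xml sketch 
enabling LDAP-backed group mapping; the server URL is a placeholder, and ldaps 
implies the SSL setup the description mentions:

{code:xml}
<property>
  <name>hadoop.security.group.mapping</name>
  <value>org.apache.hadoop.security.LdapGroupsMapping</value>
</property>
<property>
  <name>hadoop.security.group.mapping.ldap.url</name>
  <value>ldaps://ldap.example.com:636</value>
</property>
{code}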



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12747) support wildcard in libjars argument

2016-03-01 Thread Sangjin Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangjin Lee updated HADOOP-12747:
-
Target Version/s: 2.9.0

No problem. I'll update the patch soon.

> support wildcard in libjars argument
> 
>
> Key: HADOOP-12747
> URL: https://issues.apache.org/jira/browse/HADOOP-12747
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: util
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
> Attachments: HADOOP-12747.01.patch, HADOOP-12747.02.patch, 
> HADOOP-12747.03.patch
>
>
> There is a problem when a user job adds too many dependency jars in their 
> command line. The HADOOP_CLASSPATH part can be addressed, including using 
> wildcards (\*). But the same cannot be done with the -libjars argument. Today 
> it takes only fully specified file paths.
> We may want to consider supporting wildcards as a way to help users in this 
> situation. The idea is to handle it the same way the JVM does it: \* expands 
> to the list of jars in that directory. It does not traverse into any child 
> directory.
> Also, it probably would be a good idea to do it only for libjars (i.e. don't 
> do it for -files and -archives).
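
For illustration, a minimal sketch of the proposed expansion semantics (not the 
patch's actual code): a trailing "*" lists the jars in that directory, without 
recursing into child directories:

{code:java}
import java.io.File;
import java.util.ArrayList;
import java.util.List;

public class LibjarsWildcard {
  /** Expand a trailing "*" the way the JVM classpath does. */
  static List<String> expand(String entry) {
    List<String> out = new ArrayList<String>();
    if (!entry.endsWith("*")) {
      out.add(entry); // fully specified path, kept as-is
      return out;
    }
    File dir = new File(entry.substring(0, entry.length() - 1));
    File[] files = dir.listFiles();
    if (files != null) {
      for (File f : files) {
        // Only jars directly in this directory; no traversal into children.
        if (f.isFile() && f.getName().endsWith(".jar")) {
          out.add(f.getPath());
        }
      }
    }
    return out;
  }

  public static void main(String[] args) {
    System.out.println(expand("lib/*"));
    System.out.println(expand("lib/app.jar"));
  }
}
{code}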



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12717) NPE when trying to rename a directory in Windows Azure Storage FileSystem

2016-03-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15174730#comment-15174730
 ] 

Hadoop QA commented on HADOOP-12717:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
26s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 18s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 22s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
29s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 19s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
39s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 11s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 24s 
{color} | {color:green} hadoop-azure in the patch passed with JDK v1.8.0_72. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 29s 
{color} | {color:green} hadoop-azure in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
19s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 16m 30s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12790815/HADOOP-12717.001.patch
 |
| JIRA Issue | HADOOP-12717 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 6d1d45018570 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (HADOOP-12717) NPE when trying to rename a directory in Windows Azure Storage FileSystem

2016-03-01 Thread Gaurav Kanade (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15174708#comment-15174708
 ] 

Gaurav Kanade commented on HADOOP-12717:


[~madhuch-ms] could you please also review? Pending any concerns you or 
[~dchickabasapa] may find, it has a +1 from me.

> NPE when trying to rename a directory in Windows Azure Storage FileSystem
> -
>
> Key: HADOOP-12717
> URL: https://issues.apache.org/jira/browse/HADOOP-12717
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Robert Yokota
>Assignee: Robert Yokota
> Attachments: HADOOP-12717.001.patch, diff.txt
>
>
> Encountered an NPE when trying to use the HBase utility ExportSnapshot with 
> Azure as the target.  
> It turns out verifyAndConvertToStandardFormat is returning null when 
> determining the hbaseRoot, and this is being added to the atomicRenameDirs 
> set.
> java.lang.NullPointerException
> at 
> org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.isKeyForDirectorySet(AzureNativeFileSystemStore.java:1059)
> at 
> org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.isAtomicRenameKey(AzureNativeFileSystemStore.java:1053)
> at 
> org.apache.hadoop.fs.azure.NativeAzureFileSystem.prepareAtomicFolderRename(NativeAzureFileSystem.java:2098)
> at 
> org.apache.hadoop.fs.azure.NativeAzureFileSystem.rename(NativeAzureFileSystem.java:1996)
> at 
> org.apache.hadoop.hbase.snapshot.ExportSnapshot.run(ExportSnapshot.java:944)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at 
> com.yammer.calmie.snapshot.AbstractSnapshotUtil.exportSnapshot(AbstractSnapshotUtil.java:210)
> at 
> com.yammer.calmie.snapshot.AbstractSnapshotUtil.run(AbstractSnapshotUtil.java:79)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at 
> com.yammer.calmie.snapshot.SnapshotAzureBlobUtil.main(SnapshotAzureBlobUtil.java:85)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12717) NPE when trying to rename a directory in Windows Azure Storage FileSystem

2016-03-01 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-12717:
---
Status: Patch Available  (was: Open)

> NPE when trying to rename a directory in Windows Azure Storage FileSystem
> -
>
> Key: HADOOP-12717
> URL: https://issues.apache.org/jira/browse/HADOOP-12717
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Robert Yokota
>Assignee: Robert Yokota
> Attachments: HADOOP-12717.001.patch, diff.txt
>
>
> Encountered an NPE when trying to use the HBase utility ExportSnapshot with 
> Azure as the target.  
> It turns out verifyAndConvertToStandardFormat is returning null when 
> determining the hbaseRoot, and this is being added to the atomicRenameDirs 
> set.
> java.lang.NullPointerException
> at 
> org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.isKeyForDirectorySet(AzureNativeFileSystemStore.java:1059)
> at 
> org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.isAtomicRenameKey(AzureNativeFileSystemStore.java:1053)
> at 
> org.apache.hadoop.fs.azure.NativeAzureFileSystem.prepareAtomicFolderRename(NativeAzureFileSystem.java:2098)
> at 
> org.apache.hadoop.fs.azure.NativeAzureFileSystem.rename(NativeAzureFileSystem.java:1996)
> at 
> org.apache.hadoop.hbase.snapshot.ExportSnapshot.run(ExportSnapshot.java:944)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at 
> com.yammer.calmie.snapshot.AbstractSnapshotUtil.exportSnapshot(AbstractSnapshotUtil.java:210)
> at 
> com.yammer.calmie.snapshot.AbstractSnapshotUtil.run(AbstractSnapshotUtil.java:79)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at 
> com.yammer.calmie.snapshot.SnapshotAzureBlobUtil.main(SnapshotAzureBlobUtil.java:85)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12717) NPE when trying to rename a directory in Windows Azure Storage FileSystem

2016-03-01 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-12717:
---
Assignee: Robert Yokota

> NPE when trying to rename a directory in Windows Azure Storage FileSystem
> -
>
> Key: HADOOP-12717
> URL: https://issues.apache.org/jira/browse/HADOOP-12717
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Robert Yokota
>Assignee: Robert Yokota
> Attachments: HADOOP-12717.001.patch, diff.txt
>
>
> Encountered an NPE when trying to use the HBase utility ExportSnapshot with 
> Azure as the target.  
> It turns out verifyAndConvertToStandardFormat is returning null when 
> determining the hbaseRoot, and this is being added to the atomicRenameDirs 
> set.
> java.lang.NullPointerException
> at 
> org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.isKeyForDirectorySet(AzureNativeFileSystemStore.java:1059)
> at 
> org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.isAtomicRenameKey(AzureNativeFileSystemStore.java:1053)
> at 
> org.apache.hadoop.fs.azure.NativeAzureFileSystem.prepareAtomicFolderRename(NativeAzureFileSystem.java:2098)
> at 
> org.apache.hadoop.fs.azure.NativeAzureFileSystem.rename(NativeAzureFileSystem.java:1996)
> at 
> org.apache.hadoop.hbase.snapshot.ExportSnapshot.run(ExportSnapshot.java:944)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at 
> com.yammer.calmie.snapshot.AbstractSnapshotUtil.exportSnapshot(AbstractSnapshotUtil.java:210)
> at 
> com.yammer.calmie.snapshot.AbstractSnapshotUtil.run(AbstractSnapshotUtil.java:79)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at 
> com.yammer.calmie.snapshot.SnapshotAzureBlobUtil.main(SnapshotAzureBlobUtil.java:85)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12717) NPE when trying to rename a directory in Windows Azure Storage FileSystem

2016-03-01 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-12717:
---
Attachment: HADOOP-12717.001.patch

I'm attaching the exact same patch file, renamed to conform to the pre-commit 
test job's expectations.

> NPE when trying to rename a directory in Windows Azure Storage FileSystem
> -
>
> Key: HADOOP-12717
> URL: https://issues.apache.org/jira/browse/HADOOP-12717
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Robert Yokota
> Attachments: HADOOP-12717.001.patch, diff.txt
>
>
> Encountered an NPE when trying to use the HBase utility ExportSnapshot with 
> Azure as the target.  
> It turns out verifyAndConvertToStandardFormat is returning null when 
> determining the hbaseRoot, and this is being added to the atomicRenameDirs 
> set.
> java.lang.NullPointerException
> at 
> org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.isKeyForDirectorySet(AzureNativeFileSystemStore.java:1059)
> at 
> org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.isAtomicRenameKey(AzureNativeFileSystemStore.java:1053)
> at 
> org.apache.hadoop.fs.azure.NativeAzureFileSystem.prepareAtomicFolderRename(NativeAzureFileSystem.java:2098)
> at 
> org.apache.hadoop.fs.azure.NativeAzureFileSystem.rename(NativeAzureFileSystem.java:1996)
> at 
> org.apache.hadoop.hbase.snapshot.ExportSnapshot.run(ExportSnapshot.java:944)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at 
> com.yammer.calmie.snapshot.AbstractSnapshotUtil.exportSnapshot(AbstractSnapshotUtil.java:210)
> at 
> com.yammer.calmie.snapshot.AbstractSnapshotUtil.run(AbstractSnapshotUtil.java:79)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at 
> com.yammer.calmie.snapshot.SnapshotAzureBlobUtil.main(SnapshotAzureBlobUtil.java:85)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12859) Disable hiding field style checks in class setters

2016-03-01 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15174684#comment-15174684
 ] 

Kai Zheng commented on HADOOP-12859:


Thanks Andrew for the review. Sure I will do a manual test locally.

> Disable hiding field style checks in class setters
> ---
>
> Key: HADOOP-12859
> URL: https://issues.apache.org/jira/browse/HADOOP-12859
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-12859-v1.patch
>
>
> As discussed on the mailing list, this will disable style checks in class 
> setters like the following:
> {noformat}
> void setBlockLocations(LocatedBlocks blockLocations) {:42: 'blockLocations' 
> hides a field.
> void setTimeout(int timeout) {:25: 'timeout' hides a field.
> void setLocatedBlocks(List<LocatedBlock> locatedBlocks) {:46: 'locatedBlocks' 
> hides a field.
> void setRemaining(long remaining) {:28: 'remaining' hides a field.
> void setBytesPerCRC(int bytesPerCRC) {:29: 'bytesPerCRC' hides a field.
> void setCrcType(DataChecksum.Type crcType) {:39: 'crcType' hides a field.
> void setCrcPerBlock(long crcPerBlock) {:30: 'crcPerBlock' hides a field.
> void setRefetchBlocks(boolean refetchBlocks) {:35: 'refetchBlocks' hides a 
> field.
> void setLastRetriedIndex(int lastRetriedIndex) {:34: 'lastRetriedIndex' hides 
> a field.
> {noformat}
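
One way to express this in a checkstyle configuration, sketched here on the 
assumption that the patch uses the HiddenField check's built-in switches:

{code:xml}
<module name="HiddenField">
  <!-- Do not flag setter parameters that shadow the field they set. -->
  <property name="ignoreSetter" value="true"/>
  <!-- Constructor parameters commonly shadow fields too. -->
  <property name="ignoreConstructorParameter" value="true"/>
</module>
{code}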



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12717) NPE when trying to rename a directory in Windows Azure Storage FileSystem

2016-03-01 Thread Gaurav Kanade (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15174682#comment-15174682
 ] 

Gaurav Kanade commented on HADOOP-12717:


[~rayokota] [~cnauroth] It seems other customers are hitting this issue too, 
and it is pretty annoying; this needs to be fixed. At first glance the patch 
seems fine to me.

However, I am not sure whether it may break some other dependencies in the 
WASB driver, so I am adding [~dchickabasapa] to take a look and review.

> NPE when trying to rename a directory in Windows Azure Storage FileSystem
> -
>
> Key: HADOOP-12717
> URL: https://issues.apache.org/jira/browse/HADOOP-12717
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Robert Yokota
> Attachments: diff.txt
>
>
> Encountered an NPE when trying to use the HBase utility ExportSnapshot with 
> Azure as the target.  
> It turns out verifyAndConvertToStandardFormat is returning null when 
> determining the hbaseRoot, and this is being added to the atomicRenameDirs 
> set.
> java.lang.NullPointerException
> at 
> org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.isKeyForDirectorySet(AzureNativeFileSystemStore.java:1059)
> at 
> org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.isAtomicRenameKey(AzureNativeFileSystemStore.java:1053)
> at 
> org.apache.hadoop.fs.azure.NativeAzureFileSystem.prepareAtomicFolderRename(NativeAzureFileSystem.java:2098)
> at 
> org.apache.hadoop.fs.azure.NativeAzureFileSystem.rename(NativeAzureFileSystem.java:1996)
> at 
> org.apache.hadoop.hbase.snapshot.ExportSnapshot.run(ExportSnapshot.java:944)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at 
> com.yammer.calmie.snapshot.AbstractSnapshotUtil.exportSnapshot(AbstractSnapshotUtil.java:210)
> at 
> com.yammer.calmie.snapshot.AbstractSnapshotUtil.run(AbstractSnapshotUtil.java:79)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at 
> com.yammer.calmie.snapshot.SnapshotAzureBlobUtil.main(SnapshotAzureBlobUtil.java:85)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12747) support wildcard in libjars argument

2016-03-01 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15174657#comment-15174657
 ] 

Chris Nauroth commented on HADOOP-12747:


bq. Back to the original point, are you suggesting that we do allow wildcards 
for non-local paths and do similar expansion?

Yes, after noticing that the "local only" restriction applies only to the 
client classpath, I now think it's better to make this consistent and do 
wildcard expansion for non-local paths too.

Do you mind targeting this change to 2.9.0?  The wildcard matching logic in 
{{FileUtil}} has been finicky in the past, so I'm reluctant to change it now 
while 2.8.0 is closing down.

> support wildcard in libjars argument
> 
>
> Key: HADOOP-12747
> URL: https://issues.apache.org/jira/browse/HADOOP-12747
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: util
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
> Attachments: HADOOP-12747.01.patch, HADOOP-12747.02.patch, 
> HADOOP-12747.03.patch
>
>
> There is a problem when a user job adds too many dependency jars in their 
> command line. The HADOOP_CLASSPATH part can be addressed, including using 
> wildcards (\*). But the same cannot be done with the -libjars argument. Today 
> it takes only fully specified file paths.
> We may want to consider supporting wildcards as a way to help users in this 
> situation. The idea is to handle it the same way the JVM does it: \* expands 
> to the list of jars in that directory. It does not traverse into any child 
> directory.
> Also, it probably would be a good idea to do it only for libjars (i.e. don't 
> do it for -files and -archives).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12747) support wildcard in libjars argument

2016-03-01 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15174633#comment-15174633
 ] 

Sangjin Lee commented on HADOOP-12747:
--

{quote}
That's very interesting. I missed the point that non-local jars are skipped 
only for adding to the client's own classpath. JobResourceUploader separately 
parses libjars and does not do the same filtering. Certainly since non-local 
libjars for the task is already supported, we'd have to maintain that behavior 
for reasons of backwards compatibility.

I find the lack of consistency quite confusing. It's unclear to me how much of 
this behavior is by design and how much is accidental. I assume the filtering 
away from the client's classpath was done to avoid the complexity of needing to 
run some kind of "mini-localization" on the client side to support non-local 
files.
{quote}

Yes, that's what I thought as well. The inconsistency may be that 
{{URLClassLoader}} does not support non-local paths by default, and we did not 
want the hassle of supporting them on the client-side classpath.

Back to the original point, are you suggesting that we do allow wildcards for 
non-local paths and do similar expansion? I can update the patch to do that. 
Let me know. Thanks!

> support wildcard in libjars argument
> 
>
> Key: HADOOP-12747
> URL: https://issues.apache.org/jira/browse/HADOOP-12747
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: util
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
> Attachments: HADOOP-12747.01.patch, HADOOP-12747.02.patch, 
> HADOOP-12747.03.patch
>
>
> There is a problem when a user job adds too many dependency jars in their 
> command line. The HADOOP_CLASSPATH part can be addressed, including using 
> wildcards (\*). But the same cannot be done with the -libjars argument. Today 
> it takes only fully specified file paths.
> We may want to consider supporting wildcards as a way to help users in this 
> situation. The idea is to handle it the same way the JVM does it: \* expands 
> to the list of jars in that directory. It does not traverse into any child 
> directory.
> Also, it probably would be a good idea to do it only for libjars (i.e. don't 
> do it for -files and -archives).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12859) Disable hiding field style checks in class setters

2016-03-01 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15174511#comment-15174511
 ] 

Andrew Wang commented on HADOOP-12859:
--

LGTM, though could you do a manual test? I ran before and after, but didn't see 
any difference in the # of "hides a field" errors in the checkstyle output for 
HDFS.

> Disable hiding field style checks in class setters
> ---
>
> Key: HADOOP-12859
> URL: https://issues.apache.org/jira/browse/HADOOP-12859
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-12859-v1.patch
>
>
> As discussed in mailing list, this will disable style checks in class setters 
> like the following:
> {noformat}
> void setBlockLocations(LocatedBlocks blockLocations) {:42: 'blockLocations' 
> hides a field.
> void setTimeout(int timeout) {:25: 'timeout' hides a field.
> void setLocatedBlocks(List<LocatedBlock> locatedBlocks) {:46: 'locatedBlocks' 
> hides a field.
> void setRemaining(long remaining) {:28: 'remaining' hides a field.
> void setBytesPerCRC(int bytesPerCRC) {:29: 'bytesPerCRC' hides a field.
> void setCrcType(DataChecksum.Type crcType) {:39: 'crcType' hides a field.
> void setCrcPerBlock(long crcPerBlock) {:30: 'crcPerBlock' hides a field.
> void setRefetchBlocks(boolean refetchBlocks) {:35: 'refetchBlocks' hides a 
> field.
> void setLastRetriedIndex(int lastRetriedIndex) {:34: 'lastRetriedIndex' hides 
> a field.
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12853) Change WASB documentation regarding page blob support

2016-03-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15174480#comment-15174480
 ] 

Hudson commented on HADOOP-12853:
-

FAILURE: Integrated in Hadoop-trunk-Commit #9405 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9405/])
HADOOP-12853. Change WASB documentation regarding page blob support. (cnauroth: 
rev f98dff329b1f94c9f53022baf0209fc1a7aaf7c2)
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-tools/hadoop-azure/src/site/markdown/index.md


> Change WASB documentation regarding page blob support
> -
>
> Key: HADOOP-12853
> URL: https://issues.apache.org/jira/browse/HADOOP-12853
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Reporter: madhumita chakraborty
>Assignee: madhumita chakraborty
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-12853.001.patch
>
>
> The WASB page blob support documentation says, in the Features list at the top:
> Supports both page blobs (suitable for most use cases, such as MapReduce) and 
> block blobs (suitable for continuous write use cases, such as an HBase 
> write-ahead log).
> This is actually the opposite. Block blobs are better for typical big data 
> use cases, and page blobs were implemented to support the HBase WAL.
> It is also mentioned that
> "Page blobs can be used for other purposes beyond just HBase log files 
> though."
> This is not strong enough, because page blobs have been tested only in the 
> context of HBase write-ahead logs. They have not been broadly certified for 
> use across the Hadoop ecosystem, and hence are not recommended as a 
> general-purpose file system for a Hadoop cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12858) Reduce UGI getGroups overhead

2016-03-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15174471#comment-15174471
 ] 

Hadoop QA commented on HADOOP-12858:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
40s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 44s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 35s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
25s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 5s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
34s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
40s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 46s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 46s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 40s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 40s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
25s {color} | {color:green} hadoop-common-project/hadoop-common: patch 
generated 0 new + 334 unchanged - 4 fixed = 334 total (was 338) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 25s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_72. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 34s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 59m 24s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12790782/HADOOP-12858.patch |
| JIRA Issue | HADOOP-12858 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux a9d667b04ec3 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 

[jira] [Updated] (HADOOP-12853) Change WASB documentation regarding page blob support

2016-03-01 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-12853:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

+1 for the patch.  I have committed this to trunk, branch-2 and branch-2.8.  
[~madhuch-ms], thank you for improving the documentation.

> Change WASB documentation regarding page blob support
> -
>
> Key: HADOOP-12853
> URL: https://issues.apache.org/jira/browse/HADOOP-12853
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Reporter: madhumita chakraborty
>Assignee: madhumita chakraborty
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-12853.001.patch
>
>
> The WASB page blob support documentation says, in the Features list at the top:
> Supports both page blobs (suitable for most use cases, such as MapReduce) and 
> block blobs (suitable for continuous write use cases, such as an HBase 
> write-ahead log).
> This is actually the opposite. Block blobs are better for typical big data 
> use cases, and page blobs were implemented to support the HBase WAL.
> It is also mentioned that
> "Page blobs can be used for other purposes beyond just HBase log files 
> though."
> This is not strong enough, because page blobs have been tested only in the 
> context of HBase write-ahead logs. They have not been broadly certified for 
> use across the Hadoop ecosystem, and hence are not recommended as a 
> general-purpose file system for a Hadoop cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-7733) Mapreduce jobs are failing when JT has hadoop.security.token.service.use_ip=false and client has hadoop.security.token.service.use_ip=true

2016-03-01 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15174409#comment-15174409
 ] 

Chris Nauroth commented on HADOOP-7733:
---

However, MAPREDUCE-6565 reports a similar problem that is still relevant to the 
current codebase for deployments where job submissions distribute a specific 
version of the MR framework.

> Mapreduce jobs are failing when JT has 
> hadoop.security.token.service.use_ip=false and client has 
> hadoop.security.token.service.use_ip=true
> --
>
> Key: HADOOP-7733
> URL: https://issues.apache.org/jira/browse/HADOOP-7733
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 0.20.205.0
>Reporter: Rajit Saha
>Assignee: Daryn Sharp
>
> I have added the following property in core-site.xml on all the nodes in the 
> cluster and restarted:
> <property>
>   <name>hadoop.security.token.service.use_ip</name>
>   <value>false</value>
>   <description>desc</description>
> </property>
> Then I ran randomwriter and distcp jobs; they are all failing:
> $HADOOP_HOME/bin/hadoop --config $HADOOP_CONFIG_DIR jar 
> $HADOOP_HOME/hadoop-examples.jar randomwriter 
> -Dtest.randomwrite.bytes_per_map=256000 input_1318325953
> Running 140 maps.
> Job started: Tue Oct 11 09:48:09 UTC 2011
> 11/10/11 09:48:09 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 14 
> for  on :8020
> 11/10/11 09:48:09 INFO security.TokenCache: Got dt for
> hdfs:// Hostname>/user//.staging/job_201110110946_0001;uri= IP>:8020;t.service=:8020
> 11/10/11 09:48:09 INFO mapred.JobClient: Cleaning up the staging area
> hdfs:///user//.staging/job_201110110946_0001
> org.apache.hadoop.ipc.RemoteException: java.io.IOException: 
> java.io.IOException: Call to
> /:8020 failed on local exception: 
> java.io.IOException:
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided
> (Mechanism level: Failed to find any Kerberos tgt)]
> at org.apache.hadoop.mapred.JobTracker.submitJob(JobTracker.java:3943)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
> Caused by: java.io.IOException: Call to / IP>:8020 failed on local exception:
> java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed 
> [Caused by GSSException: No valid
> credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
> at org.apache.hadoop.ipc.Client.wrapException(Client.java:1103)
> at org.apache.hadoop.ipc.Client.call(Client.java:1071)
> at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
> at $Proxy7.getProtocolVersion(Unknown Source)
> at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
> at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379)
> at 
> org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:118)
> at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:222)
> at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:187)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
> at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1328)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:65)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1346)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:244)
> at org.apache.hadoop.fs.Path.getFileSystem(Path.java:187)
> at 
> org.apache.hadoop.mapred.JobInProgress$2.run(JobInProgress.java:401)
> at 
> org.apache.hadoop.mapred.JobInProgress$2.run(JobInProgress.java:399)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059)
> at 
> org.apache.hadoop.mapred.JobInProgress.<init>(JobInProgress.java:399)
> at 

[jira] [Commented] (HADOOP-12858) Reduce UGI getGroups overhead

2016-03-01 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15174398#comment-15174398
 ] 

Mingliang Liu commented on HADOOP-12858:


Thanks for the patch, [~daryn]. The fix looks good to me (non-binding). 
Specifically, I double-checked that it does not break compatibility.

One comment: in modules other than {{hadoop-common}}, we may need to 
update {{getGroups}} usage accordingly, e.g. in the HDFS 
{{DataNode#checkSuperuserPrivilege()}} method. Is this tracked in your other 
"performance patches"?

> Reduce UGI getGroups overhead
> -
>
> Key: HADOOP-12858
> URL: https://issues.apache.org/jira/browse/HADOOP-12858
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: performance
>Affects Versions: 2.0.0-alpha
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HADOOP-12858.patch, HADOOP-12858.patch
>
>
> Group lookup generates excessive garbage with multiple conversions between 
> collections and arrays.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10940) RPC client does no bounds checking of responses

2016-03-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15174397#comment-15174397
 ] 

Hadoop QA commented on HADOOP-10940:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
40s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 59s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 46s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
23s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 5s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
34s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
40s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 48s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 44s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 44s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
23s {color} | {color:green} hadoop-common-project/hadoop-common: patch 
generated 0 new + 268 unchanged - 4 fixed = 268 total (was 272) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 0s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 29s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_72. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 8s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 58m 24s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_72 Failed junit tests | hadoop.net.TestDNS |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12778043/HADOOP-10940.patch |
| JIRA Issue | HADOOP-10940 |
| Optional Tests |  asflicense  xml  compile  javac  javadoc  mvninstall  
mvnsite  unit  findbugs  checkstyle  |
| uname | Linux 305873995c18 

[jira] [Resolved] (HADOOP-7733) Mapreduce jobs are failing when JT has hadoop.security.token.service.use_ip=false and client has hadoop.security.token.service.use_ip=true

2016-03-01 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp resolved HADOOP-7733.
-
Resolution: Won't Fix

The configuration mismatch isn't something that can be worked around.  The 
submitter and tasks used conflicting confs which prevented the token selector 
from finding the token. (note: this was filed by y! many years ago and it's no 
longer an issue here)

> Mapreduce jobs are failing when JT has 
> hadoop.security.token.service.use_ip=false and client has 
> hadoop.security.token.service.use_ip=true
> --
>
> Key: HADOOP-7733
> URL: https://issues.apache.org/jira/browse/HADOOP-7733
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 0.20.205.0
>Reporter: Rajit Saha
>Assignee: Daryn Sharp
>
> I have added the following property in core-site.xml on all the nodes in the 
> cluster and restarted:
> <property>
>   <name>hadoop.security.token.service.use_ip</name>
>   <value>false</value>
>   <description>desc</description>
> </property>
> 
> Then I ran randomwriter and distcp jobs; they are all failing:
> $HADOOP_HOME/bin/hadoop --config $HADOOP_CONFIG_DIR jar 
> $HADOOP_HOME/hadoop-examples.jar randomwriter 
> -Dtest.randomwrite.bytes_per_map=256000 input_1318325953
> Running 140 maps.
> Job started: Tue Oct 11 09:48:09 UTC 2011
> 11/10/11 09:48:09 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 14 
> for  on :8020
> 11/10/11 09:48:09 INFO security.TokenCache: Got dt for
> hdfs:// Hostname>/user//.staging/job_201110110946_0001;uri= IP>:8020;t.service=:8020
> 11/10/11 09:48:09 INFO mapred.JobClient: Cleaning up the staging area
> hdfs:///user//.staging/job_201110110946_0001
> org.apache.hadoop.ipc.RemoteException: java.io.IOException: 
> java.io.IOException: Call to
> /:8020 failed on local exception: 
> java.io.IOException:
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided
> (Mechanism level: Failed to find any Kerberos tgt)]
> at org.apache.hadoop.mapred.JobTracker.submitJob(JobTracker.java:3943)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
> Caused by: java.io.IOException: Call to / IP>:8020 failed on local exception:
> java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed 
> [Caused by GSSException: No valid
> credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
> at org.apache.hadoop.ipc.Client.wrapException(Client.java:1103)
> at org.apache.hadoop.ipc.Client.call(Client.java:1071)
> at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
> at $Proxy7.getProtocolVersion(Unknown Source)
> at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
> at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379)
> at 
> org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:118)
> at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:222)
> at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:187)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
> at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1328)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:65)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1346)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:244)
> at org.apache.hadoop.fs.Path.getFileSystem(Path.java:187)
> at 
> org.apache.hadoop.mapred.JobInProgress$2.run(JobInProgress.java:401)
> at 
> org.apache.hadoop.mapred.JobInProgress$2.run(JobInProgress.java:399)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059)
> at 
> 

[jira] [Updated] (HADOOP-12858) Reduce UGI getGroups overhead

2016-03-01 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HADOOP-12858:
-
Attachment: HADOOP-12858.patch

Removed the 2 imports caught by checkstyle.

> Reduce UGI getGroups overhead
> -
>
> Key: HADOOP-12858
> URL: https://issues.apache.org/jira/browse/HADOOP-12858
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: performance
>Affects Versions: 2.0.0-alpha
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HADOOP-12858.patch, HADOOP-12858.patch
>
>
> Group lookup generates excessive garbage with multiple conversions between 
> collections and arrays.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12747) support wildcard in libjars argument

2016-03-01 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15174355#comment-15174355
 ] 

Chris Nauroth commented on HADOOP-12747:


bq. You mentioned earlier that libjars don't support non-local paths, but 
strictly speaking HADOOP-7112 addresses only the aspect of adding libjars back 
to the client classpath.

That's very interesting.  I missed the point that non-local jars are skipped 
only for adding to the client's own classpath.  {{JobResourceUploader}} 
separately parses libjars and does not do the same filtering.  Certainly, since 
non-local libjars for the task are already supported, we'd have to maintain 
that behavior for backwards compatibility.

I find the lack of consistency quite confusing.  It's unclear to me how much of 
this behavior is by design and how much is accidental.  I assume the filtering 
away from the client's classpath was done to avoid the complexity of needing to 
run some kind of "mini-localization" on the client side to support non-local 
files.

Regarding the proposed options, I have a question on this con for option 2:

bq. con: need to re-interpret or deprecate (minor) behavior, such as adding 
libjar entries to the client classpath and allowing directories as a set of 
classfiles

This sounds backwards-incompatible, right?  If so, then that would tip my 
opinion towards option 1.

Also, if wildcard expansion is delayed, then it seems there could be a risk of 
unexpected behavior if the contents of the directory change after job 
submission but before launch of the container.  Maybe rolling upgrade scenarios 
would get weird.  (Maybe not if the directories themselves are version-stamped 
properly.)

> support wildcard in libjars argument
> 
>
> Key: HADOOP-12747
> URL: https://issues.apache.org/jira/browse/HADOOP-12747
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: util
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
> Attachments: HADOOP-12747.01.patch, HADOOP-12747.02.patch, 
> HADOOP-12747.03.patch
>
>
> There is a problem when a user job adds too many dependency jars in their 
> command line. The HADOOP_CLASSPATH part can be addressed, including using 
> wildcards (\*). But the same cannot be done with the -libjars argument. Today 
> it takes only fully specified file paths.
> We may want to consider supporting wildcards as a way to help users in this 
> situation. The idea is to handle it the same way the JVM does it: \* expands 
> to the list of jars in that directory. It does not traverse into any child 
> directory.
> Also, it probably would be a good idea to do it only for libjars (i.e. don't 
> do it for -files and -archives).
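
As a concrete sketch of that expansion rule (a hypothetical helper, not part 
of the patch): {{dir/*}} yields the jars directly inside {{dir}} and nothing 
from child directories.

{noformat}
import java.io.File;
import java.io.FileFilter;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Hypothetical sketch of JVM-style wildcard handling for libjars.
public class LibJarsWildcard {
  static List<String> expand(String entry) {
    if (!entry.endsWith("/*")) {
      return Collections.singletonList(entry);  // not a wildcard entry
    }
    File dir = new File(entry.substring(0, entry.length() - 2));
    File[] files = dir.listFiles(new FileFilter() {
      @Override
      public boolean accept(File f) {
        // Only plain *.jar files; child directories are not traversed.
        return f.isFile() && f.getName().endsWith(".jar");
      }
    });
    List<String> jars = new ArrayList<String>();
    if (files != null) {
      for (File f : files) {
        jars.add(f.getPath());
      }
    }
    return jars;
  }

  public static void main(String[] args) {
    System.out.println(expand("lib/*"));
  }
}
{noformat}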



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12853) Change WASB documentation regarding page blob support

2016-03-01 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-12853:
---
Description: 
WASB page blob support documentation in Features list at the top:

Supports both page blobs (suitable for most use cases, such as MapReduce) and 
block blobs (suitable for continuous write use cases, such as an HBase 
write-ahead log).
This is actually backwards: block blobs are better for typical big data use 
cases, and page blobs were implemented to support the HBase WAL.

It is also mentioned that
"Page blobs can be used for other purposes beyond just HBase log files though."
This is not strong enough: page blobs have been tested only in the context of 
HBase write-ahead logs. They have not been broadly certified for use across the 
Hadoop ecosystem, and hence are not recommended as a general-purpose file 
system for a Hadoop cluster.

  was:
WASB page blob support documentation in Features list at the top:

Supports both page blobs (suitable for most use cases, such as MapReduce) and 
block blobs (suitable for continuous write use cases, such as an HBase 
write-ahead log).
Which is actually the opposite. Block blobs are better for typical big data use 
cases, and page blobs were implemented to support the HBase WAL.

It is also mentioned that
"Page blobs can be used for other purposes beyond just HBase log files though."
This is not strong enough because page blob has been tested only in context of 
HBase write ahead logs. They have not been broadly certified for use across the 
HDP stack, hence is not recommended for use as a general purpose file-system 
via Hadoop cluster


> Change WASB documentation regarding page blob support
> -
>
> Key: HADOOP-12853
> URL: https://issues.apache.org/jira/browse/HADOOP-12853
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Reporter: madhumita chakraborty
>Assignee: madhumita chakraborty
>Priority: Minor
> Attachments: HADOOP-12853.001.patch
>
>
> WASB page blob support documentation in Features list at the top:
> Supports both page blobs (suitable for most use cases, such as MapReduce) and 
> block blobs (suitable for continuous write use cases, such as an HBase 
> write-ahead log).
> This is actually backwards: block blobs are better for typical big data 
> use cases, and page blobs were implemented to support the HBase WAL.
> It is also mentioned that
> "Page blobs can be used for other purposes beyond just HBase log files 
> though."
> This is not strong enough: page blobs have been tested only in the context 
> of HBase write-ahead logs. They have not been broadly certified for use 
> across the Hadoop ecosystem, and hence are not recommended as a 
> general-purpose file system for a Hadoop cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12861) RPC client fails too quickly when server connection limit is reached

2016-03-01 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15174321#comment-15174321
 ] 

Daryn Sharp commented on HADOOP-12861:
--

Sure would be nice to have HADOOP-10940 integrated so I don't have to make a 
different patch...

> RPC client fails too quickly when server connection limit is reached
> 
>
> Key: HADOOP-12861
> URL: https://issues.apache.org/jira/browse/HADOOP-12861
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.7.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>
> The NN's RPC server immediately closes new client connections when a 
> connection limit is reached. The client rapidly retries a small number of 
> times with no delay, which causes clients to fail quickly. If the connection 
> is refused or timed out, the connection retry policy retries with backoff. 
> Clients should treat a reset connection as a connection failure so that the 
> connection retry policy is used.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12815) TestS3ContractRootDir#testRmEmptyRootDirNonRecursive and TestS3ContractRootDir#testRmRootRecursive fail on branch-2.

2016-03-01 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15174320#comment-15174320
 ] 

Chris Nauroth commented on HADOOP-12815:


Hello [~mattpaduano].  Thank you for your diligence tracking down the history 
of these test failures.  Unfortunately, the attached patch is more change in S3 
than I'd like to commit, considering the poor recent track record of the S3 
code and the plan to deprecate it in HADOOP-12709.  Unless [~ste...@apache.org] 
has a different opinion or some further background about why HADOOP-9258 wasn't 
committed to branch-2, I'd instead propose we simply delete the failing 
tests (essentially declaring defeat) as a small step towards the 
deprecation/removal.

> TestS3ContractRootDir#testRmEmptyRootDirNonRecursive and 
> TestS3ContractRootDir#testRmRootRecursive fail on branch-2.
> 
>
> Key: HADOOP-12815
> URL: https://issues.apache.org/jira/browse/HADOOP-12815
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Chris Nauroth
>Assignee: Matthew Paduano
> Attachments: HADOOP-12815.branch-2.01.patch
>
>
> TestS3ContractRootDir#testRmEmptyRootDirNonRecursive and 
> TestS3ContractRootDir#testRmRootRecursive fail on branch-2.  The tests pass 
> on trunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12861) RPC client fails too quickly when server connection limit is reached

2016-03-01 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15174305#comment-15174305
 ] 

Kihwal Lee commented on HADOOP-12861:
-

Due to the way the IPC Client does SASL negotiation, once a connection is 
established, an immediately following connection reset is handled by 
{{handleSaslConnectionFailure()}}.  This will throw an {{IOException}}, which 
{{FailoverOnNetworkExceptionRetry.shouldRetry}} prescribes 
{{RetryAction.FAILOVER_AND_RETRY}} for, without any delay. The client can burn 
through retries rather quickly and end up with a permanent failure.
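
A self-contained sketch of the direction described above (all names 
hypothetical, not the actual patch): re-classify a mid-negotiation reset as a 
connect-style failure so the backoff-aware retry path applies.

{noformat}
import java.io.IOException;
import java.net.ConnectException;

public class ResetAsConnectFailure {
  // Hypothetical sketch: wrap a "connection reset" seen during SASL
  // negotiation as a ConnectException, so the retry policy treats it
  // like a refused/timed-out connection and backs off.
  static IOException classify(IOException e) {
    String msg = String.valueOf(e.getMessage());
    if (msg.contains("Connection reset")) {
      ConnectException ce = new ConnectException(msg);
      ce.initCause(e);
      return ce;
    }
    return e;
  }

  public static void main(String[] args) {
    IOException raw = new IOException("Connection reset by peer");
    // Prints ConnectException, the type the backoff policy retries.
    System.out.println(classify(raw).getClass().getSimpleName());
  }
}
{noformat}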

> RPC client fails too quickly when server connection limit is reached
> 
>
> Key: HADOOP-12861
> URL: https://issues.apache.org/jira/browse/HADOOP-12861
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.7.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>
> The NN's RPC server immediately closes new client connections when a 
> connection limit is reached. The client rapidly retries a small number of 
> times with no delay, which causes clients to fail quickly. If the connection 
> is refused or timed out, the connection retry policy retries with backoff. 
> Clients should treat a reset connection as a connection failure so that the 
> connection retry policy is used.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12803) RunJar should allow overriding the manifest Main-Class via a cli parameter.

2016-03-01 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15174289#comment-15174289
 ] 

Jason Lowe commented on HADOOP-12803:
-

The main issue with using a new parameter is differentiating it from the 
arbitrary program arguments.  We'd have to name it such that there is as close 
to zero chance as possible it would collide with any other parameter that could 
be in use.  Also there's the issue of positioning of the parameter -- we'd have 
to make it clear it's only supported in a very specific place in the cmdline or 
support scanning the arbitrary arguments for it and removing it once found.

How would the conf property work in practice?  Kind of annoying to use if it 
cannot be specified on the command line, and if we support specifying it on the 
command line then we're basically back at the first option of providing a new 
parameter.

Another option is to provide a utility jar that in turn takes another jar and 
main class as arguments.  In other words, instead of:
{noformat}
hadoop jar myjar.jar themainclass and some custom args here
{noformat}
we could do something like this:
{noformat}
hadoop jar $HADOOP_PREFIX/share/hadoop/tools/hadoop-runjar-shim.jar myjar.jar 
themainclass and some custom args here
{noformat}
and we could further simplify usage by adding a new subcommand that selects it 
automatically, e.g.: (note "runjar" instead of "jar")
{noformat}
hadoop runjar myjar.jar themainclass and some custom args here
{noformat}

Not thrilled with having two different subcommands to run jars, but it does 
have the advantage of eliminating any potential of argument collision and would 
only have to be used in the case of needing to override a jar's 
manifest-specified main class.
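
By way of example, a minimal sketch of such a shim (hypothetical; no such 
Hadoop artifact exists today). Its manifest {{Main-Class}} would take the real 
jar and main class as the two leading arguments:

{noformat}
import java.io.File;
import java.lang.reflect.Method;
import java.net.URL;
import java.net.URLClassLoader;
import java.util.Arrays;

public class RunJarShim {
  public static void main(String[] args) throws Exception {
    // args[0] = user jar, args[1] = main class, the rest pass through.
    URL jar = new File(args[0]).toURI().toURL();
    try (URLClassLoader loader = new URLClassLoader(
        new URL[] { jar }, RunJarShim.class.getClassLoader())) {
      Class<?> mainClass = Class.forName(args[1], true, loader);
      Method main = mainClass.getMethod("main", String[].class);
      String[] rest = Arrays.copyOfRange(args, 2, args.length);
      main.invoke(null, (Object) rest);  // cast keeps varargs as one arg
    }
  }
}
{noformat}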


> RunJar should allow overriding the manifest Main-Class via a cli parameter.
> ---
>
> Key: HADOOP-12803
> URL: https://issues.apache.org/jira/browse/HADOOP-12803
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.6.4
>Reporter: Gera Shegalov
>Assignee: Gera Shegalov
> Attachments: HADOOP-12803.001.patch
>
>
> Currently there is no way to override the main class in the manifest even 
> though main class can be passed as a parameter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12862) LDAP Group Mapping over SSL can not specify trust store

2016-03-01 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HADOOP-12862:


 Summary: LDAP Group Mapping over SSL can not specify trust store
 Key: HADOOP-12862
 URL: https://issues.apache.org/jira/browse/HADOOP-12862
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Wei-Chiu Chuang


In a secure environment, SSL is used to encrypt LDAP requests for group mapping 
resolution.
We (+[~yoderme], +[~tgrayson]) have found that its implementation is strange.

For information: the Hadoop name node, as an LDAP client, talks to an LDAP 
server to resolve the group mapping of a user. In the case of LDAP over SSL, a 
typical scenario is to establish one-way authentication (the client verifies 
that the server's certificate is real) by storing the server's certificate in 
the client's truststore.

A rarer scenario is two-way authentication: in addition to the client using a 
truststore to verify the server, the server also verifies that the client's 
certificate is real, and the client stores its own certificate in its 
keystore.

However, the current implementation of LDAP over SSL does not seem to be 
correct, in that it only configures a keystore but no truststore (so the LDAP 
server can verify Hadoop's certificate, but Hadoop may not be able to verify 
the LDAP server's certificate).

I think there should be an extra pair of properties to specify the 
truststore/password for the LDAP server, and they should be used to configure 
the system properties 
{{javax.net.ssl.trustStore}}/{{javax.net.ssl.trustStorePassword}}.

I am a security layman so my words can be imprecise. But I hope this makes 
sense.

Oracle's SSL LDAP documentation: 
http://docs.oracle.com/javase/jndi/tutorial/ldap/security/ssl.html
JSSE reference guide: 
http://docs.oracle.com/javase/7/docs/technotes/guides/security/jsse/JSSERefGuide.html
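
A minimal JNDI sketch of the one-way TLS scenario (truststore path, password, 
and server URL are placeholders):

{noformat}
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.NamingException;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;

public class LdapsTruststoreSketch {
  public static void main(String[] args) throws NamingException {
    // One-way TLS: the client only needs a truststore that holds the
    // LDAP server's certificate.
    System.setProperty("javax.net.ssl.trustStore",
        "/etc/hadoop/ldap-truststore.jks");
    System.setProperty("javax.net.ssl.trustStorePassword", "changeit");

    Hashtable<String, String> env = new Hashtable<String, String>();
    env.put(Context.INITIAL_CONTEXT_FACTORY,
        "com.sun.jndi.ldap.LdapCtxFactory");
    env.put(Context.PROVIDER_URL, "ldaps://ldap.example.com:636");
    DirContext ctx = new InitialDirContext(env);  // TLS handshake here
    ctx.close();
  }
}
{noformat}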



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12861) RPC client fails too quickly when server connection limit is reached

2016-03-01 Thread Daryn Sharp (JIRA)
Daryn Sharp created HADOOP-12861:


 Summary: RPC client fails too quickly when server connection limit 
is reached
 Key: HADOOP-12861
 URL: https://issues.apache.org/jira/browse/HADOOP-12861
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 2.7.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp


The NN's RPC server immediately closes new client connections when a 
connection limit is reached. The client rapidly retries a small number of 
times with no delay, which causes clients to fail quickly. If the connection 
is refused or timed out, the connection retry policy retries with backoff. 
Clients should treat a reset connection as a connection failure so that the 
connection retry policy is used.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12803) RunJar should allow overriding the manifest Main-Class via a cli parameter.

2016-03-01 Thread Gera Shegalov (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15174102#comment-15174102
 ] 

Gera Shegalov commented on HADOOP-12803:


I'd prefer the latter. What do people think?

> RunJar should allow overriding the manifest Main-Class via a cli parameter.
> ---
>
> Key: HADOOP-12803
> URL: https://issues.apache.org/jira/browse/HADOOP-12803
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.6.4
>Reporter: Gera Shegalov
>Assignee: Gera Shegalov
> Attachments: HADOOP-12803.001.patch
>
>
> Currently there is no way to override the main class in the manifest even 
> though main class can be passed as a parameter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12803) RunJar should allow overriding the manifest Main-Class via a cli parameter.

2016-03-01 Thread Gera Shegalov (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15174101#comment-15174101
 ] 

Gera Shegalov commented on HADOOP-12803:


Thanks [~jlowe]! Yes, we should not break the program driver use case. I see 
two options from here: either add a new parameter, or introduce an opt-in conf 
along the lines of hadoop.util.runjar.main-class=


> RunJar should allow overriding the manifest Main-Class via a cli parameter.
> ---
>
> Key: HADOOP-12803
> URL: https://issues.apache.org/jira/browse/HADOOP-12803
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.6.4
>Reporter: Gera Shegalov
>Assignee: Gera Shegalov
> Attachments: HADOOP-12803.001.patch
>
>
> Currently there is no way to override the main class in the manifest even 
> though main class can be passed as a parameter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12845) Improve Openssl library finding on RedHat system

2016-03-01 Thread Sebastien Barrier (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15174033#comment-15174033
 ] 

Sebastien Barrier commented on HADOOP-12845:


Then the user should be able to link it to the desired openssl library name. I 
tried "-Drequire.openssl -Dopenssl.lib=/usr/lib64/libcrypto.so.10" for example, 
but compilation failed. Is anything else needed to get it working?

> Improve Openssl library finding on RedHat system
> 
>
> Key: HADOOP-12845
> URL: https://issues.apache.org/jira/browse/HADOOP-12845
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.2
>Reporter: Sebastien Barrier
>Priority: Minor
>
> The issue is related to [https://issues.apache.org/jira/browse/HADOOP-11216].
> In the BUILDING.txt it's specified "Use -Drequire.openssl to fail the build 
> if libcrypto.so is not found".
> On RedHat systems (Fedora/CentOS/...), /usr/lib64/libcrypto.so is a link 
> provided by the openssl-devel RPM package, which is fine on a 
> build/development host, but devel packages are not supposed to be installed 
> on production servers (the Hadoop cluster), and the openssl RPM package 
> doesn't include that link, which is a problem.
> # hadoop checknative -a
> ...
> openssl: false Cannot load libcrypto.so (libcrypto.so: cannot open shared 
> object file: No such file or directory)!
> There's only /usr/lib64/libcrypto.so.10 but no /usr/lib64/libcrypto.so
> Also trying to compile with "-Drequire.openssl 
> -Dopenssl.lib=/usr/lib64/libcrypto.so.10" failed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12845) Improve Openssl library finding on RedHat system

2016-03-01 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15174021#comment-15174021
 ] 

Colin Patrick McCabe commented on HADOOP-12845:
---

There was a long discussion about this on HADOOP-11216.  Basically, we don't 
want to have to build custom packages for each minor release of each Linux 
distribution.  But, on the other hand, there is no standardized naming scheme 
for openssl... some distros have libcrypto.so.10, some have libcrypto.so.1.0.0, 
some have libcrypto.so.1.0.0e.  That's why we settled on just linking against 
the no-extension version (i.e. the devel version).

We could potentially have it check for whatever full library name it found 
during building, in addition to checking for the no-extension version.
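
The fallback idea, sketched in Java for brevity (Hadoop's real check lives in 
native JNI code; the paths here are only examples):

{noformat}
public class LibCryptoProbe {
  public static void main(String[] args) {
    String[] candidates = {
        "/usr/lib64/libcrypto.so.10",  // full soname seen at build time
        "/usr/lib64/libcrypto.so"      // devel symlink, if installed
    };
    for (String path : candidates) {
      try {
        System.load(path);  // absolute path, no platform name mangling
        System.out.println("loaded " + path);
        return;
      } catch (UnsatisfiedLinkError e) {
        // fall through and try the next candidate
      }
    }
    System.out.println("no libcrypto found");
  }
}
{noformat}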

> Improve Openssl library finding on RedHat system
> 
>
> Key: HADOOP-12845
> URL: https://issues.apache.org/jira/browse/HADOOP-12845
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.2
>Reporter: Sebastien Barrier
>Priority: Minor
>
> The issue is related to [https://issues.apache.org/jira/browse/HADOOP-11216].
> In the BUILDING.txt it's specified "Use -Drequire.openssl to fail the build 
> if libcrypto.so is not found".
> On RedHat systems (Fedora/CentOS/...), /usr/lib64/libcrypto.so is a link 
> provided by the openssl-devel RPM package, which is fine on a 
> build/development host, but devel packages are not supposed to be installed 
> on production servers (the Hadoop cluster), and the openssl RPM package 
> doesn't include that link, which is a problem.
> # hadoop checknative -a
> ...
> openssl: false Cannot load libcrypto.so (libcrypto.so: cannot open shared 
> object file: No such file or directory)!
> There's only /usr/lib64/libcrypto.so.10 but no /usr/lib64/libcrypto.so
> Also trying to compile with "-Drequire.openssl 
> -Dopenssl.lib=/usr/lib64/libcrypto.so.10" failed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12860) Expand section "Data Encryption on HTTP" in SecureMode documentation

2016-03-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15174006#comment-15174006
 ] 

Hadoop QA commented on HADOOP-12860:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 11s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 6s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
19s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 10m 28s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12790745/HADOOP-12860.001.patch
 |
| JIRA Issue | HADOOP-12860 |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux 095d6f52cfa4 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 44d9bac |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/8756/console |
| Powered by | Apache Yetus 0.3.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Expand section "Data Encryption on HTTP" in SecureMode documentation
> 
>
> Key: HADOOP-12860
> URL: https://issues.apache.org/jira/browse/HADOOP-12860
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.2
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
>  Labels: documentation
> Attachments: HADOOP-12860.001.patch
>
>
> Section {{Data Encryption on HTTP}} in _Hadoop in Secure Mode_ should be 
> expanded to talk about the configuration needed to enable SSL for the web 
> UIs of the HDFS/YARN/MapReduce daemons.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12803) RunJar should allow overriding the manifest Main-Class via a cli parameter.

2016-03-01 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15174000#comment-15174000
 ] 

Jason Lowe commented on HADOOP-12803:
-

Thanks for the patch, Gera!   Unfortunately this breaks backwards 
compatibility.  For example, http://wiki.apache.org/hadoop/WordCount references 
this sample command-line:
{quote}
bin/hadoop jar hadoop-*-examples.jar wordcount [-m <#maps>] [-r <#reducers>] 
  
{quote}

With the proposed change it results in this error:
{noformat}
Exception in thread "main" java.lang.ClassNotFoundException: wordcount
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:270)
at org.apache.hadoop.util.RunJar.run(RunJar.java:231)
at org.apache.hadoop.util.RunJar.main(RunJar.java:139)
{noformat}

I'm not sure how we're going to differentiate a main class from an arbitrary 
first argument to pass on.  Previously it used the presence of a main class in 
the manifest to know whether the first argument is a main class or the first 
argument to the program being launched.  Not necessarily how I would have done 
it, but that's how it works and users have built things around those semantics.


> RunJar should allow overriding the manifest Main-Class via a cli parameter.
> ---
>
> Key: HADOOP-12803
> URL: https://issues.apache.org/jira/browse/HADOOP-12803
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.6.4
>Reporter: Gera Shegalov
>Assignee: Gera Shegalov
> Attachments: HADOOP-12803.001.patch
>
>
> Currently there is no way to override the main class in the manifest even 
> though main class can be passed as a parameter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12860) Expand section "Data Encryption on HTTP" in SecureMode documentation

2016-03-01 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12860:
-
Attachment: HADOOP-12860.001.patch

Expanded the section a bit, and also mentioned the YARN RM and NM https port 
configuration for completeness.

I have verified the config works on a CDH Hadoop cluster.
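
For reference, the kind of properties such a section covers (a sketch only; 
exact names and values should be checked against the shipped defaults, and the 
keystores themselves are configured in {{ssl-server.xml}}):

{noformat}
<!-- hdfs-site.xml: serve the HDFS daemon web UIs over HTTPS only -->
<property>
  <name>dfs.http.policy</name>
  <value>HTTPS_ONLY</value>
</property>

<!-- yarn-site.xml: same policy for the RM and NM web UIs -->
<property>
  <name>yarn.http.policy</name>
  <value>HTTPS_ONLY</value>
</property>

<!-- mapred-site.xml: and for the job history server -->
<property>
  <name>mapreduce.jobhistory.http.policy</name>
  <value>HTTPS_ONLY</value>
</property>
{noformat}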

> Expand section "Data Encryption on HTTP" in SecureMode documentation
> 
>
> Key: HADOOP-12860
> URL: https://issues.apache.org/jira/browse/HADOOP-12860
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.2
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
>  Labels: documentation
> Attachments: HADOOP-12860.001.patch
>
>
> Section {{Data Encryption on HTTP}} in _Hadoop in Secure Mode_ should be 
> expanded to talk about the configuration needed to enable SSL for the web 
> UIs of the HDFS/YARN/MapReduce daemons.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12860) Expand section "Data Encryption on HTTP" in SecureMode documentation

2016-03-01 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12860:
-
Status: Patch Available  (was: Open)

> Expand section "Data Encryption on HTTP" in SecureMode documentation
> 
>
> Key: HADOOP-12860
> URL: https://issues.apache.org/jira/browse/HADOOP-12860
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.2
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
> Attachments: HADOOP-12860.001.patch
>
>
> Section {{Data Encryption on HTTP}} in _Hadoop in Secure Mode_ should be 
> expanded to talk about the configuration needed to enable SSL for the web 
> UIs of the HDFS/YARN/MapReduce daemons.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12860) Expand section "Data Encryption on HTTP" in SecureMode documentation

2016-03-01 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12860:
-
Labels: documentation  (was: )

> Expand section "Data Encryption on HTTP" in SecureMode documentation
> 
>
> Key: HADOOP-12860
> URL: https://issues.apache.org/jira/browse/HADOOP-12860
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.2
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
>  Labels: documentation
> Attachments: HADOOP-12860.001.patch
>
>
> Section {{Data Encryption on HTTP}} in _Hadoop in Secure Mode_ should be 
> expanded to talk about the configuration needed to enable SSL for the web 
> UIs of the HDFS/YARN/MapReduce daemons.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12860) Expand section "Data Encryption on HTTP" in SecureMode documentation

2016-03-01 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HADOOP-12860:


 Summary: Expand section "Data Encryption on HTTP" in SecureMode 
documentation
 Key: HADOOP-12860
 URL: https://issues.apache.org/jira/browse/HADOOP-12860
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.7.2
Reporter: Wei-Chiu Chuang
Assignee: Wei-Chiu Chuang
Priority: Minor


Section {{Data Encryption on HTTP}} in _Hadoop in Secure Mode_ should be 
expanded to talk about the configuration needed to enable SSL for the web UIs 
of the HDFS/YARN/MapReduce daemons.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12843) Fix findbugs warnings in hadoop-common (branch-2)

2016-03-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15173907#comment-15173907
 ] 

Hudson commented on HADOOP-12843:
-

FAILURE: Integrated in Hadoop-trunk-Commit #9401 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9401/])
Add HADOOP-12843 to 2.8.0 in CHANGES.txt. (aajisaka: rev 
44d9bac1f58ae64b8a62a197df790e63ec912a72)
* hadoop-common-project/hadoop-common/CHANGES.txt


> Fix findbugs warnings in hadoop-common (branch-2)
> -
>
> Key: HADOOP-12843
> URL: https://issues.apache.org/jira/browse/HADOOP-12843
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
>  Labels: newbie
> Fix For: 2.8.0
>
> Attachments: HADOOP-12843.branch-2.01.patch, 
> HADOOP-12843.branch-2.02.patch, findbugsHtml.html
>
>
> There are 5 findbugs warnings in branch-2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12821) Change "Auth successful" audit log level from info to debug

2016-03-01 Thread Yong Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yong Zhang updated HADOOP-12821:

Resolution: Invalid
Status: Resolved  (was: Patch Available)

Closing this, because the audit log level is configurable.

> Change "Auth successful" audit log level from info to debug
> ---
>
> Key: HADOOP-12821
> URL: https://issues.apache.org/jira/browse/HADOOP-12821
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Yong Zhang
>Assignee: Yong Zhang
>Priority: Minor
> Attachments: HADOOP-12821.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12843) Fix findbugs warnings in hadoop-common (branch-2)

2016-03-01 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-12843:
---
   Resolution: Fixed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Committed this to branch-2 and branch-2.8. Thanks [~ste...@apache.org] for your 
review!

> Fix findbugs warnings in hadoop-common (branch-2)
> -
>
> Key: HADOOP-12843
> URL: https://issues.apache.org/jira/browse/HADOOP-12843
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
>  Labels: newbie
> Fix For: 2.8.0
>
> Attachments: HADOOP-12843.branch-2.01.patch, 
> HADOOP-12843.branch-2.02.patch, findbugsHtml.html
>
>
> There are 5 findbugs warnings in branch-2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12847) hadoop daemonlog should support https and SPNEGO for Kerberized cluster

2016-03-01 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12847:
-
Attachment: HADOOP-12847.002.patch

Rev02: this revision adds an option to select HTTP or HTTPS and test cases.
* I basically rewrote the client side to make it more complete. The client side 
SSL config (keystore/truststore) comes from Hadoop configuration files.
* Rewrote the test cases to validate client command options, and run tests 
using both HTTP and HTTPS.
* Updated docs to reflect the additional command line parameter.
* The server side is unchanged.

I did not add tests for SPNEGO authentication, and this code does not 
automatically read a Kerberos keytab to log in, as that is way too complex and 
beyond my current understanding of Hadoop. The user of this code has to run 
{{kinit}} before running this command.

> hadoop daemonlog should support https and SPNEGO for Kerberized cluster
> ---
>
> Key: HADOOP-12847
> URL: https://issues.apache.org/jira/browse/HADOOP-12847
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-12847.001.patch, HADOOP-12847.002.patch
>
>
> {{hadoop daemonlog}} is a simple, yet useful tool for debugging.
> However, it does not support https, nor does it support a Kerberized Hadoop 
> cluster.
> Using {{AuthenticatedURL}}, it will be able to support SPNEGO negotiation 
> with a Kerberized name node web ui. It will also fall back to simple 
> authentication if the cluster is not Kerberized.
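
A minimal sketch of that request path using {{AuthenticatedURL}} (host, port, 
and logger name are placeholders; on a Kerberized cluster, run {{kinit}} 
first):

{noformat}
import java.net.HttpURLConnection;
import java.net.URL;
import org.apache.hadoop.security.authentication.client.AuthenticatedURL;

public class DaemonLogSketch {
  public static void main(String[] args) throws Exception {
    // The token carries the negotiated SPNEGO credentials across calls.
    AuthenticatedURL.Token token = new AuthenticatedURL.Token();
    URL url = new URL("https://nn.example.com:50470/logLevel"
        + "?log=org.apache.hadoop.hdfs.server.namenode.NameNode&level=DEBUG");
    HttpURLConnection conn = new AuthenticatedURL().openConnection(url, token);
    System.out.println(conn.getResponseCode() + " "
        + conn.getResponseMessage());
  }
}
{noformat}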



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12856) FileUtil.checkDest() and RawLocalFileSystem.mkdirs() to throw stricter IOEs; RawLocalFS contract tests to verify

2016-03-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15173710#comment-15173710
 ] 

Hadoop QA commented on HADOOP-12856:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
42s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 2s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 44s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 4s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
35s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 56s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 56s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 37s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 37s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 6m 53s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_72. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 11s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 58m 54s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12790711/HADOOP-12856-002.patch
 |
| JIRA Issue | HADOOP-12856 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux dc4a2dbb5274 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (HADOOP-12851) S3AFileSystem Uptake of ProviderUtils.excludeIncompatibleCredentialProviders

2016-03-01 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15173680#comment-15173680
 ] 

Larry McCay commented on HADOOP-12851:
--

Thanks, [~cnauroth]!

> S3AFileSystem Uptake of ProviderUtils.excludeIncompatibleCredentialProviders
> 
>
> Key: HADOOP-12851
> URL: https://issues.apache.org/jira/browse/HADOOP-12851
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Reporter: Larry McCay
>Assignee: Larry McCay
> Fix For: 2.8.0
>
> Attachments: HADOOP-12851-001.patch, HADOOP-12851-002.patch, 
> HADOOP-12851-003.patch, HADOOP-12851-004.patch
>
>
> HADOOP-12846 introduced the ability for FileSystem based integration points 
> of credential providers to eliminate the threat of a recursive infinite loop 
> due to a provider in the same filesystem being configured.
> It was WASB has already uptaken its use in HADOOP-12846 and this patch will 
> add it to the S3A integration point as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12843) Fix findbugs warnings in hadoop-common (branch-2)

2016-03-01 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15173655#comment-15173655
 ] 

Steve Loughran commented on HADOOP-12843:
-

LGTM
+1

> Fix findbugs warnings in hadoop-common (branch-2)
> -
>
> Key: HADOOP-12843
> URL: https://issues.apache.org/jira/browse/HADOOP-12843
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
>  Labels: newbie
> Attachments: HADOOP-12843.branch-2.01.patch, 
> HADOOP-12843.branch-2.02.patch, findbugsHtml.html
>
>
> There are 5 findbugs warnings in branch-2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12856) FileUtil.checkDest() and RawLocalFileSystem.mkdirs() to throw stricter IOEs; RawLocalFS contract tests to verify

2016-03-01 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12856:

Assignee: Steve Loughran
  Status: Patch Available  (was: Open)

> FileUtil.checkDest() and RawLocalFileSystem.mkdirs() to throw stricter IOEs; 
> RawLocalFS contract tests to verify
> 
>
> Key: HADOOP-12856
> URL: https://issues.apache.org/jira/browse/HADOOP-12856
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-12856-001.patch, HADOOP-12856-002.patch
>
>
> {{FileUtil.checkDest()}} throws IOEs when its conditions are not met, with 
> meaningful text.
> The code could be improved if it were to throw {{PathExistsException}} and 
> {{PathIsDirectoryException}} when appropriate
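
A minimal sketch of the stricter check (hypothetical signature, not the actual 
patch):

{noformat}
import java.io.IOException;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.PathExistsException;
import org.apache.hadoop.fs.PathIsDirectoryException;

public class CheckDestSketch {
  // Raise the specific Path*Exception subclasses instead of a plain
  // IOException that only differs in its message text.
  static void checkDest(FileSystem dstFS, Path dst, boolean overwrite)
      throws IOException {
    if (dstFS.exists(dst)) {
      FileStatus status = dstFS.getFileStatus(dst);
      if (status.isDirectory()) {
        throw new PathIsDirectoryException(dst.toString());
      }
      if (!overwrite) {
        throw new PathExistsException(dst.toString());
      }
    }
  }
}
{noformat}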



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12856) FileUtil.checkDest() and RawLocalFileSystem.mkdirs() to throw stricter IOEs; RawLocalFS contract tests to verify

2016-03-01 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12856:

Attachment: HADOOP-12856-002.patch

HADOOP-12856 patch 002: addresses checkstyle issues and sets strict=false in 
the contract tests.

> FileUtil.checkDest() and RawLocalFileSystem.mkdirs() to throw stricter IOEs; 
> RawLocalFS contract tests to verify
> 
>
> Key: HADOOP-12856
> URL: https://issues.apache.org/jira/browse/HADOOP-12856
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
> Attachments: HADOOP-12856-001.patch, HADOOP-12856-002.patch
>
>
> {{FileUtil.checkDest()}} throws IOEs when its conditions are not met, with 
> meaningful text.
> The code could be improved if it were to throw {{PathExistsException}} and 
> {{PathIsDirectoryException}} when appropriate



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12856) FileUtil.checkDest() and RawLocalFileSystem.mkdirs() to throw stricter IOEs; RawLocalFS contract tests to verify

2016-03-01 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15173651#comment-15173651
 ] 

Steve Loughran commented on HADOOP-12856:
-

Tests failed because this patch actually had strict=true in the tests; 
RawLocalFileSystem throws FileNotFoundException on attempts to overwrite a 
directory, which is not the expected exception.

> FileUtil.checkDest() and RawLocalFileSystem.mkdirs() to throw stricter IOEs; 
> RawLocalFS contract tests to verify
> 
>
> Key: HADOOP-12856
> URL: https://issues.apache.org/jira/browse/HADOOP-12856
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
> Attachments: HADOOP-12856-001.patch
>
>
> {{FileUtil.checkDest()}} throws IOEs when its conditions are not met, with 
> meaningful text.
> The code could be improved if it were to throw {{PathExistsException}} and 
> {{PathIsDirectoryException}} when appropriate



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12856) FileUtil.checkDest() and RawLocalFileSystem.mkdirs() to throw stricter IOEs; RawLocalFS contract tests to verify

2016-03-01 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12856:

Status: Open  (was: Patch Available)

> FileUtil.checkDest() and RawLocalFileSystem.mkdirs() to throw stricter IOEs; 
> RawLocalFS contract tests to verify
> 
>
> Key: HADOOP-12856
> URL: https://issues.apache.org/jira/browse/HADOOP-12856
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
> Attachments: HADOOP-12856-001.patch
>
>
> {{FileUtil.checkDest()}} throws IOEs when its conditions are not met, with 
> meaningful text.
> The code could be improved if it were to throw {{PathExistsException}} and 
> {{PathIsDirectoryException}} when appropriate



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)