[jira] [Created] (HADOOP-12837) FileStatus.getModificationTime not working on S3

2016-02-24 Thread Jagdish Kewat (JIRA)
Jagdish Kewat created HADOOP-12837:
--

 Summary: FileStatus.getModificationTime not working on S3
 Key: HADOOP-12837
 URL: https://issues.apache.org/jira/browse/HADOOP-12837
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Jagdish Kewat


Hi Team,

We have observed an issue with the FileStatus.getModificationTime() API on S3 
filesystem. The method always returns 0.

I googled for this, however I couldn't find any solution that would fit in my 
scheme of things. S3FileStatus seems to be an option; however, I would be 
using this API on both HDFS and S3, so I can't go with it.

I tried to run the job on:
* Release label: emr-4.2.0
* Hadoop distribution: Amazon 2.6.0
* Hadoop Common jar: hadoop-common-2.6.0.jar

Please advise if any patch or fix is available for this.

Thanks,
Jagdish
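
For reference, the filesystem-neutral call pattern in question looks like this 
(a minimal sketch; the URI and path are illustrative, and the same code is 
meant to run against both HDFS and S3):

{code}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

Configuration conf = new Configuration();
// Works identically for hdfs:// and s3n:// URIs; only the scheme differs.
FileSystem fs = FileSystem.get(URI.create("s3n://bucket/dir"), conf);
FileStatus st = fs.getFileStatus(new Path("s3n://bucket/dir/file"));
// On HDFS this returns the real mtime; the report is that on S3 it is 0.
long mtime = st.getModificationTime();
{code}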





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12827) WebHdfs socket timeouts should be configurable

2016-02-24 Thread Austin Donnelly (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Austin Donnelly updated HADOOP-12827:
-
Status: In Progress  (was: Patch Available)

> WebHdfs socket timeouts should be configurable
> --
>
> Key: HADOOP-12827
> URL: https://issues.apache.org/jira/browse/HADOOP-12827
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
> Environment: all
>Reporter: Austin Donnelly
>Assignee: Austin Donnelly
>  Labels: easyfix, newbie
> Attachments: HADOOP-12827.001.patch, HADOOP-12827.002.patch
>
>   Original Estimate: 0h
>  Remaining Estimate: 0h
>
> WebHdfs client connections use sockets with fixed timeouts of 60 seconds to 
> connect, and 60 seconds for reads.
> This is a problem because I am trying to use WebHdfs to access an archive 
> storage system which can take minutes to hours to return the requested data 
> over WebHdfs.
> The fix is to add new configuration file options to allow these 60s defaults 
> to be customised in hdfs-site.xml.
> If the new configuration options are not present, the behavior is unchanged 
> from before.
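
A hedged sketch of what the new knobs could look like in hdfs-site.xml (the 
property names and units here are illustrative; the attached patches define 
the actual keys):

{code}
<!-- hdfs-site.xml: illustrative property names; see the attached patch
     for the keys it actually introduces. Values here are milliseconds. -->
<property>
  <name>dfs.webhdfs.socket.connect-timeout</name>
  <value>60000</value>
</property>
<property>
  <name>dfs.webhdfs.socket.read-timeout</name>
  <!-- e.g. 2 hours, for archive systems that are slow to return data -->
  <value>7200000</value>
</property>
{code}

If neither property is set, the existing 60-second defaults would apply, 
preserving the unchanged-behavior guarantee described above.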





[jira] [Updated] (HADOOP-12827) WebHdfs socket timeouts should be configurable

2016-02-24 Thread Austin Donnelly (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Austin Donnelly updated HADOOP-12827:
-
Status: Patch Available  (was: In Progress)






[jira] [Commented] (HADOOP-9613) [JDK8] Update jersey version to latest 1.x release

2016-02-24 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15162849#comment-15162849
 ] 

Tsuyoshi Ozawa commented on HADOOP-9613:


The failures of TestReloadingX509TrustManager and TestHttpServerLifecycle are 
also intermittent.

I also confirmed that all tests pass on my local machine.
{code}
mvn test 
-Dtest=TestAMAuthorization,TestClientRMTokens,TestGetGroups,TestYarnCLI,TestAMRMClient,TestYarnClient,TestNMClient,TestReloadingX509TrustManager,TestHttpServerLifecycle
{code}

[~ste...@apache.org] Please check the comment 
[here|https://issues.apache.org/jira/browse/HADOOP-9613?focusedCommentId=15158876&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15158876].
 Do you have additional comments? Or can I commit the patch to the trunk?

> [JDK8] Update jersey version to latest 1.x release
> --
>
> Key: HADOOP-9613
> URL: https://issues.apache.org/jira/browse/HADOOP-9613
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 2.4.0, 3.0.0
>Reporter: Timothy St. Clair
>Assignee: Tsuyoshi Ozawa
>  Labels: maven
> Attachments: HADOOP-2.2.0-9613.patch, 
> HADOOP-9613.004.incompatible.patch, HADOOP-9613.005.incompatible.patch, 
> HADOOP-9613.006.incompatible.patch, HADOOP-9613.007.incompatible.patch, 
> HADOOP-9613.008.incompatible.patch, HADOOP-9613.009.incompatible.patch, 
> HADOOP-9613.010.incompatible.patch, HADOOP-9613.011.incompatible.patch, 
> HADOOP-9613.012.incompatible.patch, HADOOP-9613.013.incompatible.patch, 
> HADOOP-9613.1.patch, HADOOP-9613.2.patch, HADOOP-9613.3.patch, 
> HADOOP-9613.patch
>
>
> Update the pom.xml dependencies exposed when running an mvn-rpmbuild against 
> system dependencies on Fedora 18.
> The existing version is 1.8, which is quite old.
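
Moving to the latest 1.x line is a pom-level change; a hedged sketch of the 
dependency coordinates (1.19 is shown only as an illustrative recent 1.x 
release; the patch pins the exact version):

{code}
<!-- Jersey 1.x lives under the com.sun.jersey groupId. -->
<dependency>
  <groupId>com.sun.jersey</groupId>
  <artifactId>jersey-server</artifactId>
  <version>1.19</version> <!-- illustrative; use the version the patch pins -->
</dependency>
{code}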





[jira] [Updated] (HADOOP-12816) Log cipher suite negotiation more verbosely

2016-02-24 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12816:
-
Status: Patch Available  (was: Open)

Submitting the patch for testing.

> Log cipher suite negotiation more verbosely
> ---
>
> Key: HADOOP-12816
> URL: https://issues.apache.org/jira/browse/HADOOP-12816
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>  Labels: encryption, supportability
> Attachments: HADOOP-12816.001.patch
>
>
> We've had difficulty probing the root cause of a performance slowdown with 
> in-transit encryption using AES-NI. We finally found that the root cause was 
> that the Hadoop client did not configure the encryption properties correctly, 
> so it did not negotiate the AES cipher suite when creating an encrypted 
> stream pair, even though the server (a data node) supports it. The existing 
> debug message did not help. We saw the debug message "Server using cipher 
> suite AES/CTR/NoPadding" on the same data node, but that refers to the 
> communication with other data nodes.
> It would be really helpful to log a debug message if a SASL server configures 
> the AES cipher suite but the SASL client doesn't, or vice versa. This debug 
> message should also log the client address to differentiate it from other 
> stream pairs.
> Moreover, the debug message "Server using cipher suite AES/CTR/NoPadding" 
> should also be extended to include the client's address.
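
For anyone debugging this today, the existing negotiation messages are emitted 
at DEBUG; a minimal log4j fragment to surface them (the logger package is an 
assumption based on where the data-transfer SASL code usually lives; verify it 
against your Hadoop version):

{code}
# log4j.properties: surface the existing cipher-suite negotiation messages.
# Package name is an assumption; confirm against your Hadoop distribution.
log4j.logger.org.apache.hadoop.hdfs.protocol.datatransfer.sasl=DEBUG
{code}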





[jira] [Commented] (HADOOP-12666) Support Microsoft Azure Data Lake - as a file system in Hadoop

2016-02-24 Thread Vishwajeet Dusane (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15162960#comment-15162960
 ] 

Vishwajeet Dusane commented on HADOOP-12666:


Extracted the comments out:

* AF>This class seems to have serious issues that need addressing:
1. Local race conditions in caller PrivateAzureDataLakeFileSystem
[Vishwajeet] - Already explained in the previous replies to comments 
from [~cnauroth] and [~eddyxu]. In the worst case there would be a cache miss, 
and this would not break any functionality.

2. No mechanism for cache invalidation across nodes in the cluster.
[Vishwajeet] - The scope of the cache is per process. Invalidation across 
nodes will be taken care of as part of a distributed cache mechanism in the 
future.

 * AF>Update comment? This uses "adl" scheme, right?
 [Vishwajeet] - Yes
 
 * AF>What is this class used for? I didn't see any uses.
 [Vishwajeet] - No longer required; it has been removed. 
 
 * AF>Care to comment why this is in the ..hdfs.web package instead of fs.adl?
 [Vishwajeet] Already explained in the previous comments. In short, we need 
protected functionality within the `WebHdfsFileSystem` implementation to 
disable the redirect operation, access the Runner class for configuration, etc. 
 
 * AF>?
 [Vishwajeet] The comment was trimmed; updated it again. 
  
 * AF>Due to the bug or due to the fix? The fix was merged in 2.8.0, right?
 * AF>I'm not understanding this last sentence, can you explain?
 
 [Vishwajeet] Not needed anymore, since the home directory would be 
constructed locally and no back-end call would be necessary. Updated the 
comment accordingly.
 
 * AF>Is this a race condition?
{code}
thread 1> getFileStatus(), cache miss
          super.getStatus() -> s1
          cache.get() -> null
thread 2> delete()
          cache.clear()
thread 1> cache.put(s1)
{code}
Maybe provide an atomic putIfAbsent() for FileStatusCacheManager. You can 
synchronize on the underlying map object, I believe (see 
Collections.synchronizedMap()).

[Vishwajeet] - Already using `syncMap = Collections.synchronizedMap(map);`. We 
are aware of the limitation of the current implementation. The cache is 
short-lived and the effect would not be persistent. Is there a real user 
scenario where such an issue can surface frequently? We have recommended 
turning off the `FileStatus` cache feature in case of misbehavior.
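
The get-then-put window the reviewer describes can also be closed with an 
atomic put-if-absent rather than a synchronized map. A minimal sketch using a 
plain `ConcurrentHashMap` (the `FileStatusCache` class, `String` keys, and 
value type here are illustrative stand-ins, not the actual patch code):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Illustrative stand-in for a FileStatusCacheManager-style cache.
class FileStatusCache<V> {
    private final ConcurrentMap<String, V> map = new ConcurrentHashMap<>();

    // Atomic: no window between the get and the put, unlike a
    // Collections.synchronizedMap check-then-act, which spans two locked calls.
    V putIfAbsent(String path, V status) {
        return map.putIfAbsent(path, status);
    }

    V get(String path) {
        return map.get(path);
    }

    // delete() path: drop everything. A concurrent putIfAbsent may still land
    // after this, which is the residual (short-lived) race noted in the thread.
    void clear() {
        map.clear();
    }
}

public class FileStatusCacheDemo {
    public static void main(String[] args) {
        FileStatusCache<String> cache = new FileStatusCache<>();
        // First writer wins; a second call returns the existing value.
        System.out.println(cache.putIfAbsent("/a", "s1")); // null
        System.out.println(cache.putIfAbsent("/a", "s2")); // s1
        System.out.println(cache.get("/a"));               // s1
    }
}
```

This narrows but does not eliminate the put-after-clear interleaving, which 
matches the "short-lived cache, turn it off on misbehavior" position above.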

 * AF> Seems like there is a less-likely race condition here. (f is replaced by 
a directory after checking fs.isFile())
[Vishwajeet] Why would there be a race condition?

 * AF>Similar pattern of get/mutate non-atomically repeats here and below.
 [Vishwajeet] Could you please elaborate on the issue? 
 
 * AF>typo
  [Vishwajeet] Thanks, corrected.
  
 * AF>Did you guys consider handling this as transparently reconnecting, instead
of doing separate connections for each op? Seems like performance would be a lot
better?
  [Vishwajeet] It amounts to the same thing: transparent re-connection would 
require passing the new offset and length values again, which is itself a 
separate op. HTTP persistent connections ensure the same socket is reused, so 
there is no performance impact either. We have done enough perf testing to 
confirm this.
  
 * AF>I'd expect you to want a connection per-thread, instead of per-op.
 [Vishwajeet] Connection per-op
 
 * AF> This case could use some perf optimization. e.g. Three calls to get 
system time. + return fin; + }
 [Vishwajeet] Agreed; corrected.

 * AF> How about adding ADLLogger.logWithTimestamp(). That way, if the logger 
is disabled, you don't keep getting system time.
 [Vishwajeet] Removed the System.currentTimeMillis() call for the case where 
the debug and perf flags are OFF.
 
 * AF> Redundant check of isLogEnabled() + ADLLogger.log("getFileBlockLocations 
}
[Vishwajeet] Corrected

 * AF>Just use "name" twice instead of defining host?
 [Vishwajeet] It is mainly for readability of the block-location computation, 
so I would not change that.
 
 * AF> Why the runtime check of a compile-time constant? How about just add a 
comment near the definition "must be non-zero" + throw new 
IllegalArgumentException( + "The block size for the given file is not a 
positive number}
[Vishwajeet] Removed.

 * AF> Redundant check of isLogEnabled() + ADLLogger.log("getFileBlockLocations 
}
[Vishwajeet] Removed 

 * AF>Formatting. Missing newline above.
 [Vishwajeet] Using Apache.xml in IntelliJ for formatting, per the Apache 
guideline. I will look at the ruleset defined in Apache.xml later and raise a 
JIRA accordingly.
 
 * AF>Why volatile here? Needs comments.
I have a feeling this is wrong and you need some synchronized blocks 
below
instead.
 [Vishwajeet] - We removed the synchronization block for performance reasons 
and used volatile for the specific variables which require synchronization. 
The problem could arise when the same FsInputStream instance is being used 
across threads to read from the 

[jira] [Updated] (HADOOP-12827) WebHdfs socket timeouts should be configurable

2016-02-24 Thread Austin Donnelly (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Austin Donnelly updated HADOOP-12827:
-
Status: Open  (was: Patch Available)






[jira] [Updated] (HADOOP-12827) WebHdfs socket timeouts should be configurable

2016-02-24 Thread Austin Donnelly (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Austin Donnelly updated HADOOP-12827:
-
Status: Patch Available  (was: Open)






[jira] [Commented] (HADOOP-12666) Support Microsoft Azure Data Lake - as a file system in Hadoop

2016-02-24 Thread Vishwajeet Dusane (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15163049#comment-15163049
 ] 

Vishwajeet Dusane commented on HADOOP-12666:


Sorry [~mackrorysd], I missed your earlier comment.

For the `org.apache.hadoop.hdfs.web` packaging - already explained in the 
previous replies. The short summary is that the current design of 
`WebHdfsFileSystem` does not allow certain configuration to be done outside 
the `org.apache.hadoop.hdfs.web` namespace.

For common properties like dfs.webhdfs.oauth2.access.token.provider - the 
scope of this patch set does not cover design changes to the existing OAuth2 
implementation in ASF. You bring up a valid point, and we need to create a 
separate JIRA for it. I will take it up once this patch is through.

> Support Microsoft Azure Data Lake - as a file system in Hadoop
> --
>
> Key: HADOOP-12666
> URL: https://issues.apache.org/jira/browse/HADOOP-12666
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, fs/azure, tools
>Reporter: Vishwajeet Dusane
>Assignee: Vishwajeet Dusane
> Attachments: HADOOP-12666-002.patch, HADOOP-12666-003.patch, 
> HADOOP-12666-004.patch, HADOOP-12666-005.patch, HADOOP-12666-006.patch, 
> HADOOP-12666-1.patch
>
>   Original Estimate: 336h
>  Time Spent: 336h
>  Remaining Estimate: 0h
>
> h2. Description
> This JIRA describes a new file system implementation for accessing Microsoft 
> Azure Data Lake Store (ADL) from within Hadoop. This would enable existing 
> Hadoop applications such as MR, Hive, HBase, etc., to use the ADL store as 
> input or output.
>  
> ADL is an ultra-high-capacity store, optimized for massive throughput, with 
> rich management and security features. More details are available at 
> https://azure.microsoft.com/en-us/services/data-lake-store/





[jira] [Updated] (HADOOP-12837) FileStatus.getModificationTime not working on S3

2016-02-24 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12837:

Component/s: fs/s3

> FileStatus.getModificationTime not working on S3
> 
>
> Key: HADOOP-12837
> URL: https://issues.apache.org/jira/browse/HADOOP-12837
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Reporter: Jagdish Kewat
>





[jira] [Commented] (HADOOP-12837) FileStatus.getModificationTime not working on S3

2016-02-24 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15163058#comment-15163058
 ] 

Steve Loughran commented on HADOOP-12837:
-

If this is Amazon EMR, then it's their own code talking to S3, so it can't be 
handled here. You'll need to take it up with the EMR team via their support 
channels.

If it is pure ASF Hadoop, then is this an s3://, s3n://, or s3a:// URL?






[jira] [Commented] (HADOOP-12837) FileStatus.getModificationTime not working on S3

2016-02-24 Thread Jagdish Kewat (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15163074#comment-15163074
 ] 

Jagdish Kewat commented on HADOOP-12837:


Thanks, Steve, for responding. Yes, it's an s3n:// URL; however, the same code 
works on HDFS, so the code is not EMR-specific.







[jira] [Commented] (HADOOP-12666) Support Microsoft Azure Data Lake - as a file system in Hadoop

2016-02-24 Thread Vishwajeet Dusane (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15163080#comment-15163080
 ] 

Vishwajeet Dusane commented on HADOOP-12666:


*For the common concern over the dependency on `org.apache.hadoop.hdfs.web` 
packaging* - already explained in the previous replies. However, I would like 
to reiterate that, due to a current design constraint in the 
`org.apache.hadoop.hdfs.web` namespace, a file system extended from 
`WebHdfsFileSystem` cannot access certain functionality outside 
`org.apache.hadoop.hdfs.web` - for example, control over additional or 
existing query parameters, HTTP configuration, etc. That said, we do want to 
end up with only an `org.apache.hadoop.fs.adl` package containing all of the 
functionality.

In order to achieve our common goal, I would have to file a few more JIRAs on 
the `org.apache.hadoop.hdfs.web` package, work on making a FileSystem extended 
from `org.apache.hadoop.hdfs.web` configurable, and refactor the existing ADL 
package accordingly. I will take up this activity once Rev 1, i.e. this patch 
set, is pushed into ASF.






[jira] [Updated] (HADOOP-12622) RetryPolicies (other than FailoverOnNetworkExceptionRetry) should put on retry failed reason or the log from RMProxy's retry could be very misleading.

2016-02-24 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HADOOP-12622:

Attachment: HADOOP-12622-v5.patch

In the v5 patch, consolidated the if-else cases for the failover log messages 
in RetryInvocationHandler.

> RetryPolicies (other than FailoverOnNetworkExceptionRetry) should put on 
> retry failed reason or the log from RMProxy's retry could be very misleading.
> --
>
> Key: HADOOP-12622
> URL: https://issues.apache.org/jira/browse/HADOOP-12622
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: auto-failover
>Affects Versions: 2.6.0, 2.7.0
>Reporter: Junping Du
>Assignee: Junping Du
>Priority: Critical
> Attachments: HADOOP-12622-v2.patch, HADOOP-12622-v3.1.patch, 
> HADOOP-12622-v3.patch, HADOOP-12622-v4.patch, HADOOP-12622-v5.patch, 
> HADOOP-12622.patch
>
>
> In debugging a NM retry connection to RM (non-HA), the NM log during RM down 
> time is very misleading:
> {noformat}
> 2015-12-07 11:37:14,098 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 0 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:15,099 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 1 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:16,101 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 2 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:17,103 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 3 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:18,105 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 4 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:19,107 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 5 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:20,109 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 6 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:21,112 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 7 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:22,113 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 8 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:23,115 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 9 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:54,120 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 0 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:55,121 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 1 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:56,123 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 2 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:57,125 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 3 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:58,126 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 4 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:59,128 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:

[jira] [Commented] (HADOOP-12816) Log cipher suite negotiation more verbosely

2016-02-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15163310#comment-15163310
 ] 

Hadoop QA commented on HADOOP-12816:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
36s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 26s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 24s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
26s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 35s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
27s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
46s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 39s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 15s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 23s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 23s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 26s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 26s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
25s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 27s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 35s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 28s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 49s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.8.0_72. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 52m 48s 
{color} | {color:green} hadoop-hdfs in the patch passed with JDK v1.8.0_72. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 55s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 50m 20s 
{color} | {color:green} hadoop-hdfs in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
21s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color

[jira] [Commented] (HADOOP-12767) update apache httpclient version to the latest 4.5 for security

2016-02-24 Thread Artem Aliev (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15163315#comment-15163315
 ] 

Artem Aliev commented on HADOOP-12767:
--

[~jojochuang], I'm surprised the fix works for you; I got a compilation error 
with both 4.3.6 and 4.5.1:
{code}
#> mvn test

[INFO] Apache Hadoop Auth . FAILURE [ 15.191 s]
...

[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.1:testCompile 
(default-testCompile) on project hadoop-auth: Compilation failure
[ERROR] 
/Users/artemaliev/git/hadoop-test/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/client/AuthenticatorTestCase.java:[246,32]
 cannot access org.apache.http.config.Lookup
[ERROR] class file for org.apache.http.config.Lookup not found
[ERROR] -> [Help 1]
{code}


> update apache httpclient version to the latest 4.5 for security
> ---
>
> Key: HADOOP-12767
> URL: https://issues.apache.org/jira/browse/HADOOP-12767
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Artem Aliev
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-12767.001.patch
>
>
> Various SSL security fixes are needed.  See:  CVE-2012-6153, CVE-2011-4461, 
> CVE-2014-3577, CVE-2015-5262.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12622) RetryPolicies (other than FailoverOnNetworkExceptionRetry) should put on retry failed reason or the log from RMProxy's retry could be very misleading.

2016-02-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15163357#comment-15163357
 ] 

Hadoop QA commented on HADOOP-12622:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
28s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 0s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 24s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
23s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 8s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
41s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 9s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
45s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 51s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 51s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 22s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 22s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
23s {color} | {color:green} hadoop-common-project/hadoop-common: patch 
generated 0 new + 44 unchanged - 2 fixed = 44 total (was 46) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 7s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
55s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 9s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 17m 4s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_72. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 47s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
25s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 75m 52s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_72 Timed out junit tests | 
org.apache.hadoop.http.TestHttpServerLifecycle |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12789587/HADOOP-12622-v5.patch 
|
| JIRA Issue | HADOOP-12622 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 35abd7d03aa3 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | 

[jira] [Commented] (HADOOP-12827) WebHdfs socket timeouts should be configurable

2016-02-24 Thread Austin Donnelly (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15163355#comment-15163355
 ] 

Austin Donnelly commented on HADOOP-12827:
--

[newbie] How do I get the Hadoop QA bot to evaluate my latest patch?  I tried 
changing the Status earlier today, but nothing seems to have happened.  Any 
suggestions?

> WebHdfs socket timeouts should be configurable
> --
>
> Key: HADOOP-12827
> URL: https://issues.apache.org/jira/browse/HADOOP-12827
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
> Environment: all
>Reporter: Austin Donnelly
>Assignee: Austin Donnelly
>  Labels: easyfix, newbie
> Attachments: HADOOP-12827.001.patch, HADOOP-12827.002.patch
>
>   Original Estimate: 0h
>  Remaining Estimate: 0h
>
> WebHdfs client connections use sockets with fixed timeouts of 60 seconds to 
> connect, and 60 seconds for reads.
> This is a problem because I am trying to use WebHdfs to access an archive 
> storage system which can take minutes to hours to return the requested data 
> over WebHdfs.
> The fix is to add new configuration file options to allow these 60s defaults 
> to be customised in hdfs-site.xml.
> If the new configuration options are not present, the behavior is unchanged 
> from before.
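The fallback behavior described above (use the configured value when present, 
otherwise keep the existing 60-second default) can be sketched as follows. This 
is an illustrative sketch in C; the helper name and the raw string standing in 
for an hdfs-site.xml property are assumptions, not the actual WebHdfs code:

```c
#include <stdlib.h>

/* Matches the currently hard-coded 60 second timeout. */
#define DEFAULT_TIMEOUT_MS 60000L

/*
 * Return the configured timeout in milliseconds, falling back to the
 * default when the setting is absent or malformed.  "configured" plays
 * the role of a value read from hdfs-site.xml (hypothetical; the real
 * client would go through Hadoop's Configuration class).
 */
long get_timeout_ms(const char *configured)
{
    char *end;
    long v;

    if (configured == NULL || *configured == '\0')
        return DEFAULT_TIMEOUT_MS;  /* option not set: behavior unchanged */

    v = strtol(configured, &end, 10);
    if (*end != '\0' || v <= 0)
        return DEFAULT_TIMEOUT_MS;  /* malformed value: fall back */
    return v;
}
```

Two such lookups (one for the connect timeout, one for the read timeout) would 
then feed the socket setup.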





[jira] [Commented] (HADOOP-12837) FileStatus.getModificationTime not working on S3

2016-02-24 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15163409#comment-15163409
 ] 

Chris Nauroth commented on HADOOP-12837:


Hello [~jagdishk].

Is this referring to a directory or a file?  If it's a directory, then s3n 
always returns 0 for mtime.  This is also true of s3a.  I don't believe there 
are currently any plans in progress to change this behavior.

The atomicity semantics expected of directory mtime are harder to implement 
against a blob store than against a traditional file system or HDFS.  If a new 
file or sub-directory gets created under a directory, 
then users have an expectation that the corresponding update to mtime at the 
parent folder is atomic with respect to the file/directory creation operation.  
On HDFS, we can take a central lock at the NameNode to do all of the metadata 
manipulations as a transaction.  For a blob store, this is multiple HTTP 
operations on different blob keys, and those multiple operations do not execute 
as an atomic transaction.

The Azure file system does provide mtime on directories, but it does not 
provide atomicity of the mtime updates.  (I just mention this to demonstrate 
that the behavior is not always consistent across different file system 
implementations.)
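Given that s3n and s3a report 0 for directory mtime, portable client code can 
treat a zero mtime as "unknown" rather than as a real 1970 timestamp. A minimal 
sketch (the struct and helper are hypothetical stand-ins for Hadoop's 
FileStatus, not actual Hadoop APIs):

```c
#include <stdio.h>

/* Hypothetical stand-in for FileStatus: just the fields needed here. */
struct file_status {
    int is_directory;       /* nonzero for directories */
    long long mtime_ms;     /* 0 means "not reported", e.g. s3n/s3a dirs */
};

/*
 * Returns 1 and stores the mtime if the filesystem reported a real
 * value; returns 0 when mtime is unavailable (e.g. S3 directories),
 * so callers fall back instead of comparing against epoch 0.
 */
int get_known_mtime(const struct file_status *st, long long *out)
{
    if (st->mtime_ms == 0)
        return 0;           /* treat 0 as "unknown", not 1970-01-01 */
    *out = st->mtime_ms;
    return 1;
}
```

A caller that must work on both HDFS and S3 would branch on the return value 
instead of comparing timestamps that may be 0.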

> FileStatus.getModificationTime not working on S3
> 
>
> Key: HADOOP-12837
> URL: https://issues.apache.org/jira/browse/HADOOP-12837
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Reporter: Jagdish Kewat
>
> Hi Team,
> We have observed an issue with the FileStatus.getModificationTime() API on S3 
> filesystem. The method always returns 0.
> I googled for this however couldn't find any solution as such which would fit 
> in my scheme of things. S3FileStatus seems to be an option however I would be 
> using this API on HDFS as well as S3 both so can't go for it.
> I tried to run the job on:
> * Release label:emr-4.2.0
> * Hadoop distribution:Amazon 2.6.0
> * Hadoop Common jar: hadoop-common-2.6.0.jar
> Please advise if any patch or fix available for this.
> Thanks,
> Jagdish





[jira] [Updated] (HADOOP-12767) update apache httpclient version to the latest 4.5 for security

2016-02-24 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12767:
-
Affects Version/s: 3.0.0

> update apache httpclient version to the latest 4.5 for security
> ---
>
> Key: HADOOP-12767
> URL: https://issues.apache.org/jira/browse/HADOOP-12767
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Artem Aliev
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-12767.001.patch
>
>
> Various SSL security fixes are needed.  See:  CVE-2012-6153, CVE-2011-4461, 
> CVE-2014-3577, CVE-2015-5262.





[jira] [Commented] (HADOOP-12767) update apache httpclient version to the latest 4.5 for security

2016-02-24 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15163445#comment-15163445
 ] 

Wei-Chiu Chuang commented on HADOOP-12767:
--

Hi [~artem.aliev], thanks for the review.
Did you apply the patch against trunk? I rebased against the latest trunk and 
it compiles fine for me. The compilation error does not seem to stem from this 
patch, though.
Also, you might want to build from the Hadoop source code root directory 
instead of a subdirectory (like hadoop-hdfs/).

> update apache httpclient version to the latest 4.5 for security
> ---
>
> Key: HADOOP-12767
> URL: https://issues.apache.org/jira/browse/HADOOP-12767
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Artem Aliev
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-12767.001.patch
>
>
> Various SSL security fixes are needed.  See:  CVE-2012-6153, CVE-2011-4461, 
> CVE-2014-3577, CVE-2015-5262.





[jira] [Commented] (HADOOP-12824) Collect network usage on the node in Windows

2016-02-24 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15163449#comment-15163449
 ] 

Xiaoyu Yao commented on HADOOP-12824:
-

[~elgoiri], thanks for working on this. The patch looks good overall, a few 
comments below:

1. The title needs to be updated to match with the patch, which collects both 
total network and disk usage.

2. Potential memory leak in ReadTotalCounter() due to the direct return below. 
Suggest bailing out with a goto so that pItems is freed on every return path in 
this function.
{code}
status = PdhGetRawCounterArray(hCounter, &dwBufferSize, &dwItemCount, NULL);
if (PDH_MORE_DATA == status)
{
  pItems = (PDH_RAW_COUNTER_ITEM *) malloc(dwBufferSize);
  if (pItems)
  {
    // Actually query the counter
    status = PdhGetRawCounterArray(hCounter, &dwBufferSize, &dwItemCount, pItems);
    if (ERROR_SUCCESS == status) {
      for (i = 0; i < dwItemCount; i++) {
        if (wcscmp(L"_Total", pItems[i].szName) == 0) {
          totalFound = 1;
          *ret = pItems[i].RawValue.FirstValue;
        } else if (!totalFound) {
          *ret += pItems[i].RawValue.FirstValue;
        }
      }
    } else {
      return status;
    }
{code}
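For reference, the single-exit cleanup suggested above looks roughly like this. 
Purely an illustrative sketch: the types, the simulated query function, and the 
function name are stand-ins, not the actual winutils/PDH code:

```c
#include <stdlib.h>

/* Simplified stand-ins for the PDH status type in the real code. */
typedef int STATUS;
#define STATUS_OK 0
#define STATUS_FAIL 1

/* Simulated second-phase query: fails when fail_query is nonzero. */
static STATUS query_counters(long *items, size_t n, int fail_query)
{
    if (fail_query)
        return STATUS_FAIL;
    for (size_t i = 0; i < n; i++)
        items[i] = (long)(i + 1);
    return STATUS_OK;
}

/*
 * Sum counter values.  All exit paths funnel through the cleanup
 * label, so the buffer is freed exactly once regardless of which
 * step fails.
 */
STATUS read_total(size_t n, int fail_query, long *ret)
{
    STATUS status = STATUS_OK;
    long *items = malloc(n * sizeof *items);

    if (items == NULL) {
        status = STATUS_FAIL;
        goto cleanup;               /* nothing to free yet, but one exit */
    }
    status = query_counters(items, n, fail_query);
    if (status != STATUS_OK)
        goto cleanup;               /* instead of "return status" */

    *ret = 0;
    for (size_t i = 0; i < n; i++)
        *ret += items[i];

cleanup:
    free(items);                    /* free(NULL) is a no-op */
    return status;
}
```

The key point is that every failure path jumps to the cleanup label rather than 
returning directly, so the malloc'd buffer cannot leak.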

3. I see SAL2-style annotations on the second parameter of ReadTotalCounter() 
but not on the other new functions added. I suggest using them consistently.
{code}
PDH_STATUS ReadTotalCounter(PDH_HCOUNTER hCounter, _Out_ LONGLONG* ret)
{code}

> Collect network usage on the node in Windows
> 
>
> Key: HADOOP-12824
> URL: https://issues.apache.org/jira/browse/HADOOP-12824
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.8.0
>Reporter: Inigo Goiri
>Assignee: Inigo Goiri
> Attachments: HADOOP-12824-v000.patch, HADOOP-12824-v001.patch
>
>
> HADOOP-12210 collects the node network usage for Linux; this JIRA does it for 
> Windows.





[jira] [Commented] (HADOOP-12815) TestS3ContractRootDir#testRmEmptyRootDirNonRecursive and TestS3ContractRootDir#testRmRootRecursive fail on branch-2.

2016-02-24 Thread Matthew Paduano (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15163463#comment-15163463
 ] 

Matthew Paduano commented on HADOOP-12815:
--

I tried copying the files in 'a10055cf' from trunk into the appropriate places 
in branch-2.  The native S3 tests/mods caused a number of test failures (I did 
not even look at them all).

But the changes to S3FileSystem and Jets3tFileSystemStore work and fix the issue
in branch-2.  The patch I attached passes the S3 tests when applied to branch-2:

{code}
---
 T E S T S
---
Running org.apache.hadoop.fs.contract.s3.TestS3ContractDelete
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.396 sec - in 
org.apache.hadoop.fs.contract.s3.TestS3ContractDelete
Running org.apache.hadoop.fs.contract.s3.TestS3ContractCreate
Tests run: 6, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 17.72 sec - in 
org.apache.hadoop.fs.contract.s3.TestS3ContractCreate
Running org.apache.hadoop.fs.contract.s3.TestS3ContractRename
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.866 sec - in 
org.apache.hadoop.fs.contract.s3.TestS3ContractRename
Running org.apache.hadoop.fs.contract.s3.TestS3ContractRootDir
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.763 sec - in 
org.apache.hadoop.fs.contract.s3.TestS3ContractRootDir
Running org.apache.hadoop.fs.contract.s3.TestS3ContractMkdir
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.448 sec - in 
org.apache.hadoop.fs.contract.s3.TestS3ContractMkdir
Running org.apache.hadoop.fs.contract.s3.TestS3ContractSeek
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 28.839 sec - 
in org.apache.hadoop.fs.contract.s3.TestS3ContractSeek
Running org.apache.hadoop.fs.contract.s3.TestS3ContractOpen
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.038 sec - in 
org.apache.hadoop.fs.contract.s3.TestS3ContractOpen

Results :

Tests run: 47, Failures: 0, Errors: 0, Skipped: 1
{code} 

> TestS3ContractRootDir#testRmEmptyRootDirNonRecursive and 
> TestS3ContractRootDir#testRmRootRecursive fail on branch-2.
> 
>
> Key: HADOOP-12815
> URL: https://issues.apache.org/jira/browse/HADOOP-12815
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Chris Nauroth
>
> TestS3ContractRootDir#testRmEmptyRootDirNonRecursive and 
> TestS3ContractRootDir#testRmRootRecursive fail on branch-2.  The tests pass 
> on trunk.





[jira] [Updated] (HADOOP-12815) TestS3ContractRootDir#testRmEmptyRootDirNonRecursive and TestS3ContractRootDir#testRmRootRecursive fail on branch-2.

2016-02-24 Thread Matthew Paduano (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthew Paduano updated HADOOP-12815:
-
Assignee: Matthew Paduano
Release Note: for branch-2 only
  Status: Patch Available  (was: Open)

> TestS3ContractRootDir#testRmEmptyRootDirNonRecursive and 
> TestS3ContractRootDir#testRmRootRecursive fail on branch-2.
> 
>
> Key: HADOOP-12815
> URL: https://issues.apache.org/jira/browse/HADOOP-12815
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Chris Nauroth
>Assignee: Matthew Paduano
>
> TestS3ContractRootDir#testRmEmptyRootDirNonRecursive and 
> TestS3ContractRootDir#testRmRootRecursive fail on branch-2.  The tests pass 
> on trunk.





[jira] [Updated] (HADOOP-12815) TestS3ContractRootDir#testRmEmptyRootDirNonRecursive and TestS3ContractRootDir#testRmRootRecursive fail on branch-2.

2016-02-24 Thread Matthew Paduano (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthew Paduano updated HADOOP-12815:
-
Attachment: HADOOP-12815.01.patch

> TestS3ContractRootDir#testRmEmptyRootDirNonRecursive and 
> TestS3ContractRootDir#testRmRootRecursive fail on branch-2.
> 
>
> Key: HADOOP-12815
> URL: https://issues.apache.org/jira/browse/HADOOP-12815
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Chris Nauroth
>Assignee: Matthew Paduano
> Attachments: HADOOP-12815.01.patch
>
>
> TestS3ContractRootDir#testRmEmptyRootDirNonRecursive and 
> TestS3ContractRootDir#testRmRootRecursive fail on branch-2.  The tests pass 
> on trunk.





[jira] [Created] (HADOOP-12838) Add metrics for LDAP group mapping resolution time

2016-02-24 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HADOOP-12838:


 Summary: Add metrics for LDAP group mapping resolution time
 Key: HADOOP-12838
 URL: https://issues.apache.org/jira/browse/HADOOP-12838
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.8.0
Reporter: Wei-Chiu Chuang
Assignee: Wei-Chiu Chuang


LDAP group mapping needs to communicate with an LDAP server. Sometimes it takes 
an extremely long time to communicate and resolve a group mapping. As a result, 
system performance degrades, and without an in-depth investigation it is not 
obvious why things got worse.

Let's add a metric for LDAP group mapping and log the resolution time to help 
debugging.





[jira] [Commented] (HADOOP-12815) TestS3ContractRootDir#testRmEmptyRootDirNonRecursive and TestS3ContractRootDir#testRmRootRecursive fail on branch-2.

2016-02-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15163530#comment-15163530
 ] 

Hadoop QA commented on HADOOP-12815:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 5s {color} 
| {color:red} HADOOP-12815 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12789617/HADOOP-12815.01.patch 
|
| JIRA Issue | HADOOP-12815 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/8706/console |
| Powered by | Apache Yetus 0.2.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> TestS3ContractRootDir#testRmEmptyRootDirNonRecursive and 
> TestS3ContractRootDir#testRmRootRecursive fail on branch-2.
> 
>
> Key: HADOOP-12815
> URL: https://issues.apache.org/jira/browse/HADOOP-12815
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Chris Nauroth
>Assignee: Matthew Paduano
> Attachments: HADOOP-12815.01.patch
>
>
> TestS3ContractRootDir#testRmEmptyRootDirNonRecursive and 
> TestS3ContractRootDir#testRmRootRecursive fail on branch-2.  The tests pass 
> on trunk.





[jira] [Updated] (HADOOP-12838) Add metrics for LDAP group mapping resolution time

2016-02-24 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12838:
-
Description: 
LDAP group mapping needs to communicate with an LDAP server. Sometimes it takes 
an extremely long time to communicate and resolve a group mapping. As a result, 
system performance degrades, and without an in-depth investigation it is not 
obvious why things got worse.

Let's add a metric for LDAP group mapping and log the resolution time to help 
debugging.

  was:
LDAP group mapping needs to communicate with a LDAP server. Sometime it takes 
tremendous time to communicate and resolve group mapping. As a result, system 
performs degrades and it is not obvious why it goes worse without an in-depth 
investigation.

Let's add a Metrics for LDAP group mapping and log the resolution time to help 
debugging.


> Add metrics for LDAP group mapping resolution time
> --
>
> Key: HADOOP-12838
> URL: https://issues.apache.org/jira/browse/HADOOP-12838
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.8.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>  Labels: LDAP, metrics, supportability
>
> LDAP group mapping needs to communicate with a LDAP server. Sometime it takes 
> extremely long time to communicate and resolve group mapping. As a result, 
> system performance degrades and it is not obvious why it goes worse without 
> an in-depth investigation.
> Let's add a Metrics for LDAP group mapping and log the resolution time to 
> help debugging.





[jira] [Commented] (HADOOP-12827) WebHdfs socket timeouts should be configurable

2016-02-24 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15163550#comment-15163550
 ] 

Xiaoyu Yao commented on HADOOP-12827:
-

[~and1000], "Cancel Patch" and Resubmit will do the trick. You can check the 
precommit build queue here: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/.


> WebHdfs socket timeouts should be configurable
> --
>
> Key: HADOOP-12827
> URL: https://issues.apache.org/jira/browse/HADOOP-12827
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
> Environment: all
>Reporter: Austin Donnelly
>Assignee: Austin Donnelly
>  Labels: easyfix, newbie
> Attachments: HADOOP-12827.001.patch, HADOOP-12827.002.patch
>
>   Original Estimate: 0h
>  Remaining Estimate: 0h
>
> WebHdfs client connections use sockets with fixed timeouts of 60 seconds to 
> connect, and 60 seconds for reads.
> This is a problem because I am trying to use WebHdfs to access an archive 
> storage system which can take minutes to hours to return the requested data 
> over WebHdfs.
> The fix is to add new configuration file options to allow these 60s defaults 
> to be customised in hdfs-site.xml.
> If the new configuration options are not present, the behavior is unchanged 
> from before.





[jira] [Commented] (HADOOP-12824) Collect network usage on the node in Windows

2016-02-24 Thread Inigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15163603#comment-15163603
 ] 

Inigo Goiri commented on HADOOP-12824:
--

Regarding 1, I'll do everything together here and close HADOOP-12823.

> Collect network usage on the node in Windows
> 
>
> Key: HADOOP-12824
> URL: https://issues.apache.org/jira/browse/HADOOP-12824
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.8.0
>Reporter: Inigo Goiri
>Assignee: Inigo Goiri
> Attachments: HADOOP-12824-v000.patch, HADOOP-12824-v001.patch
>
>
> HADOOP-12210 collects the node network usage for Linux; this JIRA does it for 
> Windows.





[jira] [Updated] (HADOOP-12824) Collect network usage on the node in Windows

2016-02-24 Thread Inigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Inigo Goiri updated HADOOP-12824:
-
Attachment: HADOOP-12824-v002.patch

Tackling [~xyao] comments.

> Collect network usage on the node in Windows
> 
>
> Key: HADOOP-12824
> URL: https://issues.apache.org/jira/browse/HADOOP-12824
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.8.0
>Reporter: Inigo Goiri
>Assignee: Inigo Goiri
> Attachments: HADOOP-12824-v000.patch, HADOOP-12824-v001.patch, 
> HADOOP-12824-v002.patch
>
>
> HADOOP-12210 collects the node network usage for Linux; this JIRA does it for 
> Windows.





[jira] [Updated] (HADOOP-12824) Collect network and disk usage on the node in Windows

2016-02-24 Thread Inigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Inigo Goiri updated HADOOP-12824:
-
Summary: Collect network and disk usage on the node in Windows  (was: 
Collect network usage on the node in Windows)

> Collect network and disk usage on the node in Windows
> -
>
> Key: HADOOP-12824
> URL: https://issues.apache.org/jira/browse/HADOOP-12824
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.8.0
>Reporter: Inigo Goiri
>Assignee: Inigo Goiri
> Attachments: HADOOP-12824-v000.patch, HADOOP-12824-v001.patch, 
> HADOOP-12824-v002.patch
>
>
> HADOOP-12210 collects the node network usage for Linux; this JIRA does it for 
> Windows.





[jira] [Resolved] (HADOOP-12823) Collect disks usage on the node in Windows

2016-02-24 Thread Inigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Inigo Goiri resolved HADOOP-12823.
--
Resolution: Duplicate

> Collect disks usage on the node in Windows
> --
>
> Key: HADOOP-12823
> URL: https://issues.apache.org/jira/browse/HADOOP-12823
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Inigo Goiri
>Assignee: Inigo Goiri
>
> HADOOP-12211 collects the node disks usage for Linux; this JIRA does it for 
> Windows.





[jira] [Updated] (HADOOP-12827) WebHdfs socket timeouts should be configurable

2016-02-24 Thread Austin Donnelly (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Austin Donnelly updated HADOOP-12827:
-
Status: Open  (was: Patch Available)

> WebHdfs socket timeouts should be configurable
> --
>
> Key: HADOOP-12827
> URL: https://issues.apache.org/jira/browse/HADOOP-12827
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
> Environment: all
>Reporter: Austin Donnelly
>Assignee: Austin Donnelly
>  Labels: easyfix, newbie
> Attachments: HADOOP-12827.001.patch, HADOOP-12827.002.patch
>
>   Original Estimate: 0h
>  Remaining Estimate: 0h
>
> WebHdfs client connections use sockets with fixed timeouts of 60 seconds to 
> connect, and 60 seconds for reads.
> This is a problem because I am trying to use WebHdfs to access an archive 
> storage system which can take minutes to hours to return the requested data 
> over WebHdfs.
> The fix is to add new configuration file options to allow these 60s defaults 
> to be customised in hdfs-site.xml.
> If the new configuration options are not present, the behavior is unchanged 
> from before.





[jira] [Updated] (HADOOP-12827) WebHdfs socket timeouts should be configurable

2016-02-24 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-12827:

Status: Patch Available  (was: Open)

> WebHdfs socket timeouts should be configurable
> --
>
> Key: HADOOP-12827
> URL: https://issues.apache.org/jira/browse/HADOOP-12827
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
> Environment: all
>Reporter: Austin Donnelly
>Assignee: Austin Donnelly
>  Labels: easyfix, newbie
> Attachments: HADOOP-12827.001.patch, HADOOP-12827.002.patch
>
>   Original Estimate: 0h
>  Remaining Estimate: 0h
>
> WebHdfs client connections use sockets with fixed timeouts of 60 seconds to 
> connect, and 60 seconds for reads.
> This is a problem because I am trying to use WebHdfs to access an archive 
> storage system which can take minutes to hours to return the requested data 
> over WebHdfs.
> The fix is to add new configuration file options to allow these 60s defaults 
> to be customised in hdfs-site.xml.
> If the new configuration options are not present, the behavior is unchanged 
> from before.





[jira] [Updated] (HADOOP-12827) WebHdfs socket timeouts should be configurable

2016-02-24 Thread Austin Donnelly (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Austin Donnelly updated HADOOP-12827:
-
Attachment: HADOOP-12827.002.patch

> WebHdfs socket timeouts should be configurable
> --
>
> Key: HADOOP-12827
> URL: https://issues.apache.org/jira/browse/HADOOP-12827
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
> Environment: all
>Reporter: Austin Donnelly
>Assignee: Austin Donnelly
>  Labels: easyfix, newbie
> Attachments: HADOOP-12827.001.patch, HADOOP-12827.002.patch, 
> HADOOP-12827.002.patch
>
>   Original Estimate: 0h
>  Remaining Estimate: 0h
>
> WebHdfs client connections use sockets with fixed timeouts of 60 seconds to 
> connect, and 60 seconds for reads.
> This is a problem because I am trying to use WebHdfs to access an archive 
> storage system which can take minutes to hours to return the requested data 
> over WebHdfs.
> The fix is to add new configuration file options to allow these 60s defaults 
> to be customised in hdfs-site.xml.
> If the new configuration options are not present, the behavior is unchanged 
> from before.





[jira] [Commented] (HADOOP-12070) Some of the bin/hadoop subcommands are not available on Windows

2016-02-24 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15163762#comment-15163762
 ] 

Arpit Agarwal commented on HADOOP-12070:


+1 for the patch. I will commit this to trunk today. Thanks for the 
contribution [~sekikn].

For the trace and distch commands I just tested that the commands can be 
launched. Verified the behavior of the remaining commands.

xmllint is not available on branch-2, so you can either backport HADOOP-7497, or 
if you post a branch-2 patch I will commit that too.

> Some of the bin/hadoop subcommands are not available on Windows
> ---
>
> Key: HADOOP-12070
> URL: https://issues.apache.org/jira/browse/HADOOP-12070
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Reporter: Kengo Seki
>Assignee: Kengo Seki
> Attachments: HADOOP-12070.001.patch
>
>
> * conftest, distch, jnipath and trace are not enabled in hadoop.cmd
> * kerbname is enabled, but does not appear in the help message





[jira] [Updated] (HADOOP-12070) Some of the bin/hadoop subcommands are not available on Windows

2016-02-24 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-12070:
---
Fix Version/s: 3.0.0

> Some of the bin/hadoop subcommands are not available on Windows
> ---
>
> Key: HADOOP-12070
> URL: https://issues.apache.org/jira/browse/HADOOP-12070
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Reporter: Kengo Seki
>Assignee: Kengo Seki
> Fix For: 3.0.0
>
> Attachments: HADOOP-12070.001.patch
>
>
> * conftest, distch, jnipath and trace are not enabled in hadoop.cmd
> * kerbname is enabled, but does not appear in the help message





[jira] [Updated] (HADOOP-12070) Some of the bin/hadoop subcommands are not available on Windows

2016-02-24 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-12070:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Pushed to trunk.

> Some of the bin/hadoop subcommands are not available on Windows
> ---
>
> Key: HADOOP-12070
> URL: https://issues.apache.org/jira/browse/HADOOP-12070
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Reporter: Kengo Seki
>Assignee: Kengo Seki
> Fix For: 3.0.0
>
> Attachments: HADOOP-12070.001.patch
>
>
> * conftest, distch, jnipath and trace are not enabled in hadoop.cmd
> * kerbname is enabled, but does not appear in the help message





[jira] [Assigned] (HADOOP-10885) Fix dead links to the javadocs of o.a.h.security.authorize

2016-02-24 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu reassigned HADOOP-10885:
-

Assignee: Yufei Gu

> Fix dead links to the javadocs of o.a.h.security.authorize
> --
>
> Key: HADOOP-10885
> URL: https://issues.apache.org/jira/browse/HADOOP-10885
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation
>Affects Versions: 2.6.0
>Reporter: Akira AJISAKA
>Assignee: Yufei Gu
>Priority: Minor
>  Labels: newbie
>
> In API doc ([my trunk 
> build|http://aajisaka.github.io/hadoop-project/api/index.html]), 
> {{ImpersonationProvider}} and {{DefaultImpersonationProvider}} classes are 
> linked but these documents are not generated.
> There's an inconsistency about {{@InterfaceAudience}} between package-info 
> and these classes, so these dead links are generated.





[jira] [Commented] (HADOOP-12070) Some of the bin/hadoop subcommands are not available on Windows

2016-02-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15163805#comment-15163805
 ] 

Hudson commented on HADOOP-12070:
-

FAILURE: Integrated in Hadoop-trunk-Commit #9362 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9362/])
HADOOP-12070. Some of the bin/hadoop subcommands are not available on (arp: rev 
2e76c2f751f697ed2f785038c22445251db0134c)
* hadoop-common-project/hadoop-common/src/main/bin/hadoop.cmd
* hadoop-common-project/hadoop-common/CHANGES.txt


> Some of the bin/hadoop subcommands are not available on Windows
> ---
>
> Key: HADOOP-12070
> URL: https://issues.apache.org/jira/browse/HADOOP-12070
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Reporter: Kengo Seki
>Assignee: Kengo Seki
> Fix For: 3.0.0
>
> Attachments: HADOOP-12070.001.patch
>
>
> * conftest, distch, jnipath and trace are not enabled in hadoop.cmd
> * kerbname is enabled, but does not appear in the help message





[jira] [Commented] (HADOOP-12824) Collect network and disk usage on the node in Windows

2016-02-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15163896#comment-15163896
 ] 

Hadoop QA commented on HADOOP-12824:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
6s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 6s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 18s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
22s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 6s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
40s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 7s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
44s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 38s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 6m 38s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 38s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 21s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 7m 21s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 21s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 5s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
55s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 8s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 6s {color} | 
{color:red} hadoop-common in the patch failed with JDK v1.8.0_72. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 5s {color} | 
{color:red} hadoop-common in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 65m 13s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_72 Failed junit tests | hadoop.util.TestSysInfoWindows |
| JDK v1.7.0_95 Failed junit tests | hadoop.util.TestSysInfoWindows |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/a

[jira] [Updated] (HADOOP-12716) KerberosAuthenticator#doSpnegoSequence use incorrect class to determine isKeyTab in JDK8

2016-02-24 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-12716:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

+1 for the patch.  I have committed this to trunk, branch-2 and branch-2.8.  
[~xyao], thank you for the patch.  [~xiaobingo], thank you for helping with the 
code review.

> KerberosAuthenticator#doSpnegoSequence use incorrect class to determine 
> isKeyTab in JDK8
> 
>
> Key: HADOOP-12716
> URL: https://issues.apache.org/jira/browse/HADOOP-12716
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.1
> Environment: Java 8
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Fix For: 2.8.0
>
> Attachments: HADOOP-12716.000.patch, HADOOP-12716.001.patch, 
> HADOOP-12716.002.patch
>
>
> HADOOP-11287 and HADOOP-10786 have fixed the issue in the UserGroupInformation 
> class for JDK8. 
> However, the logic in KerberosAuthenticator#doSpnegoSequence is not updated. 
> The KerberosKey.class below should be KeyTab.class for JDK8.
> {code}
> if (subject == null
>   || (subject.getPrivateCredentials(KerberosKey.class).isEmpty()
>   && 
> subject.getPrivateCredentials(KerberosTicket.class).isEmpty())) {
> {code}
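
Per the description, the fix is to test for {{KeyTab.class}} rather than {{KerberosKey.class}} when deciding whether the Subject already holds keytab or ticket credentials. A self-contained sketch of the corrected guard, using only JDK classes (the method name {{needsNewSubject}} is illustrative, not the actual Hadoop method):

```java
import javax.security.auth.Subject;
import javax.security.auth.kerberos.KerberosTicket;
import javax.security.auth.kerberos.KeyTab;

public class SpnegoCheck {
    // Corrected check: on JDK 8, keytab credentials are represented by
    // javax.security.auth.kerberos.KeyTab, not KerberosKey, so the guard
    // should look for KeyTab entries among the private credentials.
    static boolean needsNewSubject(Subject subject) {
        return subject == null
            || (subject.getPrivateCredentials(KeyTab.class).isEmpty()
                && subject.getPrivateCredentials(KerberosTicket.class).isEmpty());
    }

    public static void main(String[] args) {
        // An empty Subject holds neither a KeyTab nor a KerberosTicket,
        // so a fresh login would be required in both cases below.
        System.out.println(needsNewSubject(new Subject()));
        System.out.println(needsNewSubject(null));
    }
}
```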





[jira] [Updated] (HADOOP-12808) Rename the RS coder from HDFS-RAID as legacy

2016-02-24 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HADOOP-12808:
---
Summary: Rename the RS coder from HDFS-RAID as legacy  (was: HADOOP-12041 
follow-on: replace the RS coder with the new implementation)

> Rename the RS coder from HDFS-RAID as legacy
> 
>
> Key: HADOOP-12808
> URL: https://issues.apache.org/jira/browse/HADOOP-12808
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Rui Li
>Assignee: Rui Li
> Attachments: HADOOP-12808.1.patch, HADOOP-12808.2.patch, 
> HADOOP-12808.3.patch, HADOOP-12808.4.patch
>
>
> We can use this JIRA to rename the new Java coder and make it default for 
> HDFS-EC. Package-info is also needed after HADOOP-12041.





[jira] [Updated] (HADOOP-12808) Rename the RS coder from HDFS-RAID as legacy

2016-02-24 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HADOOP-12808:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
   Fix Version/s: 3.0.0
Target Version/s: 3.0.0
  Status: Resolved  (was: Patch Available)

I just committed the v4 patch to trunk. Thanks Rui for the contribution!

> Rename the RS coder from HDFS-RAID as legacy
> 
>
> Key: HADOOP-12808
> URL: https://issues.apache.org/jira/browse/HADOOP-12808
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Rui Li
>Assignee: Rui Li
> Fix For: 3.0.0
>
> Attachments: HADOOP-12808.1.patch, HADOOP-12808.2.patch, 
> HADOOP-12808.3.patch, HADOOP-12808.4.patch
>
>
> We can use this JIRA to rename the new Java coder and make it default for 
> HDFS-EC. Package-info is also needed after HADOOP-12041.





[jira] [Commented] (HADOOP-12808) Rename the RS coder from HDFS-RAID as legacy

2016-02-24 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15165640#comment-15165640
 ] 

Jing Zhao commented on HADOOP-12808:


Thanks all for working on this and thanks for explaining the rename motivation. 
The legacy name makes sense to me.

> Rename the RS coder from HDFS-RAID as legacy
> 
>
> Key: HADOOP-12808
> URL: https://issues.apache.org/jira/browse/HADOOP-12808
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Rui Li
>Assignee: Rui Li
> Fix For: 3.0.0
>
> Attachments: HADOOP-12808.1.patch, HADOOP-12808.2.patch, 
> HADOOP-12808.3.patch, HADOOP-12808.4.patch
>
>
> We can use this JIRA to rename the new Java coder and make it default for 
> HDFS-EC. Package-info is also needed after HADOOP-12041.





[jira] [Commented] (HADOOP-12716) KerberosAuthenticator#doSpnegoSequence use incorrect class to determine isKeyTab in JDK8

2016-02-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15165654#comment-15165654
 ] 

Hudson commented on HADOOP-12716:
-

FAILURE: Integrated in Hadoop-trunk-Commit #9363 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9363/])
HADOOP-12716. KerberosAuthenticator#doSpnegoSequence use incorrect class 
(cnauroth: rev d6b181c6faa56e43c9f05d2cc860a0aeb940fd90)
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/KerberosUtil.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/client/KerberosAuthenticator.java


> KerberosAuthenticator#doSpnegoSequence use incorrect class to determine 
> isKeyTab in JDK8
> 
>
> Key: HADOOP-12716
> URL: https://issues.apache.org/jira/browse/HADOOP-12716
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.1
> Environment: Java 8
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Fix For: 2.8.0
>
> Attachments: HADOOP-12716.000.patch, HADOOP-12716.001.patch, 
> HADOOP-12716.002.patch
>
>
> HADOOP-11287 and HADOOP-10786 have fixed the issue in the UserGroupInformation 
> class for JDK8. 
> However, the logic in KerberosAuthenticator#doSpnegoSequence is not updated. 
> The KerberosKey.class below should be KeyTab.class for JDK8.
> {code}
> if (subject == null
>   || (subject.getPrivateCredentials(KerberosKey.class).isEmpty()
>   && 
> subject.getPrivateCredentials(KerberosTicket.class).isEmpty())) {
> {code}





[jira] [Commented] (HADOOP-12808) Rename the RS coder from HDFS-RAID as legacy

2016-02-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15165653#comment-15165653
 ] 

Hudson commented on HADOOP-12808:
-

FAILURE: Integrated in Hadoop-trunk-Commit #9363 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9363/])
HADOOP-12808. Rename the RS coder from HDFS-RAID as legacy. Contributed (zhz: 
rev efdc0070d880c7e1b778e0029a1b827ca962ce70)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/erasurecode/coder/TestRSErasureCoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RSRawDecoderLegacy.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RSRawDecoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RSRawEncoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/CodecUtil.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RSRawErasureCoderFactory.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/erasurecode/rawcoder/TestRSRawCoderLegacy.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/erasurecode/coder/TestHHXORErasureCoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/util/package-info.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/erasurecode/rawcoder/TestRSRawCoder.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RSRawErasureCoderFactoryLegacy.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RSRawEncoderLegacy.java


> Rename the RS coder from HDFS-RAID as legacy
> 
>
> Key: HADOOP-12808
> URL: https://issues.apache.org/jira/browse/HADOOP-12808
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Rui Li
>Assignee: Rui Li
> Fix For: 3.0.0
>
> Attachments: HADOOP-12808.1.patch, HADOOP-12808.2.patch, 
> HADOOP-12808.3.patch, HADOOP-12808.4.patch
>
>
> We can use this JIRA to rename the new Java coder and make it default for 
> HDFS-EC. Package-info is also needed after HADOOP-12041.





[jira] [Commented] (HADOOP-12808) Rename the RS coder from HDFS-RAID as legacy

2016-02-24 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15165700#comment-15165700
 ] 

Kai Zheng commented on HADOOP-12808:


Thanks Jing and Zhe for moving on this!

> Rename the RS coder from HDFS-RAID as legacy
> 
>
> Key: HADOOP-12808
> URL: https://issues.apache.org/jira/browse/HADOOP-12808
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Rui Li
>Assignee: Rui Li
> Fix For: 3.0.0
>
> Attachments: HADOOP-12808.1.patch, HADOOP-12808.2.patch, 
> HADOOP-12808.3.patch, HADOOP-12808.4.patch
>
>
> We can use this JIRA to rename the new Java coder and make it default for 
> HDFS-EC. Package-info is also needed after HADOOP-12041.





[jira] [Commented] (HADOOP-12827) WebHdfs socket timeouts should be configurable

2016-02-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15166343#comment-15166343
 ] 

Hadoop QA commented on HADOOP-12827:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
38s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 16s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 20s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
26s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 24s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
25s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
36s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 25s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 7s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 13s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 17s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 17s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
24s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 22s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
57s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 21s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 4s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 50s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.8.0_72. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 53m 53s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_72. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 56s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.7.0_95. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 52m 1s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
21s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:

[jira] [Updated] (HADOOP-12824) Collect network and disk usage on the node in Windows

2016-02-24 Thread Inigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Inigo Goiri updated HADOOP-12824:
-
Attachment: HADOOP-12824-v003.patch

Adding unit tests.

> Collect network and disk usage on the node in Windows
> -
>
> Key: HADOOP-12824
> URL: https://issues.apache.org/jira/browse/HADOOP-12824
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.8.0
>Reporter: Inigo Goiri
>Assignee: Inigo Goiri
> Attachments: HADOOP-12824-v000.patch, HADOOP-12824-v001.patch, 
> HADOOP-12824-v002.patch, HADOOP-12824-v003.patch
>
>
> HADOOP-12210 collects the node network usage for Linux; this JIRA does it for 
> Windows.





[jira] [Commented] (HADOOP-12838) Add metrics for LDAP group mapping resolution time

2016-02-24 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15166346#comment-15166346
 ] 

Wei-Chiu Chuang commented on HADOOP-12838:
--

Hmm. Maybe something similar is already in place?
https://issues.apache.org/jira/browse/HDFS-5220

> Add metrics for LDAP group mapping resolution time
> --
>
> Key: HADOOP-12838
> URL: https://issues.apache.org/jira/browse/HADOOP-12838
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.8.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>  Labels: LDAP, metrics, supportability
>
> LDAP group mapping needs to communicate with an LDAP server. Sometimes it takes 
> an extremely long time to communicate and resolve group mappings. As a result, 
> system performance degrades, and it is not obvious why it worsens without 
> an in-depth investigation.
> Let's add a metric for LDAP group mapping and log the resolution time to 
> help debugging.
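
The proposal is to wrap the group resolution call and record its latency. A self-contained sketch of that pattern — the class and method names here are stand-ins, not the actual {{LdapGroupsMapping}} code, and a real implementation would report into the Hadoop metrics2 system rather than print:

```java
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.TimeUnit;

public class LdapTimingSketch {
    // Hypothetical stand-in for the LDAP lookup; a real implementation
    // would issue the LDAP queries here.
    static List<String> doGetGroups(String user) {
        return Arrays.asList("staff", "hdfs-admins");
    }

    // Wrap the resolution call and record its wall-clock latency, the
    // way a metrics sink (or a debug log line) would in the proposed change.
    static List<String> getGroupsTimed(String user) {
        long start = System.nanoTime();
        try {
            return doGetGroups(user);
        } finally {
            long elapsedMs =
                TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start);
            System.out.println(
                "LDAP group resolution for " + user + " took " + elapsedMs + " ms");
        }
    }

    public static void main(String[] args) {
        System.out.println(getGroupsTimed("alice"));
    }
}
```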





[jira] [Updated] (HADOOP-12824) Collect network and disk usage on the node in Windows

2016-02-24 Thread Inigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Inigo Goiri updated HADOOP-12824:
-
Attachment: (was: HADOOP-12824-v003.patch)

> Collect network and disk usage on the node in Windows
> -
>
> Key: HADOOP-12824
> URL: https://issues.apache.org/jira/browse/HADOOP-12824
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.8.0
>Reporter: Inigo Goiri
>Assignee: Inigo Goiri
> Attachments: HADOOP-12824-v000.patch, HADOOP-12824-v001.patch, 
> HADOOP-12824-v002.patch
>
>
> HADOOP-12210 collects the node network usage for Linux; this JIRA does it for 
> Windows.





[jira] [Updated] (HADOOP-12824) Collect network and disk usage on the node in Windows

2016-02-24 Thread Inigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Inigo Goiri updated HADOOP-12824:
-
Attachment: HADOOP-12824-v003.patch

Fixed unit tests.

> Collect network and disk usage on the node in Windows
> -
>
> Key: HADOOP-12824
> URL: https://issues.apache.org/jira/browse/HADOOP-12824
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.8.0
>Reporter: Inigo Goiri
>Assignee: Inigo Goiri
> Attachments: HADOOP-12824-v000.patch, HADOOP-12824-v001.patch, 
> HADOOP-12824-v002.patch, HADOOP-12824-v003.patch
>
>
> HADOOP-12210 collects the node network usage for Linux; this JIRA does it for 
> Windows.





[jira] [Updated] (HADOOP-12824) Collect network and disk usage on the node in Windows

2016-02-24 Thread Inigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Inigo Goiri updated HADOOP-12824:
-
Attachment: (was: HADOOP-12824-v003.patch)

> Collect network and disk usage on the node in Windows
> -
>
> Key: HADOOP-12824
> URL: https://issues.apache.org/jira/browse/HADOOP-12824
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.8.0
>Reporter: Inigo Goiri
>Assignee: Inigo Goiri
> Attachments: HADOOP-12824-v000.patch, HADOOP-12824-v001.patch, 
> HADOOP-12824-v002.patch
>
>
> HADOOP-12210 collects the node network usage for Linux; this JIRA does it for 
> Windows.





[jira] [Updated] (HADOOP-12824) Collect network and disk usage on the node in Windows

2016-02-24 Thread Inigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Inigo Goiri updated HADOOP-12824:
-
Attachment: HADOOP-12824-v003.patch

Fixed unit tests.

> Collect network and disk usage on the node in Windows
> -
>
> Key: HADOOP-12824
> URL: https://issues.apache.org/jira/browse/HADOOP-12824
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.8.0
>Reporter: Inigo Goiri
>Assignee: Inigo Goiri
> Attachments: HADOOP-12824-v000.patch, HADOOP-12824-v001.patch, 
> HADOOP-12824-v002.patch, HADOOP-12824-v003.patch
>
>
> HADOOP-12210 collects the node network usage for Linux; this JIRA does it for 
> Windows.





[jira] [Updated] (HADOOP-12838) Add metrics for LDAP-specific group mapping resolution time

2016-02-24 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12838:
-
Summary: Add metrics for LDAP-specific group mapping resolution time  (was: 
Add metrics for LDAP group mapping resolution time)

> Add metrics for LDAP-specific group mapping resolution time
> ---
>
> Key: HADOOP-12838
> URL: https://issues.apache.org/jira/browse/HADOOP-12838
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.8.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>  Labels: LDAP, metrics, supportability
>
> LDAP group mapping needs to communicate with an LDAP server. Sometimes it 
> takes an extremely long time to communicate and resolve group mappings. As a 
> result, system performance degrades, and it is not obvious why without an 
> in-depth investigation.
> Let's add a metric for LDAP group mapping and log the resolution time to 
> help debugging.





[jira] [Commented] (HADOOP-12838) Add metrics for LDAP-specific group mapping resolution time

2016-02-24 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15166365#comment-15166365
 ] 

Wei-Chiu Chuang commented on HADOOP-12838:
--

Ok. So there's a metric for group mapping resolution in general. But it seems 
we may need more detailed metrics/logging to know how many timeouts have 
occurred. Also, since a typical LDAP group resolution requires two LDAP 
queries, it may need to log the latency of both queries.
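For reference, recording per-query latency can be sketched as follows. This is a self-contained illustration in plain Java, not the actual Hadoop metrics2 API (in the real patch a metrics2 {{MutableRate}} would play this role); the class and method names here are hypothetical.

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class LdapLookupTimer {
    // Minimal stand-in for a metrics2 MutableRate: running total and sample count.
    private final AtomicLong totalNanos = new AtomicLong();
    private final AtomicLong samples = new AtomicLong();

    // Wraps one (simulated) LDAP query and records how long it took.
    public long timedLookup(Runnable ldapQuery) {
        long start = System.nanoTime();
        ldapQuery.run();
        long elapsed = System.nanoTime() - start;
        totalNanos.addAndGet(elapsed);
        samples.incrementAndGet();
        return elapsed;
    }

    public long sampleCount() {
        return samples.get();
    }

    public double avgMillis() {
        long n = samples.get();
        return n == 0 ? 0.0
            : totalNanos.get() / (double) n / TimeUnit.MILLISECONDS.toNanos(1);
    }

    public static void main(String[] args) {
        LdapLookupTimer timer = new LdapLookupTimer();
        // Two timed queries per resolution (user search + group search).
        timer.timedLookup(new Runnable() { public void run() { /* user search */ } });
        timer.timedLookup(new Runnable() { public void run() { /* group search */ } });
        System.out.println(timer.sampleCount()); // prints 2
    }
}
```

Timing each query separately, rather than only the whole resolution, is what would let the log distinguish a slow user search from a slow group search.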

> Add metrics for LDAP-specific group mapping resolution time
> ---
>
> Key: HADOOP-12838
> URL: https://issues.apache.org/jira/browse/HADOOP-12838
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.8.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>  Labels: LDAP, metrics, supportability
>
> LDAP group mapping needs to communicate with an LDAP server. Sometimes it 
> takes an extremely long time to communicate and resolve group mappings. As a 
> result, system performance degrades, and it is not obvious why without an 
> in-depth investigation.
> Let's add a metric for LDAP group mapping and log the resolution time to 
> help debugging.





[jira] [Assigned] (HADOOP-10315) Log the original exception when getGroups() fail in UGI.

2016-02-24 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu reassigned HADOOP-10315:
---

Assignee: Ted Yu

> Log the original exception when getGroups() fail in UGI.
> 
>
> Key: HADOOP-10315
> URL: https://issues.apache.org/jira/browse/HADOOP-10315
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.23.10, 2.2.0
>Reporter: Kihwal Lee
>Assignee: Ted Yu
> Attachments: HADOOP-10315.v1.patch
>
>
> In UserGroupInformation, getGroupNames() swallows the original exception. 
> There have been many occasions that more information on the original 
> exception could have helped.
> {code}
>   public synchronized String[] getGroupNames() {
> ensureInitialized();
> try {
>   List<String> result = groups.getGroups(getShortUserName());
>   return result.toArray(new String[result.size()]);
> } catch (IOException ie) {
>   LOG.warn("No groups available for user " + getShortUserName());
>   return new String[0];
> }
>   }
> {code}
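As a sketch of the proposed fix (a hypothetical stand-in class; the real change belongs in {{UserGroupInformation}}, and Hadoop's own logger would be used rather than {{java.util.logging}}), the catch block can pass the IOException to the logger so the original cause and stack trace survive:

```java
import java.io.IOException;
import java.util.List;
import java.util.logging.Level;
import java.util.logging.Logger;

public class GroupLookupDemo {
    private static final Logger LOG =
        Logger.getLogger(GroupLookupDemo.class.getName());

    // Stand-in for Groups.getGroups(); always throws to simulate a lookup failure.
    static List<String> lookupGroups(String user) throws IOException {
        throw new IOException("LDAP search timed out for " + user);
    }

    // Before: LOG.warn("No groups available for user " + user) -- cause lost.
    // After: pass the exception as the second argument so it is preserved.
    static String[] getGroupNames(String user) {
        try {
            List<String> result = lookupGroups(user);
            return result.toArray(new String[result.size()]);
        } catch (IOException ie) {
            LOG.log(Level.WARNING, "No groups available for user " + user, ie);
            return new String[0];
        }
    }

    public static void main(String[] args) {
        // prints 0; the warning, with its cause, goes to the log
        System.out.println(getGroupNames("alice").length);
    }
}
```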





[jira] [Updated] (HADOOP-10315) Log the original exception when getGroups() fail in UGI.

2016-02-24 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HADOOP-10315:

Status: Patch Available  (was: Open)

> Log the original exception when getGroups() fail in UGI.
> 
>
> Key: HADOOP-10315
> URL: https://issues.apache.org/jira/browse/HADOOP-10315
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.2.0, 0.23.10
>Reporter: Kihwal Lee
>Assignee: Ted Yu
> Attachments: HADOOP-10315.v1.patch
>
>
> In UserGroupInformation, getGroupNames() swallows the original exception. 
> There have been many occasions that more information on the original 
> exception could have helped.
> {code}
>   public synchronized String[] getGroupNames() {
> ensureInitialized();
> try {
>   List<String> result = groups.getGroups(getShortUserName());
>   return result.toArray(new String[result.size()]);
> } catch (IOException ie) {
>   LOG.warn("No groups available for user " + getShortUserName());
>   return new String[0];
> }
>   }
> {code}





[jira] [Updated] (HADOOP-10315) Log the original exception when getGroups() fail in UGI.

2016-02-24 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HADOOP-10315:

Attachment: HADOOP-10315.v1.patch

Recently bumped into a case with the following in a region server log:
{code}
security.UserGroupInformation: No groups available for user XX
{code}

> Log the original exception when getGroups() fail in UGI.
> 
>
> Key: HADOOP-10315
> URL: https://issues.apache.org/jira/browse/HADOOP-10315
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.23.10, 2.2.0
>Reporter: Kihwal Lee
> Attachments: HADOOP-10315.v1.patch
>
>
> In UserGroupInformation, getGroupNames() swallows the original exception. 
> There have been many occasions that more information on the original 
> exception could have helped.
> {code}
>   public synchronized String[] getGroupNames() {
> ensureInitialized();
> try {
>   List<String> result = groups.getGroups(getShortUserName());
>   return result.toArray(new String[result.size()]);
> } catch (IOException ie) {
>   LOG.warn("No groups available for user " + getShortUserName());
>   return new String[0];
> }
>   }
> {code}





[jira] [Commented] (HADOOP-10885) Fix dead links to the javadocs of o.a.h.security.authorize

2016-02-24 Thread Ray Chiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15166396#comment-15166396
 ] 

Ray Chiang commented on HADOOP-10885:
-

Did this get fixed in HADOOP-12545?

> Fix dead links to the javadocs of o.a.h.security.authorize
> --
>
> Key: HADOOP-10885
> URL: https://issues.apache.org/jira/browse/HADOOP-10885
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation
>Affects Versions: 2.6.0
>Reporter: Akira AJISAKA
>Assignee: Yufei Gu
>Priority: Minor
>  Labels: newbie
>
> In API doc ([my trunk 
> build|http://aajisaka.github.io/hadoop-project/api/index.html]), 
> {{ImpersonationProvider}} and {{DefaultImpersonationProvider}} classes are 
> linked but these documents are not generated.
> There's an inconsistency about {{@InterfaceAudience}} between package-info 
> and these classes, so these dead links are generated.





[jira] [Commented] (HADOOP-12666) Support Microsoft Azure Data Lake - as a file system in Hadoop

2016-02-24 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15166449#comment-15166449
 ] 

Aaron Fabbri commented on HADOOP-12666:
---

[~vishwajeet.dusane] thanks for the responses. It seems you are not convinced 
by some of the synchronization bugs I pointed out.

Two hints:

- Understand why 
[ConcurrentHashMap|https://docs.oracle.com/javase/7/docs/api/java/util/concurrent/ConcurrentHashMap.html]
 adds functions like putIfAbsent().  Just because get() and put() are 
synchronized does not make this safe: 

{noformat}
if (map.get(key) == null) {
    Thing t = new Thing();
    map.put(key, t);
}
{noformat}


- Use of volatile.  Given:
{noformat}
volatile byte[] data = null;
volatile int bufferOffset = 0;
{noformat}

This does not make code like this thread safe:

{noformat}
int read() {
return data[bufferOffset++] & 0xff;
}
{noformat}

The argument "we probably don't need thread safety anyway" implies you should 
just remove all synchronization.  If it is not needed, keeping it only hurts 
performance.

If I'm wrong on any of this please call it out.  Thank you.
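The standard race-free alternative to the check-then-act pattern above is putIfAbsent(). A minimal, self-contained sketch ({{StringBuilder}} stands in for the hypothetical {{Thing}}):

```java
import java.util.concurrent.ConcurrentHashMap;

public class PutIfAbsentDemo {
    static final ConcurrentHashMap<String, StringBuilder> map =
        new ConcurrentHashMap<String, StringBuilder>();

    // Atomic check-then-act: at most one StringBuilder is ever published per
    // key, even if many threads race through this method simultaneously.
    static StringBuilder getOrCreate(String key) {
        StringBuilder existing = map.get(key);
        if (existing == null) {
            StringBuilder fresh = new StringBuilder();
            // Returns the previously mapped value, or null if we won the race.
            existing = map.putIfAbsent(key, fresh);
            if (existing == null) {
                existing = fresh;
            }
        }
        return existing;
    }

    public static void main(String[] args) {
        // Both lookups must observe the same instance.
        System.out.println(getOrCreate("k") == getOrCreate("k")); // prints true
    }
}
```

With the separate get()/put() version, two threads can both see null and both publish a value, so callers may end up holding different instances for the same key; putIfAbsent() closes that window.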




> Support Microsoft Azure Data Lake - as a file system in Hadoop
> --
>
> Key: HADOOP-12666
> URL: https://issues.apache.org/jira/browse/HADOOP-12666
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, fs/azure, tools
>Reporter: Vishwajeet Dusane
>Assignee: Vishwajeet Dusane
> Attachments: HADOOP-12666-002.patch, HADOOP-12666-003.patch, 
> HADOOP-12666-004.patch, HADOOP-12666-005.patch, HADOOP-12666-006.patch, 
> HADOOP-12666-1.patch
>
>   Original Estimate: 336h
>  Time Spent: 336h
>  Remaining Estimate: 0h
>
> h2. Description
> This JIRA describes a new file system implementation for accessing Microsoft 
> Azure Data Lake Store (ADL) from within Hadoop. This would enable existing 
> Hadoop applications such as MR, Hive, HBase, etc. to use the ADL store as 
> input or output.
>  
> ADL is an ultra-high-capacity store, optimized for massive throughput, with 
> rich management and security features. More details are available at 
> https://azure.microsoft.com/en-us/services/data-lake-store/





[jira] [Commented] (HADOOP-12824) Collect network and disk usage on the node in Windows

2016-02-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15166471#comment-15166471
 ] 

Hadoop QA commented on HADOOP-12824:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 47s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 29s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 10s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
44s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 8s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 27s 
{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 44s 
{color} | {color:red} root in the patch failed with JDK v1.8.0_72. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 0m 44s {color} | 
{color:red} root in the patch failed with JDK v1.8.0_72. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 44s {color} 
| {color:red} root in the patch failed with JDK v1.8.0_72. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 49s 
{color} | {color:red} root in the patch failed with JDK v1.7.0_95. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 0m 49s {color} | 
{color:red} root in the patch failed with JDK v1.7.0_95. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 49s {color} 
| {color:red} root in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 30s 
{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 19s 
{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 8s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 30s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_72. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 33s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 34m 58s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12789809/HADOOP-12824-v003.patch
 |
| JIRA Issue | HADOOP-12824 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checksty

[jira] [Updated] (HADOOP-11820) aw jira testing, ignore

2016-02-24 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11820:
--
Attachment: YARN-3368.patch

> aw jira testing, ignore
> ---
>
> Key: HADOOP-11820
> URL: https://issues.apache.org/jira/browse/HADOOP-11820
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
> Attachments: YARN-3368.patch
>
>






[jira] [Commented] (HADOOP-12808) Rename the RS coder from HDFS-RAID as legacy

2016-02-24 Thread Rui Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15166513#comment-15166513
 ] 

Rui Li commented on HADOOP-12808:
-

Thanks guys for the review.

> Rename the RS coder from HDFS-RAID as legacy
> 
>
> Key: HADOOP-12808
> URL: https://issues.apache.org/jira/browse/HADOOP-12808
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Rui Li
>Assignee: Rui Li
> Fix For: 3.0.0
>
> Attachments: HADOOP-12808.1.patch, HADOOP-12808.2.patch, 
> HADOOP-12808.3.patch, HADOOP-12808.4.patch
>
>
> We can use this JIRA to rename the new Java coder and make it default for 
> HDFS-EC. Package-info is also needed after HADOOP-12041.





[jira] [Updated] (HADOOP-12824) Collect network and disk usage on the node in Windows

2016-02-24 Thread Inigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Inigo Goiri updated HADOOP-12824:
-
Attachment: (was: HADOOP-12824-v003.patch)

> Collect network and disk usage on the node in Windows
> -
>
> Key: HADOOP-12824
> URL: https://issues.apache.org/jira/browse/HADOOP-12824
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.8.0
>Reporter: Inigo Goiri
>Assignee: Inigo Goiri
> Attachments: HADOOP-12824-v000.patch, HADOOP-12824-v001.patch, 
> HADOOP-12824-v002.patch
>
>
> HADOOP-12210 collects the node network usage for Linux; this JIRA does it for 
> Windows.





[jira] [Updated] (HADOOP-12824) Collect network and disk usage on the node in Windows

2016-02-24 Thread Inigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Inigo Goiri updated HADOOP-12824:
-
Attachment: HADOOP-12824-v003.patch

Fixing parsing error.

> Collect network and disk usage on the node in Windows
> -
>
> Key: HADOOP-12824
> URL: https://issues.apache.org/jira/browse/HADOOP-12824
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.8.0
>Reporter: Inigo Goiri
>Assignee: Inigo Goiri
> Attachments: HADOOP-12824-v000.patch, HADOOP-12824-v001.patch, 
> HADOOP-12824-v002.patch, HADOOP-12824-v003.patch
>
>
> HADOOP-12210 collects the node network usage for Linux; this JIRA does it for 
> Windows.





[jira] [Commented] (HADOOP-10885) Fix dead links to the javadocs of o.a.h.security.authorize

2016-02-24 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15166545#comment-15166545
 ] 

Akira AJISAKA commented on HADOOP-10885:


Yes. Thanks Ray for pointing this out!

> Fix dead links to the javadocs of o.a.h.security.authorize
> --
>
> Key: HADOOP-10885
> URL: https://issues.apache.org/jira/browse/HADOOP-10885
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation
>Affects Versions: 2.6.0
>Reporter: Akira AJISAKA
>Assignee: Yufei Gu
>Priority: Minor
>  Labels: newbie
>
> In API doc ([my trunk 
> build|http://aajisaka.github.io/hadoop-project/api/index.html]), 
> {{ImpersonationProvider}} and {{DefaultImpersonationProvider}} classes are 
> linked but these documents are not generated.
> There's an inconsistency about {{@InterfaceAudience}} between package-info 
> and these classes, so these dead links are generated.





[jira] [Resolved] (HADOOP-10885) Fix dead links to the javadocs of o.a.h.security.authorize

2016-02-24 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA resolved HADOOP-10885.

Resolution: Duplicate

> Fix dead links to the javadocs of o.a.h.security.authorize
> --
>
> Key: HADOOP-10885
> URL: https://issues.apache.org/jira/browse/HADOOP-10885
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation
>Affects Versions: 2.6.0
>Reporter: Akira AJISAKA
>Assignee: Yufei Gu
>Priority: Minor
>  Labels: newbie
>
> In API doc ([my trunk 
> build|http://aajisaka.github.io/hadoop-project/api/index.html]), 
> {{ImpersonationProvider}} and {{DefaultImpersonationProvider}} classes are 
> linked but these documents are not generated.
> There's an inconsistency about {{@InterfaceAudience}} between package-info 
> and these classes, so these dead links are generated.





[jira] [Commented] (HADOOP-11820) aw jira testing, ignore

2016-02-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15166557#comment-15166557
 ] 

Hadoop QA commented on HADOOP-11820:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 5s {color} 
| {color:red} HADOOP-11820 does not apply to YARN-3368. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12789831/YARN-3368.patch |
| JIRA Issue | HADOOP-11820 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/8712/console |
| Powered by | Apache Yetus 0.2.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> aw jira testing, ignore
> ---
>
> Key: HADOOP-11820
> URL: https://issues.apache.org/jira/browse/HADOOP-11820
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
> Attachments: YARN-3368.patch
>
>






[jira] [Issue Comment Deleted] (HADOOP-11820) aw jira testing, ignore

2016-02-24 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11820:
--
Comment: was deleted

(was: | (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 5s {color} 
| {color:red} HADOOP-11820 does not apply to YARN-3368. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12789831/YARN-3368.patch |
| JIRA Issue | HADOOP-11820 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/8712/console |
| Powered by | Apache Yetus 0.2.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.

)

> aw jira testing, ignore
> ---
>
> Key: HADOOP-11820
> URL: https://issues.apache.org/jira/browse/HADOOP-11820
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
> Attachments: Y1.patch, YARN-3368.patch
>
>






[jira] [Updated] (HADOOP-11820) aw jira testing, ignore

2016-02-24 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11820:
--
Attachment: Y1.patch

> aw jira testing, ignore
> ---
>
> Key: HADOOP-11820
> URL: https://issues.apache.org/jira/browse/HADOOP-11820
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
> Attachments: Y1.patch, YARN-3368.patch
>
>






[jira] [Commented] (HADOOP-10315) Log the original exception when getGroups() fail in UGI.

2016-02-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15166596#comment-15166596
 ] 

Hadoop QA commented on HADOOP-10315:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
40s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 12s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 49s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
22s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 3s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
35s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 8s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 8s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 55s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 55s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
50s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m 43s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_72. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 24s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 67m 46s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12789816/HADOOP-10315.v1.patch 
|
| JIRA Issue | HADOOP-10315 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 8ad4ba00a837 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/pers

[jira] [Commented] (HADOOP-11820) aw jira testing, ignore

2016-02-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15166603#comment-15166603
 ] 

Hadoop QA commented on HADOOP-11820:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 4s {color} 
| {color:red} HADOOP-11820 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12789842/Y1.patch |
| JIRA Issue | HADOOP-11820 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/8714/console |
| Powered by | Apache Yetus 0.2.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> aw jira testing, ignore
> ---
>
> Key: HADOOP-11820
> URL: https://issues.apache.org/jira/browse/HADOOP-11820
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
> Attachments: Y1.patch, YARN-3368.patch
>
>






[jira] [Commented] (HADOOP-12824) Collect network and disk usage on the node in Windows

2016-02-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15166646#comment-15166646
 ] 

Hadoop QA commented on HADOOP-12824:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
54s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 59s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 42s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 3s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
32s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 58s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 5m 58s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 58s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 56s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 6m 56s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 56s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
50s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 8s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 8s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_72. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 22s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
24s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 60m 15s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12789835/HADOOP-12824-v003.patch
 |
| JIRA Issue | HADOOP-12824 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux cc205c417183 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 

[jira] [Updated] (HADOOP-12826) Rename the new Java coder and make it default

2016-02-24 Thread Rui Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Li updated HADOOP-12826:

Attachment: HADOOP-12826.1.patch

> Rename the new Java coder and make it default
> -
>
> Key: HADOOP-12826
> URL: https://issues.apache.org/jira/browse/HADOOP-12826
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Rui Li
>Assignee: Rui Li
> Attachments: HADOOP-12826.1.patch
>
>
> Break the renaming into 2 parts according to the discussion 
> [here|https://issues.apache.org/jira/browse/HADOOP-12808?focusedCommentId=15152819&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15152819].





[jira] [Commented] (HADOOP-12826) Rename the new Java coder and make it default

2016-02-24 Thread Rui Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15166655#comment-15166655
 ] 

Rui Li commented on HADOOP-12826:
-

cc [~zhz], [~drankye]

> Rename the new Java coder and make it default
> -
>
> Key: HADOOP-12826
> URL: https://issues.apache.org/jira/browse/HADOOP-12826
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Rui Li
>Assignee: Rui Li
> Attachments: HADOOP-12826.1.patch
>
>
> Break the renaming into 2 parts according to the discussion 
> [here|https://issues.apache.org/jira/browse/HADOOP-12808?focusedCommentId=15152819&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15152819].





[jira] [Updated] (HADOOP-12826) Rename the new Java coder and make it default

2016-02-24 Thread Rui Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Li updated HADOOP-12826:

Status: Patch Available  (was: Open)

> Rename the new Java coder and make it default
> -
>
> Key: HADOOP-12826
> URL: https://issues.apache.org/jira/browse/HADOOP-12826
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Rui Li
>Assignee: Rui Li
> Attachments: HADOOP-12826.1.patch
>
>
> Break the renaming into 2 parts according to the discussion 
> [here|https://issues.apache.org/jira/browse/HADOOP-12808?focusedCommentId=15152819&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15152819].





[jira] [Updated] (HADOOP-12835) RollingFileSystemSink can throw an NPE on non-secure clusters

2016-02-24 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated HADOOP-12835:
--
Attachment: (was: HADOOP-12835.001.patch)

> RollingFileSystemSink can throw an NPE on non-secure clusters
> -
>
> Key: HADOOP-12835
> URL: https://issues.apache.org/jira/browse/HADOOP-12835
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 2.9.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>
> If sink initialization fails on a non-secure cluster (for example because the 
> HDFS cluster isn't running), init will throw an NPE because the properties it 
> expects are missing.





[jira] [Updated] (HADOOP-12835) RollingFileSystemSink can throw an NPE on non-secure clusters

2016-02-24 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated HADOOP-12835:
--
Attachment: HADOOP-12835.001.patch

Bumping Jenkins

> RollingFileSystemSink can throw an NPE on non-secure clusters
> -
>
> Key: HADOOP-12835
> URL: https://issues.apache.org/jira/browse/HADOOP-12835
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 2.9.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
> Attachments: HADOOP-12835.001.patch
>
>
> If sink initialization fails on a non-secure cluster (for example because the 
> HDFS cluster isn't running), init will throw an NPE because the properties it 
> expects are missing.
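The defensive pattern such a fix needs can be sketched in plain Java. This is an illustrative sketch only: the property names and the `describeInitFailure` helper are hypothetical, not the sink's actual fields. The point is that on a non-secure cluster the Kerberos-related properties are simply absent, so any init-failure handling path must tolerate nulls instead of dereferencing them.

```java
import java.util.Properties;

// Illustrative sketch (hypothetical property names, not the actual
// RollingFileSystemSink code): on a non-secure cluster the Kerberos
// properties are absent, so the failure path must not dereference them.
public class NullSafeSinkInit {
    static String describeInitFailure(Properties conf, Exception cause) {
        // getProperty with a default never returns null for a missing key,
        // so string concatenation below is NPE-safe
        String principal = conf.getProperty("sink.keytab-principal", "<not set>");
        String keytab = conf.getProperty("sink.keytab-file", "<not set>");
        return "Sink init failed (principal=" + principal
                + ", keytab=" + keytab + "): " + cause.getMessage();
    }

    public static void main(String[] args) {
        // Simulates a non-secure cluster: no Kerberos properties configured
        Properties insecure = new Properties();
        System.out.println(describeInitFailure(insecure,
                new RuntimeException("HDFS cluster not running")));
    }
}
```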





[jira] [Commented] (HADOOP-12826) Rename the new Java coder and make it default

2016-02-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15166753#comment-15166753
 ] 

Hadoop QA commented on HADOOP-12826:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
31s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 34s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 36s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 3s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
34s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
40s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 34s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 34s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 35s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} hadoop-common-project/hadoop-common: patch 
generated 0 new + 2 unchanged - 1 fixed = 2 total (was 3) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 6m 42s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_72. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 8s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 57m 23s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12789857/HADOOP-12826.1.patch |
| JIRA Issue | HADOOP-12826 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 4260dffb5637 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | tru

[jira] [Updated] (HADOOP-12756) Incorporate Aliyun OSS file system implementation

2016-02-24 Thread LingZhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LingZhou updated HADOOP-12756:
--
Attachment: HCFS User manual.md
OSS integration.pdf

> Incorporate Aliyun OSS file system implementation
> -
>
> Key: HADOOP-12756
> URL: https://issues.apache.org/jira/browse/HADOOP-12756
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Reporter: shimingfei
>Assignee: shimingfei
> Attachments: HCFS User manual.md, OSS integration.pdf, OSS 
> integration.pdf
>
>
> Aliyun OSS is widely used among China's cloud users, but currently it is not 
> easy to access data stored on OSS from a user's Hadoop/Spark application, 
> because Hadoop has no native support for OSS.
> This work aims to integrate Aliyun OSS with Hadoop. With simple configuration, 
> Spark/Hadoop applications can read/write data from OSS without any code 
> change, narrowing the gap between user applications and data storage, similar 
> to what has been done for S3 in Hadoop.
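The "configuration only, no code change" model the proposal describes would look roughly like the sketch below. The property names here are purely illustrative placeholders; the real keys would be defined by the attached patch and design documents:

```xml
<!-- Illustrative core-site.xml fragment only: actual key names would come
     from the OSS integration patch, not this sketch -->
<configuration>
  <property>
    <name>fs.oss.impl</name>
    <value>org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem</value>
  </property>
  <property>
    <name>fs.oss.endpoint</name>
    <value>oss-cn-hangzhou.aliyuncs.com</value>
  </property>
</configuration>
```

With such configuration in place, an application would address data with an oss:// URI the same way it uses s3a:// or hdfs:// paths today.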





[jira] [Updated] (HADOOP-12444) Consider implementing lazy seek in S3AInputStream

2016-02-24 Thread Rajesh Balamohan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajesh Balamohan updated HADOOP-12444:
--
Attachment: HADOOP-12444.3.patch

Thanks [~thodemoor]. Attaching the revised patch. I will upload the test 
report shortly.

These are the 2 tests which fail on both master and with the patch applied.

AWS tests without patch (“mvn clean package” from 
hadoop/hadoop-tools/hadoop-aws):
==



Results :


Failed tests:
  TestS3Credentials.noSecretShouldThrow Expected exception: 
java.lang.IllegalArgumentException
  TestS3Credentials.noAccessIdShouldThrow Expected exception: 
java.lang.IllegalArgumentException

Tests in error:
  
TestS3AContractRootDir>AbstractContractRootDirectoryTest.testListEmptyRootDirectory:134
 » FileNotFound
  TestS3AConfiguration.TestAutomaticProxyPortSelection:138 » AmazonS3 Forbidden 
...

Tests run: 220, Failures: 2, Errors: 2, Skipped: 6


AWS tests with patch


Results :


Failed tests:
  TestS3Credentials.noSecretShouldThrow Expected exception: 
java.lang.IllegalArgumentException
  TestS3Credentials.noAccessIdShouldThrow Expected exception: 
java.lang.IllegalArgumentException

Tests in error:
  
TestS3AContractRootDir>AbstractContractRootDirectoryTest.testListEmptyRootDirectory:134
 » FileNotFound
  TestS3AConfiguration.TestAutomaticProxyPortSelection:138 » AmazonS3 Forbidden 
...

Tests run: 220, Failures: 2, Errors: 2, Skipped: 6


{noformat}
Tests run: 6, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 8.75 sec <<< 
FAILURE! - in org.apache.hadoop.fs.contract.s3a.TestS3AContractRootDir
testListEmptyRootDirectory(org.apache.hadoop.fs.contract.s3a.TestS3AContractRootDir)
  Time elapsed: 1.633 sec  <<< ERROR!
java.io.FileNotFoundException: No such file or directory: /
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:1000)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.listStatus(S3AFileSystem.java:738)
at 
org.apache.hadoop.fs.contract.AbstractContractRootDirectoryTest.testListEmptyRootDirectory(AbstractContractRootDirectoryTest.java:134)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)

TestAutomaticProxyPortSelection(org.apache.hadoop.fs.s3a.TestS3AConfiguration)  
Time elapsed: 620.356 sec  <<< ERROR!
com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden (Service: Amazon 
S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: null)
at 
com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:1182)
at 
com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:770)
at 
com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:489)
at 
com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:310)
at 
com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3785)
at 
com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3738)
at 
com.amazonaws.services.s3.AmazonS3Client.listMultipartUploads(AmazonS3Client.java:2796)
at 
com.amazonaws.services.s3.transfer.TransferManager.abortMultipartUploads(TransferManager.java:1217)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:313)
at 
org.apache.hadoop.fs.s3a.S3ATestUtils.createTestFileSystem(S3ATestUtils.java:51)
at 
org.apache.hadoop.fs.s3a.TestS3AConfiguration.TestAutomaticProxyPortSelection(TestS3AConfiguration.java:138)
{noformat}

> Consider implementing lazy seek in S3AInputStream
> -
>
> Key: HADOOP-12444
> URL: https://issues.apache.org/jira/browse/HADOOP-12444
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.1
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
> Attachments: HADOOP-124

[jira] [Updated] (HADOOP-12444) Consider implementing lazy seek in S3AInputStream

2016-02-24 Thread Rajesh Balamohan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajesh Balamohan updated HADOOP-12444:
--
Attachment: hadoop-aws-test-reports.tar.gz

Uploading AWS test report for reference.

> Consider implementing lazy seek in S3AInputStream
> -
>
> Key: HADOOP-12444
> URL: https://issues.apache.org/jira/browse/HADOOP-12444
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.1
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
> Attachments: HADOOP-12444.1.patch, HADOOP-12444.2.patch, 
> HADOOP-12444.3.patch, HADOOP-12444.WIP.patch, hadoop-aws-test-reports.tar.gz
>
>
> - Currently, "read(long position, byte[] buffer, int offset, int length)" is 
> not implemented in S3AInputStream (unlike DFSInputStream). So, 
> "readFully(long position, byte[] buffer, int offset, int length)" in 
> S3AInputStream goes through the default implementation of seek(), read(), 
> seek() in FSInputStream. 
> - However, seek() in S3AInputStream re-opens the connection to S3 every time 
> (https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L115).
> - It would be good to consider a lazy seek implementation to reduce the 
> connection overhead to S3 (e.g. Presto implements lazy seek: 
> https://github.com/facebook/presto/blob/master/presto-hive/src/main/java/com/facebook/presto/hive/PrestoS3FileSystem.java#L623)
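The lazy-seek idea under discussion can be sketched with a self-contained toy. The class and field names below are illustrative, not the actual S3AInputStream code: seek() only records the target offset, and the expensive "reopen the connection" step is deferred until the next read(), so redundant seeks or seeks that the stream position would reach anyway cost nothing.

```java
import java.io.ByteArrayInputStream;

// Toy sketch of lazy seek (names are illustrative, not S3AInputStream's):
// seek() is cheap; the simulated connection reopen happens lazily in read().
public class LazySeekStream {
    private final byte[] data;          // stands in for the S3 object
    private ByteArrayInputStream in;    // stands in for the HTTP stream
    private long pos;                   // current position of `in`
    private long nextReadPos;           // target position recorded by seek()
    private int reopenCount;            // counts simulated connection reopens

    public LazySeekStream(byte[] data) {
        this.data = data;
    }

    public void seek(long targetPos) {
        nextReadPos = targetPos;        // cheap: no reopen happens here
    }

    private void reopenIfNeeded() {
        if (in == null || pos != nextReadPos) {
            in = new ByteArrayInputStream(data);   // "reopen the connection"
            pos = in.skip(nextReadPos);
            reopenCount++;
        }
    }

    public int read() {
        reopenIfNeeded();
        int b = in.read();
        if (b >= 0) {
            pos++;
            nextReadPos++;
        }
        return b;
    }

    public int getReopenCount() {
        return reopenCount;
    }

    public static void main(String[] args) {
        LazySeekStream s = new LazySeekStream("hello world".getBytes());
        s.seek(6);
        s.seek(6);              // repeated seek to the same offset: free
        int b = s.read();       // only now does a single "reopen" happen
        System.out.println((char) b + " reopens=" + s.getReopenCount());
        // prints "w reopens=1"
    }
}
```

Sequential reads after a seek never reopen, because read() advances pos and nextReadPos together; only a seek to a position the stream is not already at pays the reconnection cost.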





[jira] [Commented] (HADOOP-12837) FileStatus.getModificationTime not working on S3

2016-02-24 Thread Jagdish Kewat (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15166797#comment-15166797
 ] 

Jagdish Kewat commented on HADOOP-12837:


Thanks [~cnauroth] for sharing the details. Yes, it is referring to a 
directory. Would it be possible for you to suggest a workaround, since, as you 
mentioned, there are no plans to fix it?
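One possible client-side workaround can be sketched as below. This is a hedged sketch, not an official fix: the helper name and the fallback policy are mine, and it assumes (per the discussion above) that only directory entries are affected. In real code the first two arguments would come from FileStatus.isDirectory() and FileStatus.getModificationTime(), which exist on both HDFS and S3 statuses, so the same call site works on either filesystem.

```java
// Hedged workaround sketch (helper and policy are hypothetical, not part of
// Hadoop): S3 FileStatus entries for directories report modification time 0,
// so normalize the value before using it in code that must also run on HDFS.
public class MtimeWorkaround {
    static long effectiveModificationTime(boolean isDirectory, long rawMtime,
                                          long fallback) {
        // HDFS reports a real mtime for directories; S3 reports 0
        return (isDirectory && rawMtime == 0L) ? fallback : rawMtime;
    }

    public static void main(String[] args) {
        long fallback = System.currentTimeMillis();
        // S3 directory entry: raw mtime is 0, so the fallback is used
        System.out.println(
                effectiveModificationTime(true, 0L, fallback) == fallback);
        // Ordinary file: the reported mtime is kept unchanged
        System.out.println(
                effectiveModificationTime(false, 1456272000000L, fallback));
    }
}
```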

> FileStatus.getModificationTime not working on S3
> 
>
> Key: HADOOP-12837
> URL: https://issues.apache.org/jira/browse/HADOOP-12837
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Reporter: Jagdish Kewat
>
> Hi Team,
> We have observed an issue with the FileStatus.getModificationTime() API on S3 
> filesystem. The method always returns 0.
> I searched for a solution, but couldn't find one that fits my use case. 
> S3FileStatus seems to be an option, however I will be using this API on both 
> HDFS and S3, so I can't go with it.
> I tried to run the job on:
> * Release label: emr-4.2.0
> * Hadoop distribution: Amazon 2.6.0
> * Hadoop Common jar: hadoop-common-2.6.0.jar
> Please advise if any patch or fix is available for this.
> Thanks,
> Jagdish





[jira] [Commented] (HADOOP-12711) Remove dependency on commons-httpclient for ServletUtil

2016-02-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15166836#comment-15166836
 ] 

Hadoop QA commented on HADOOP-12711:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 
20s {color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 33s 
{color} | {color:green} branch-2 passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 1s 
{color} | {color:green} branch-2 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
23s {color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 46s 
{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} branch-2 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 39s 
{color} | {color:red} hadoop-common-project/hadoop-common in branch-2 has 5 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s 
{color} | {color:green} branch-2 passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s 
{color} | {color:green} branch-2 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
38s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 3s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 3s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 6s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 6s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
19s {color} | {color:green} hadoop-common-project/hadoop-common: patch 
generated 0 new + 10 unchanged - 4 fixed = 10 total (was 14) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 45s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 61 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 7s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_72. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 53s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
21s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 58m 24s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.7.0_95 Failed junit tests | hadoop.metrics2.impl.TestGangliaMetrics |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:babe025 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12789373/HADOOP-12711-branch-2.002.patch
 |
| JIRA Issue | HADOOP-12711 |
| Option

[jira] [Commented] (HADOOP-12835) RollingFileSystemSink can throw an NPE on non-secure clusters

2016-02-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15166856#comment-15166856
 ] 

Hadoop QA commented on HADOOP-12835:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
1s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 15s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 55s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
19s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 4s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
32s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 7s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 12s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 58s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 58s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 19s 
{color} | {color:red} hadoop-common-project/hadoop-common: patch generated 4 
new + 0 unchanged - 0 fixed = 4 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 59s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
47s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 11s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 58s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_72. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 33s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
24s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 65m 46s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.7.0_95 Failed junit tests | hadoop.fs.TestFsShellReturnCode |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12789868/HADOOP-12835.001.patch
 |
| JIRA Issue | HADOOP-12835 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux fc99300915af 3.13.0-36-lowlatenc

[jira] [Created] (HADOOP-12839) hadoop-minikdc Missing artifact org.apache.directory.jdbm:apacheds-jdbm1:bundle:2.0.0-M2

2016-02-24 Thread liguirong (JIRA)
liguirong created HADOOP-12839:
--

 Summary: hadoop-minikdc  Missing artifact 
org.apache.directory.jdbm:apacheds-jdbm1:bundle:2.0.0-M2
 Key: HADOOP-12839
 URL: https://issues.apache.org/jira/browse/HADOOP-12839
 Project: Hadoop Common
  Issue Type: Bug
Reporter: liguirong





