[jira] [Updated] (HADOOP-13065) Add a new interface for retrieving FS and FC Statistics

2016-05-05 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13065:
---
Attachment: HADOOP-13065.010.patch

Thanks [~cmccabe] for the comment. The v10 patch deprecates the getStatistics() 
API and fixes the simple checkstyle warning.

One quick question: some of the storage statistics classes (e.g. 
{{GlobalStorageStatistics}}) are annotated as {{Stable}}. Should we be a bit 
more conservative and mark them {{Unstable}} before ultimately removing the 
Statistics?

As follow-on work:
# We can move the rack-awareness read bytes into a separate storage statistics 
class, as it is only used by HDFS.
# We can remove the Statistics API, but keep the thread-local implementation in 
the {{FileSystemStorageStatistics}} class.

I will update the previously filed jiras [HADOOP-13032] and [HADOOP-13031] 
accordingly after this patch is in trunk.

> Add a new interface for retrieving FS and FC Statistics
> ---
>
> Key: HADOOP-13065
> URL: https://issues.apache.org/jira/browse/HADOOP-13065
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Reporter: Ram Venkatesh
>Assignee: Mingliang Liu
> Attachments: HADOOP-13065-007.patch, HADOOP-13065.008.patch, 
> HADOOP-13065.009.patch, HADOOP-13065.010.patch, HDFS-10175.000.patch, 
> HDFS-10175.001.patch, HDFS-10175.002.patch, HDFS-10175.003.patch, 
> HDFS-10175.004.patch, HDFS-10175.005.patch, HDFS-10175.006.patch, 
> TestStatisticsOverhead.java
>
>
> Currently FileSystem.Statistics exposes the following statistics:
> BytesRead
> BytesWritten
> ReadOps
> LargeReadOps
> WriteOps
> These are in turn exposed as job counters by MapReduce and other frameworks. 
> There is logic within DfsClient to map operations to these counters that can 
> be confusing; for instance, mkdirs counts as a writeOp.
> Proposed enhancement:
> Add a statistic for each DfsClient operation including create, append, 
> createSymlink, delete, exists, mkdirs, rename and expose them as new 
> properties on the Statistics object. The operation-specific counters can be 
> used for analyzing the load imposed by a particular job on HDFS. 
> For example, we can use them to identify jobs that end up creating a large 
> number of files.
> Once this information is available in the Statistics object, the app 
> frameworks like MapReduce can expose them as additional counters to be 
> aggregated and recorded as part of job summary.
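The per-operation counters proposed above could be sketched as a map from operation name to an atomic counter. This is only an illustrative sketch; the class and method names are hypothetical, not the actual Hadoop API:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicLong;

// Illustrative sketch: each client operation (mkdirs, rename, delete, ...)
// increments its own named counter, so load can be analyzed per operation
// rather than being lumped into a single writeOps total.
public class OpStatistics {
  private final ConcurrentMap<String, AtomicLong> counters =
      new ConcurrentHashMap<>();

  public void increment(String op) {
    // Create the counter lazily on first use, then bump it atomically.
    counters.computeIfAbsent(op, k -> new AtomicLong()).incrementAndGet();
  }

  public long get(String op) {
    AtomicLong c = counters.get(op);
    return c == null ? 0L : c.get();
  }
}
```

A framework such as MapReduce could then iterate the map and publish each entry as a job counter.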



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13091) DistCp masks potential CRC check failures

2016-05-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15273618#comment-15273618
 ] 

Hadoop QA commented on HADOOP-13091:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
23s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 20s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 20s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
16s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 26s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
51s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
35s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 17s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 17s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 18s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 18s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 13s 
{color} | {color:red} hadoop-tools/hadoop-distcp: The patch generated 1 new + 
60 unchanged - 1 fixed = 61 total (was 61) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
45s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 31s 
{color} | {color:green} hadoop-distcp in the patch passed with JDK v1.8.0_91. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 6s 
{color} | {color:green} hadoop-distcp in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
21s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 35m 1s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:cf2ee45 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12802592/HADOOP-13091.003.patch
 |
| JIRA Issue | HADOOP-13091 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 75689f5c2fb9 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 8d48266 |

[jira] [Commented] (HADOOP-13065) Add a new interface for retrieving FS and FC Statistics

2016-05-05 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15273605#comment-15273605
 ] 

Colin Patrick McCabe commented on HADOOP-13065:
---

Thanks for the reviews.

bq. in FileSystem.getStatistics(), For performance, you could try using 
ConcurrentMap for the map, and only if it is not present create the objects and 
call putIfAbsent() (or a synchronized block create and update the maps (with a 
second lookup there to eliminate the small race condition). This will eliminate 
the sync point on a simple lookup when the entry exists.

Hmm.  I don't think that we really need to optimize this function.  When using 
the new API, the only time this function gets called is when a new FileSystem 
object is created, which should be very rare.
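The lock-free lookup idiom quoted above (read the concurrent map first, and only race putIfAbsent() on a miss) can be sketched as follows. The class and field names are illustrative, not Hadoop's actual code:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Illustrative sketch: a plain get() serves the common case with no lock,
// and putIfAbsent() resolves the rare race where two threads create a
// value for the same key at the same time.
public class StatisticsRegistry {
  private final ConcurrentMap<String, long[]> statsByScheme =
      new ConcurrentHashMap<>();

  public long[] getStatistics(String scheme) {
    long[] existing = statsByScheme.get(scheme);  // lock-free fast path
    if (existing != null) {
      return existing;
    }
    long[] created = new long[5];  // stand-in for a real statistics object
    long[] raced = statsByScheme.putIfAbsent(scheme, created);
    return raced != null ? raced : created;  // keep whichever instance won
  }
}
```

As the comment notes, this optimization only matters if lookups are frequent; if the method is called only when a FileSystem object is created, a simpler synchronized lookup is likely fine.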

bq. For testing, a way to reset/remove an entry could be handy.

We do have some tests that zero out the existing statistics objects.  I'm not 
sure if removing the entry really gets us more coverage than we have now, since 
we know that it was created by this code path (therefore the code path was 
tested).

bq. That said, we can first deprecate FileSystem#getStatistics()?

Agree.

> Add a new interface for retrieving FS and FC Statistics
> ---
>
> Key: HADOOP-13065
> URL: https://issues.apache.org/jira/browse/HADOOP-13065
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Reporter: Ram Venkatesh
>Assignee: Mingliang Liu
> Attachments: HADOOP-13065-007.patch, HADOOP-13065.008.patch, 
> HADOOP-13065.009.patch, HDFS-10175.000.patch, HDFS-10175.001.patch, 
> HDFS-10175.002.patch, HDFS-10175.003.patch, HDFS-10175.004.patch, 
> HDFS-10175.005.patch, HDFS-10175.006.patch, TestStatisticsOverhead.java
>
>






[jira] [Commented] (HADOOP-13091) DistCp masks potential CRC check failures

2016-05-05 Thread Lin Yiqun (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15273598#comment-15273598
 ] 

Lin Yiqun commented on HADOOP-13091:


[~cnauroth], thanks for your reply. I agree with your comment. In addition, 
Jenkins seems to be hung, so I am uploading the v003 patch again.

> DistCp masks potential CRC check failures
> -
>
> Key: HADOOP-13091
> URL: https://issues.apache.org/jira/browse/HADOOP-13091
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Elliot West
>Assignee: Lin Yiqun
> Attachments: HDFS-10338.001.patch, HDFS-10338.002.patch
>
>
> There appear to be edge cases whereby CRC checks may be circumvented when 
> requests for checksums from the source or target file system fail. In this 
> event CRCs could differ between the source and target and yet the DistCp copy 
> would succeed, even when the 'skip CRC check' option is not being used.
> The code in question is contained in the method 
> [{{org.apache.hadoop.tools.util.DistCpUtils#checksumsAreEqual(...)}}|https://github.com/apache/hadoop/blob/release-2.7.1/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/DistCpUtils.java#L457]
> Specifically this code block suggests that if there is a failure when trying 
> to read the source or target checksum then the method will return {{true}} 
> (i.e.  the checksums are equal), implying that the check succeeded. In actual 
> fact we just failed to obtain the checksum and could not perform the check.
> {code}
> try {
>   sourceChecksum = sourceChecksum != null ? sourceChecksum
>       : sourceFS.getFileChecksum(source);
>   targetChecksum = targetFS.getFileChecksum(target);
> } catch (IOException e) {
>   LOG.error("Unable to retrieve checksum for " + source + " or " + target, e);
> }
> return (sourceChecksum == null || targetChecksum == null ||
>     sourceChecksum.equals(targetChecksum));
> {code}
> I believe that at the very least the caught {{IOException}} should be 
> re-thrown. If this is not deemed desirable then I believe an option 
> ({{--strictCrc}}?) should be added to enforce a strict check where we require 
> that both the source and target CRCs are retrieved, are not null, and are 
> then compared for equality. If for any reason either of the CRCs retrievals 
> fail then an exception is thrown.
> Clearly some {{FileSystems}} do not support CRCs and invocations to 
> {{FileSystem.getFileChecksum(...)}} return {{null}} in these instances. I 
> would suggest that these should fail a strict CRC check to prevent users 
> developing a false sense of security in their copy pipeline.
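A strict variant along the lines proposed above could look like the following sketch. The method name and the behavior on failure are taken from this ticket's proposal, not from DistCp's actual code, and a plain Object stands in for Hadoop's FileChecksum type:

```java
import java.io.IOException;

// Hypothetical strict checksum comparison: a missing or unobtainable
// checksum fails the check instead of silently passing it.
public class StrictChecksumCheck {
  public static boolean checksumsAreEqualStrict(Object sourceChecksum,
                                                Object targetChecksum)
      throws IOException {
    if (sourceChecksum == null || targetChecksum == null) {
      // Under a strict option, "could not obtain a checksum" is an error,
      // not a successful comparison.
      throw new IOException("checksum unavailable; failing strict CRC check");
    }
    return sourceChecksum.equals(targetChecksum);
  }
}
```

The caller would let the IOException propagate (or translate it), so a failed checksum retrieval aborts the copy rather than masking a possible corruption.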






[jira] [Updated] (HADOOP-13091) DistCp masks potential CRC check failures

2016-05-05 Thread Lin Yiqun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lin Yiqun updated HADOOP-13091:
---
Attachment: HADOOP-13091.003.patch

> DistCp masks potential CRC check failures
> -
>
> Key: HADOOP-13091
> URL: https://issues.apache.org/jira/browse/HADOOP-13091
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Elliot West
>Assignee: Lin Yiqun
> Attachments: HADOOP-13091.003.patch, HDFS-10338.001.patch, 
> HDFS-10338.002.patch
>
>






[jira] [Updated] (HADOOP-13091) DistCp masks potential CRC check failures

2016-05-05 Thread Lin Yiqun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lin Yiqun updated HADOOP-13091:
---
Attachment: (was: HADOOP-13091.003.patch)

> DistCp masks potential CRC check failures
> -
>
> Key: HADOOP-13091
> URL: https://issues.apache.org/jira/browse/HADOOP-13091
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Elliot West
>Assignee: Lin Yiqun
> Attachments: HADOOP-13091.003.patch, HDFS-10338.001.patch, 
> HDFS-10338.002.patch
>
>






[jira] [Commented] (HADOOP-13098) Dynamic LogLevel setting page should accept case-insensitive log level string

2016-05-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15273442#comment-15273442
 ] 

Hudson commented on HADOOP-13098:
-

FAILURE: Integrated in Hadoop-trunk-Commit #9727 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9727/])
HADOOP-13098. Dynamic LogLevel setting page should accept (xyao: rev 
4e5e87ddd4a47dbea2b23387782e7cd47dec560e)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/log/LogLevel.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/log/TestLogLevel.java


> Dynamic LogLevel setting page should accept case-insensitive log level string
> -
>
> Key: HADOOP-13098
> URL: https://issues.apache.org/jira/browse/HADOOP-13098
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Junping Du
>Assignee: Junping Du
> Fix For: 2.8.0
>
> Attachments: HADOOP-13098-v2.patch, HADOOP-13098-v3.patch, 
> HADOOP-13098.patch
>
>
> Our current logLevel setting page (http://deamon_web_service_address/logLevel) 
> only accepts a fully upper-case log level string, which means "Debug" or 
> "debug" is treated as a bad log level. I think we should enhance the tool to 
> ignore upper/lower case.
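The fix is essentially case normalization: upper-case the submitted level string before handing it to the logging framework's level parser. A dependency-free sketch (the method name is illustrative, not the actual patch):

```java
import java.util.Locale;

// Normalize user input such as "Debug" or "debug" to the canonical
// upper-case form ("DEBUG") that a log level parser expects.
public class LogLevelParser {
  public static String normalizeLevel(String raw) {
    // Locale.ROOT avoids surprises in locales with unusual casing rules.
    return raw == null ? null : raw.trim().toUpperCase(Locale.ROOT);
  }
}
```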






[jira] [Commented] (HADOOP-13008) Add XFS Filter for UIs to Hadoop Common

2016-05-05 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15273430#comment-15273430
 ] 

Larry McCay commented on HADOOP-13008:
--

Hi [~appy] - I'd really like to make sure that this patch addresses the use 
case(s) you were targeting in HADOOP-12234. If you have a chance, could you 
please take a look?

I'll be providing a new version to address review comments, but it will be 
functionally and configurationally the same.


> Add XFS Filter for UIs to Hadoop Common
> ---
>
> Key: HADOOP-13008
> URL: https://issues.apache.org/jira/browse/HADOOP-13008
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: security
>Reporter: Larry McCay
>Assignee: Larry McCay
> Fix For: 2.8.0
>
> Attachments: HADOOP-13008-001.patch
>
>
> Cross Frame Scripting (XFS) prevention for UIs can be provided through a 
> common servlet filter. This filter will set the X-Frame-Options HTTP header 
> to DENY unless configured to another valid setting.
> There are a number of UIs that could simply add this to their filter chains, 
> as well as the YARN web app proxy, which could add it for all of its proxied 
> UIs, if appropriate.
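The core of such a filter is a single response header. Below is a dependency-free sketch of just the header logic; a real implementation would be a javax.servlet.Filter, and the class, enum, and method names here are illustrative:

```java
import java.util.Map;

// Illustrative sketch: set X-Frame-Options on every response, defaulting
// to DENY when no valid option is configured, so pages cannot be framed
// by other sites (Cross Frame Scripting prevention).
public class XFrameOptionsSketch {
  public enum XFrameOption { DENY, SAMEORIGIN }

  public static void addXFrameHeader(Map<String, String> responseHeaders,
                                     XFrameOption configured) {
    XFrameOption option = (configured != null) ? configured : XFrameOption.DENY;
    responseHeaders.put("X-Frame-Options", option.name());
  }
}
```

In a servlet filter, the same line would run against the HttpServletResponse before the request is passed down the chain.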






[jira] [Updated] (HADOOP-13098) Dynamic LogLevel setting page should accept case-insensitive log level string

2016-05-05 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-13098:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Thanks [~djp] for the contribution and [~liuml07] for the code review. I've 
committed the patch to trunk, branch-2 and branch-2.8.

> Dynamic LogLevel setting page should accept case-insensitive log level string
> -
>
> Key: HADOOP-13098
> URL: https://issues.apache.org/jira/browse/HADOOP-13098
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Junping Du
>Assignee: Junping Du
> Fix For: 2.8.0
>
> Attachments: HADOOP-13098-v2.patch, HADOOP-13098-v3.patch, 
> HADOOP-13098.patch
>
>






[jira] [Updated] (HADOOP-13098) Dynamic LogLevel setting page should accept case-insensitive log level string

2016-05-05 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-13098:

Summary: Dynamic LogLevel setting page should accept case-insensitive log 
level string  (was: Dynamic LogLevel setting page should accept log level 
string with mixing upper case and lower case)

> Dynamic LogLevel setting page should accept case-insensitive log level string
> -
>
> Key: HADOOP-13098
> URL: https://issues.apache.org/jira/browse/HADOOP-13098
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Junping Du
>Assignee: Junping Du
> Attachments: HADOOP-13098-v2.patch, HADOOP-13098-v3.patch, 
> HADOOP-13098.patch
>
>






[jira] [Commented] (HADOOP-12942) hadoop credential commands non-obviously use password of "none"

2016-05-05 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15273414#comment-15273414
 ] 

Larry McCay commented on HADOOP-12942:
--

Hi [~yoderme] - this is looking pretty good. 

* However, I don't like that the warnings are displayed on commands other than 
create. The warning really should only be displayed when the keystore is being 
created, because at that point it doesn't exist yet. That said, I could be 
convinced that users should also be warned when they are adding a new 
credential to a provider that is using the default password.

* There are also a couple of lines with trailing whitespace in the command 
manual change.

I think if we can change the above we are good to go!


> hadoop credential commands non-obviously use password of "none"
> ---
>
> Key: HADOOP-12942
> URL: https://issues.apache.org/jira/browse/HADOOP-12942
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Mike Yoder
>Assignee: Mike Yoder
> Attachments: HADOOP-12942.001.patch, HADOOP-12942.002.patch, 
> HADOOP-12942.003.patch, HADOOP-12942.004.patch, HADOOP-12942.005.patch
>
>
> The "hadoop credential create" command, when using a jceks provider, defaults 
> to using the value of "none" for the password that protects the jceks file.  
> This is not obvious in the command or in documentation - to users or to other 
> hadoop developers - and leads to jceks files that essentially are not 
> protected.
> In this example, I'm adding a credential entry with name of "foo" and a value 
> specified by the password entered:
> {noformat}
> # hadoop credential create foo -provider localjceks://file/bar.jceks
> Enter password: 
> Enter password again: 
> foo has been successfully created.
> org.apache.hadoop.security.alias.LocalJavaKeyStoreProvider has been updated.
> {noformat}
> However, the password that protects the file bar.jceks is "none", and there 
> is no obvious way to change that. The practical way of supplying the password 
> at this time is something akin to
> {noformat}
> HADOOP_CREDSTORE_PASSWORD=credpass hadoop credential create --provider ...
> {noformat}
> That is, stuffing HADOOP_CREDSTORE_PASSWORD into the environment of the 
> command. 
> This is more than a documentation issue. I believe that the password ought to 
> be _required_.  We have three implementations at this point, the two 
> JavaKeystore ones and the UserCredential. The latter is "transient" which 
> does not make sense to use in this context. The former need some sort of 
> password, and it's relatively easy to envision that any non-transient 
> implementation would need a mechanism by which to protect the store that it's 
> creating.  
> The implementation gets interesting because the password in the 
> AbstractJavaKeyStoreProvider is determined in the constructor, and changing 
> it after the fact would get messy. So this probably means that the 
> CredentialProviderFactory should have another factory method like the first 
> that additionally takes the password, and an additional constructor exist in 
> all the implementations that takes the password. 
> Then we just ask for the password in getCredentialProvider() and that gets 
> passed down via the factory to the implementation. The code does have 
> logic in the factory to try multiple providers, but I don't really see how 
> multiple providers would rationally be used in the command shell context.
> This issue was brought to light when a user stored credentials for a Sqoop 
> action in Oozie; upon trying to figure out where the password was coming from 
> we discovered it to be the default value of "none".






[jira] [Commented] (HADOOP-13103) Group resolution from LDAP may fail on javax.naming.ServiceUnavailableException

2016-05-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15273328#comment-15273328
 ] 

Hudson commented on HADOOP-13103:
-

FAILURE: Integrated in Hadoop-trunk-Commit #9726 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9726/])
HADOOP-13103 Group resolution from LDAP may fail on (szetszwo: rev 
f305d9c0f64fd7d085f01eaae2154ef13b05b197)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/LdapGroupsMapping.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestLdapGroupsMapping.java


> Group resolution from LDAP may fail on 
> javax.naming.ServiceUnavailableException
> ---
>
> Key: HADOOP-13103
> URL: https://issues.apache.org/jira/browse/HADOOP-13103
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Minor
> Fix For: 2.7.3
>
> Attachments: c13103_20160505.patch, c13103_20160505b.patch
>
>
> According to the 
> [javadoc|https://docs.oracle.com/javase/7/docs/api/javax/naming/ServiceUnavailableException.html],
>  ServiceUnavailableException is thrown when attempting to communicate with a 
> directory or naming service and that service is not available. It might be 
> unavailable for different reasons. For example, the server might be too busy 
> to service the request, or the server might not be registered to service any 
> requests, etc.
> We should retry on it.
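The proposed retry could be sketched generically as follows. The helper name and bounded attempt count are illustrative assumptions, not the committed Hadoop change; only ServiceUnavailableException is treated as transient:

```java
import java.util.concurrent.Callable;
import javax.naming.NamingException;
import javax.naming.ServiceUnavailableException;

// Illustrative sketch: retry an LDAP lookup a bounded number of times when
// the directory service reports itself unavailable; any other failure
// propagates immediately.
public class LdapRetry {
  public static <T> T withRetry(Callable<T> lookup, int maxAttempts)
      throws Exception {
    NamingException last = null;
    for (int attempt = 1; attempt <= maxAttempts; attempt++) {
      try {
        return lookup.call();
      } catch (ServiceUnavailableException e) {
        last = e;  // transient: the server may simply be busy right now
      }
    }
    throw last;  // all attempts exhausted
  }
}
```

A production version would typically also add a backoff delay between attempts rather than retrying immediately.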






[jira] [Commented] (HADOOP-12701) Run checkstyle on test source files

2016-05-05 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15273297#comment-15273297
 ] 

John Zhuge commented on HADOOP-12701:
-

[~andrew.wang] Maven checkstyle plugin does not run on test source files by 
default.

> Run checkstyle on test source files
> ---
>
> Key: HADOOP-12701
> URL: https://issues.apache.org/jira/browse/HADOOP-12701
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HADOOP-12701.001.patch
>
>
> Test source files are not checked by checkstyle because Maven checkstyle 
> plugin parameter *includeTestSourceDirectory* is *false* by default.
> Propose to enable checkstyle on test source files in order to improve the 
> quality of unit tests.






[jira] [Commented] (HADOOP-13028) add low level counter metrics for S3A; use in read performance tests

2016-05-05 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15273296#comment-15273296
 ] 

Chris Nauroth commented on HADOOP-13028:


[~ste...@apache.org], the changes in patch v009 look good to me.  I think this 
is close to being complete.

There was an earlier round of feedback from me that has not yet been addressed. 
 This was small nitpicky stuff, nothing as tricky as the actual seek logic.  
Here is a direct link to that comment.

https://issues.apache.org/jira/browse/HADOOP-13028?focusedCommentId=15267400=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15267400

> add low level counter metrics for S3A; use in read performance tests
> 
>
> Key: HADOOP-13028
> URL: https://issues.apache.org/jira/browse/HADOOP-13028
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, metrics
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13028-001.patch, HADOOP-13028-002.patch, 
> HADOOP-13028-004.patch, HADOOP-13028-005.patch, HADOOP-13028-006.patch, 
> HADOOP-13028-007.patch, HADOOP-13028-008.patch, HADOOP-13028-009.patch, 
> HADOOP-13028-branch-2-008.patch, HADOOP-13028-branch-2-009.patch, 
> org.apache.hadoop.fs.s3a.scale.TestS3AInputStreamPerformance-output.txt, 
> org.apache.hadoop.fs.s3a.scale.TestS3AInputStreamPerformance-output.txt
>
>
> Against S3 (and other object stores), opening connections can be expensive, 
> and closing connections may be expensive (a sign of a regression). 
> The S3A FS and individual input streams should have counters of the number of 
> open/close/failure+reconnect operations, and timers of how long things take. 
> These can be used downstream to measure the efficiency of the code (how often 
> connections are being made), connection reliability, etc.
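The kind of low-level per-stream statistics described above can be sketched with a handful of atomic counters; the class and field names here are hypothetical, not the actual S3A instrumentation API:

```java
import java.util.concurrent.atomic.AtomicLong;

/**
 * Sketch of per-stream statistics: counters for open/close/reconnect
 * events plus the accumulated time spent opening, in nanoseconds.
 */
public class StreamStatisticsSketch {
  final AtomicLong openOperations = new AtomicLong();
  final AtomicLong closeOperations = new AtomicLong();
  final AtomicLong reconnects = new AtomicLong();
  final AtomicLong openTimeNanos = new AtomicLong();

  void streamOpened(long durationNanos) {
    openOperations.incrementAndGet();
    openTimeNanos.addAndGet(durationNanos);
  }

  void streamClosed() {
    closeOperations.incrementAndGet();
  }

  void reconnected(long durationNanos) {
    reconnects.incrementAndGet();
    openTimeNanos.addAndGet(durationNanos);
  }
}
```

Downstream code can then compare, say, `openOperations` against bytes read to judge how often connections are being re-made.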






[jira] [Commented] (HADOOP-12701) Run checkstyle on test source files

2016-05-05 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15273295#comment-15273295
 ] 

John Zhuge commented on HADOOP-12701:
-

@andrew wang Maven checkstyle plugin does not run on test source files by 
default.

> Run checkstyle on test source files
> ---
>
> Key: HADOOP-12701
> URL: https://issues.apache.org/jira/browse/HADOOP-12701
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HADOOP-12701.001.patch
>
>
> Test source files are not checked by checkstyle because Maven checkstyle 
> plugin parameter *includeTestSourceDirectory* is *false* by default.
> Propose to enable checkstyle on test source files in order to improve the 
> quality of unit tests.






[jira] [Updated] (HADOOP-13103) Group resolution from LDAP may fail on javax.naming.ServiceUnavailableException

2016-05-05 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HADOOP-13103:
-
   Resolution: Fixed
Fix Version/s: 2.7.3
   Status: Resolved  (was: Patch Available)

Thanks Chris for reviewing the patches.

I have committed this.

> Group resolution from LDAP may fail on 
> javax.naming.ServiceUnavailableException
> ---
>
> Key: HADOOP-13103
> URL: https://issues.apache.org/jira/browse/HADOOP-13103
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Minor
> Fix For: 2.7.3
>
> Attachments: c13103_20160505.patch, c13103_20160505b.patch
>
>
> According to the 
> [javadoc|https://docs.oracle.com/javase/7/docs/api/javax/naming/ServiceUnavailableException.html],
>  ServiceUnavailableException is thrown when attempting to communicate with a 
> directory or naming service and that service is not available. It might be 
> unavailable for different reasons. For example, the server might be too busy 
> to service the request, or the server might not be registered to service any 
> requests, etc.
> We should retry on it.






[jira] [Commented] (HADOOP-13102) Update GroupsMapping documentation to reflect the new changes

2016-05-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15273258#comment-15273258
 ] 

Hadoop QA commented on HADOOP-13102:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
19s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 10s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 6s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 11m 27s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:cf2ee45 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12802528/HADOOP-13102-001.patch
 |
| JIRA Issue | HADOOP-13102 |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux 4b70e33acab3 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / d8faf47 |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9297/console |
| Powered by | Apache Yetus 0.3.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Update GroupsMapping documentation to reflect the new changes
> -
>
> Key: HADOOP-13102
> URL: https://issues.apache.org/jira/browse/HADOOP-13102
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.8.0
>Reporter: Anu Engineer
>Assignee: Esther Kundin
>  Labels: docs, ldap
> Fix For: 2.8.0
>
> Attachments: HADOOP-13102-001.patch
>
>
> We need to update this section in the groupsMapping.md
> {noformat}
> Line 84:
> The implementation does not attempt to resolve group hierarchies. Therefore, 
> a user must be an explicit member of a group object
> in order to be considered a member.
> {noformat} 
> With the changes in HADOOP-12291 this is no longer true, since we will have 
> the ability to walk the group hierarchies.
> We should also modify this line:
> {noformat}
> Line 81:
> It is possible to set a maximum time limit when searching and awaiting a 
> result.
> Set `hadoop.security.group.mapping.ldap.directory.search.timeout` to 0 if an 
> infinite wait period is desired. Default is 10,000 milliseconds (10 seconds).
> {noformat}
> We might want to document how the new setting affects the timeout, and also 
> add the new setting into this doc:
> {noformat}
>  hadoop.security.group.mapping.ldap.search.group.hierarchy.levels
> {noformat}






[jira] [Commented] (HADOOP-13103) Group resolution from LDAP may fail on javax.naming.ServiceUnavailableException

2016-05-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15273255#comment-15273255
 ] 

Hadoop QA commented on HADOOP-13103:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
38s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 57s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 47s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
22s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
37s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 52s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 52s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 49s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 49s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
23s {color} | {color:green} hadoop-common-project/hadoop-common: The patch 
generated 0 new + 128 unchanged - 6 fixed = 128 total (was 134) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 23s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_91. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 13s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 61m 35s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:cf2ee45 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12802519/c13103_20160505b.patch
 |
| JIRA Issue | HADOOP-13103 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux cfe50d546dfd 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git 

[jira] [Commented] (HADOOP-13051) Add Glob unit test for special characters

2016-05-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15273198#comment-15273198
 ] 

Hudson commented on HADOOP-13051:
-

FAILURE: Integrated in Hadoop-trunk-Commit #9725 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9725/])
HADOOP-13051. Test for special characters in path being respected during 
(raviprak: rev d8faf47f32c7ace6ceeb55bbb584c2dbab38902f)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestGlobPaths.java


> Add Glob unit test for special characters
> -
>
> Key: HADOOP-13051
> URL: https://issues.apache.org/jira/browse/HADOOP-13051
> Project: Hadoop Common
>  Issue Type: Test
>  Components: fs
>Affects Versions: 3.0.0
>Reporter: Harsh J
>Assignee: Harsh J
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HADOOP-13051.000.patch
>
>
> On {{branch-2}}, the below is the (incorrect) behaviour today, where paths 
> with special characters get dropped during globStatus calls:
> {code}
> bin/hdfs dfs -mkdir /foo
> bin/hdfs dfs -touchz /foo/foo1
> bin/hdfs dfs -touchz $'/foo/foo1\r'
> bin/hdfs dfs -ls '/foo/*'
> -rw-r--r--   3 harsh supergroup  0 2016-04-22 17:35 /foo/foo1
> -rw-r--r--   3 harsh supergroup  0 2016-04-22 17:35 /foo/foo1^M
> bin/hdfs dfs -ls '/foo/*'
> -rw-r--r--   3 harsh supergroup  0 2016-04-22 17:35 /foo/foo1
> {code}
> Whereas trunk has the right behaviour, subtly fixed via the pattern library 
> change of HADOOP-12436:
> {code}
> bin/hdfs dfs -mkdir /foo
> bin/hdfs dfs -touchz /foo/foo1
> bin/hdfs dfs -touchz $'/foo/foo1\r'
> bin/hdfs dfs -ls '/foo/*'
> -rw-r--r--   3 harsh supergroup  0 2016-04-22 17:35 /foo/foo1
> -rw-r--r--   3 harsh supergroup  0 2016-04-22 17:35 /foo/foo1^M
> bin/hdfs dfs -ls '/foo/*'
> -rw-r--r--   3 harsh supergroup  0 2016-04-22 17:35 /foo/foo1
> -rw-r--r--   3 harsh supergroup  0 2016-04-22 17:35 /foo/foo1^M
> {code}
> (I've placed a ^M explicitly to indicate presence of the intentional hidden 
> character)
> We should still add a simple test-case to cover this situation for future 
> regressions.






[jira] [Commented] (HADOOP-8065) distcp should have an option to compress data while copying.

2016-05-05 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15273160#comment-15273160
 ] 

Ravi Prakash commented on HADOOP-8065:
--

Thanks Suraj!
In CopyMapper, you are declaring {{codec}}, assigning it a value and then never 
using it. Are you sure you need those changes? Maybe you are missing some part 
of the patch? I am looking at 
[HADOOP-8065-trunk_2016-04-29-4.patch|https://issues.apache.org/jira/secure/attachment/12801507/HADOOP-8065-trunk_2016-04-29-4.patch]

To enable compression during transit is a MUCH bigger Epic. We may have to 
change 
[FileSystem|https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java#L769],
 and 
[BlockSender|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java]
 amongst others (on the datanode side). A lot more people will also have an 
opinion on it, and it's probably a multi-month effort. Also, striped blocks may 
make it more complicated. People may argue that users should compress and 
decompress at the application level. It'd just be way more complicated than 
what we are trying to do here. I suggest we tackle that after this problem.
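Compressing while copying, as opposed to a separate pass, can be sketched in plain Java; this uses `java.util.zip` for self-containment rather than Hadoop's CompressionCodec, and the gzip choice and ".gz" suffix are illustrative of the proposal to append an extension based on the compression type:

```java
import java.io.BufferedInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.zip.GZIPOutputStream;

/** Sketch: compress bytes as they stream from source to target. */
public class CompressingCopySketch {
  static File copyCompressed(File src, File dstDir) throws IOException {
    // Append an extension matching the compression type.
    File dst = new File(dstDir, src.getName() + ".gz");
    try (InputStream in = new BufferedInputStream(new FileInputStream(src));
         OutputStream out = new GZIPOutputStream(new FileOutputStream(dst))) {
      byte[] buf = new byte[8192];
      int n;
      while ((n = in.read(buf)) != -1) {
        out.write(buf, 0, n);  // compressed on the way through, no second pass
      }
    }
    return dst;
  }
}
```

Note this also illustrates why `-update` cannot compare sizes: the compressed target's length no longer matches the source's.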

> distcp should have an option to compress data while copying.
> 
>
> Key: HADOOP-8065
> URL: https://issues.apache.org/jira/browse/HADOOP-8065
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 0.20.2
>Reporter: Suresh Antony
>Assignee: Suraj Nayak
>Priority: Minor
>  Labels: distcp
> Fix For: 0.20.2
>
> Attachments: HADOOP-8065-trunk_2015-11-03.patch, 
> HADOOP-8065-trunk_2015-11-04.patch, HADOOP-8065-trunk_2016-04-29-4.patch, 
> patch.distcp.2012-02-10
>
>
> We would like to compress the data while transferring it from our source 
> system to the target system. One way to do this is to write a map/reduce job 
> that compresses the data before/after it is transferred, but that looks 
> inefficient. 
> Since distcp is already reading and writing the data, it would be better if 
> it could compress while doing so. 
> The flip side is that the distcp -update option cannot check file size before 
> copying data; it can only check for the existence of the file. 
> So I propose that if the -compress option is given, file size is not checked.
> Also, when we copy a file, an appropriate extension needs to be added to it 
> depending on the compression type.






[jira] [Commented] (HADOOP-12866) add a subcommand for gridmix

2016-05-05 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15273139#comment-15273139
 ] 

Allen Wittenauer commented on HADOOP-12866:
---

OK, looks like it is.

But the documentation fix is slightly incomplete.  This whole section should 
just get removed since hadoop gridmix takes care of it all:

{code}
GridMix expects certain library *JARs* to be present in  the *CLASSPATH*.
One simple way to run GridMix is to use `hadoop jar` command to run it.
You also need to add the JAR of Rumen to classpath for both of client and tasks
as example shown below.

```
HADOOP_CLASSPATH=$HADOOP_HOME/share/hadoop/tools/lib/hadoop-rumen-2.5.1.jar \
  $HADOOP_HOME/bin/hadoop jar 
$HADOOP_HOME/share/hadoop/tools/lib/hadoop-gridmix-2.5.1.jar \
-libjars $HADOOP_HOME/share/hadoop/tools/lib/hadoop-rumen-2.5.1.jar \
[-generate ] [-users ]  
```
{code}

> add a subcommand for gridmix
> 
>
> Key: HADOOP-12866
> URL: https://issues.apache.org/jira/browse/HADOOP-12866
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Kai Sasaki
> Attachments: HADOOP-12866.01.patch
>
>
> gridmix shouldn't require a raw java command line to run.






[jira] [Updated] (HADOOP-13079) Add dfs -ls -q to print special character as question mark

2016-05-05 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-13079:

Summary: Add dfs -ls -q to print special character as question mark  (was: 
dfs -ls -q should print special characters as ?)

> Add dfs -ls -q to print special character as question mark
> --
>
> Key: HADOOP-13079
> URL: https://issues.apache.org/jira/browse/HADOOP-13079
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>
> Add option {{-q}} to "hdfs dfs -ls" to print non-printable characters as "?". 
> Non-printable characters are defined by 
> [isprint(3)|http://linux.die.net/man/3/isprint] according to the current 
> locale.
> Default to {{-q}} behavior on terminal; otherwise, print raw characters. See 
> the difference in these 2 command lines:
> * {{hadoop fs -ls /dir}}
> * {{hadoop fs -ls /dir | od -c}}
> In C, {{isatty(STDOUT_FILENO)}} is used to find out whether the output is a 
> terminal. Since Java doesn't have {{isatty}}, I will use JNI to call C 
> {{isatty()}} because the closest test {{System.console() == null}} does not 
> work in some cases.
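The `-q` substitution described above can be sketched in pure Java; printability is approximated here with `Character.isISOControl` rather than C's locale-aware isprint(3), and the terminal check uses `System.console()`, which, as the issue notes, is not fully reliable:

```java
/** Sketch of -q: replace non-printable characters in a name with '?'. */
public class LsQuotingSketch {
  static String quote(String name) {
    StringBuilder sb = new StringBuilder(name.length());
    for (int i = 0; i < name.length(); i++) {
      char c = name.charAt(i);
      // Approximation of "non-printable": ISO control characters.
      sb.append(Character.isISOControl(c) ? '?' : c);
    }
    return sb.toString();
  }

  static boolean shouldQuoteByDefault() {
    // Closest pure-Java stand-in for isatty(STDOUT_FILENO):
    // non-null when stdin/stdout are attached to a console.
    return System.console() != null;
  }
}
```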






[jira] [Updated] (HADOOP-13079) dfs -ls -q should print special characters as ?

2016-05-05 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-13079:

Summary: dfs -ls -q should print special characters as ?  (was: dfs -ls -q 
prints non-printable characters)

> dfs -ls -q should print special characters as ?
> ---
>
> Key: HADOOP-13079
> URL: https://issues.apache.org/jira/browse/HADOOP-13079
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>
> Add option {{-q}} to "hdfs dfs -ls" to print non-printable characters as "?". 
> Non-printable characters are defined by 
> [isprint(3)|http://linux.die.net/man/3/isprint] according to the current 
> locale.
> Default to {{-q}} behavior on terminal; otherwise, print raw characters. See 
> the difference in these 2 command lines:
> * {{hadoop fs -ls /dir}}
> * {{hadoop fs -ls /dir | od -c}}
> In C, {{isatty(STDOUT_FILENO)}} is used to find out whether the output is a 
> terminal. Since Java doesn't have {{isatty}}, I will use JNI to call C 
> {{isatty()}} because the closest test {{System.console() == null}} does not 
> work in some cases.






[jira] [Commented] (HADOOP-13051) Add Glob unit test for special characters

2016-05-05 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15273115#comment-15273115
 ] 

Ravi Prakash commented on HADOOP-13051:
---

Thanks for the patch Harsh and the review John! Committed to trunk.

> Add Glob unit test for special characters
> -
>
> Key: HADOOP-13051
> URL: https://issues.apache.org/jira/browse/HADOOP-13051
> Project: Hadoop Common
>  Issue Type: Test
>  Components: fs
>Affects Versions: 3.0.0
>Reporter: Harsh J
>Assignee: Harsh J
>Priority: Minor
> Attachments: HADOOP-13051.000.patch
>
>
> On {{branch-2}}, the below is the (incorrect) behaviour today, where paths 
> with special characters get dropped during globStatus calls:
> {code}
> bin/hdfs dfs -mkdir /foo
> bin/hdfs dfs -touchz /foo/foo1
> bin/hdfs dfs -touchz $'/foo/foo1\r'
> bin/hdfs dfs -ls '/foo/*'
> -rw-r--r--   3 harsh supergroup  0 2016-04-22 17:35 /foo/foo1
> -rw-r--r--   3 harsh supergroup  0 2016-04-22 17:35 /foo/foo1^M
> bin/hdfs dfs -ls '/foo/*'
> -rw-r--r--   3 harsh supergroup  0 2016-04-22 17:35 /foo/foo1
> {code}
> Whereas trunk has the right behaviour, subtly fixed via the pattern library 
> change of HADOOP-12436:
> {code}
> bin/hdfs dfs -mkdir /foo
> bin/hdfs dfs -touchz /foo/foo1
> bin/hdfs dfs -touchz $'/foo/foo1\r'
> bin/hdfs dfs -ls '/foo/*'
> -rw-r--r--   3 harsh supergroup  0 2016-04-22 17:35 /foo/foo1
> -rw-r--r--   3 harsh supergroup  0 2016-04-22 17:35 /foo/foo1^M
> bin/hdfs dfs -ls '/foo/*'
> -rw-r--r--   3 harsh supergroup  0 2016-04-22 17:35 /foo/foo1
> -rw-r--r--   3 harsh supergroup  0 2016-04-22 17:35 /foo/foo1^M
> {code}
> (I've placed a ^M explicitly to indicate presence of the intentional hidden 
> character)
> We should still add a simple test-case to cover this situation for future 
> regressions.






[jira] [Updated] (HADOOP-13051) Add Glob unit test for special characters

2016-05-05 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash updated HADOOP-13051:
--
   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

> Add Glob unit test for special characters
> -
>
> Key: HADOOP-13051
> URL: https://issues.apache.org/jira/browse/HADOOP-13051
> Project: Hadoop Common
>  Issue Type: Test
>  Components: fs
>Affects Versions: 3.0.0
>Reporter: Harsh J
>Assignee: Harsh J
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HADOOP-13051.000.patch
>
>
> On {{branch-2}}, the below is the (incorrect) behaviour today, where paths 
> with special characters get dropped during globStatus calls:
> {code}
> bin/hdfs dfs -mkdir /foo
> bin/hdfs dfs -touchz /foo/foo1
> bin/hdfs dfs -touchz $'/foo/foo1\r'
> bin/hdfs dfs -ls '/foo/*'
> -rw-r--r--   3 harsh supergroup  0 2016-04-22 17:35 /foo/foo1
> -rw-r--r--   3 harsh supergroup  0 2016-04-22 17:35 /foo/foo1^M
> bin/hdfs dfs -ls '/foo/*'
> -rw-r--r--   3 harsh supergroup  0 2016-04-22 17:35 /foo/foo1
> {code}
> Whereas trunk has the right behaviour, subtly fixed via the pattern library 
> change of HADOOP-12436:
> {code}
> bin/hdfs dfs -mkdir /foo
> bin/hdfs dfs -touchz /foo/foo1
> bin/hdfs dfs -touchz $'/foo/foo1\r'
> bin/hdfs dfs -ls '/foo/*'
> -rw-r--r--   3 harsh supergroup  0 2016-04-22 17:35 /foo/foo1
> -rw-r--r--   3 harsh supergroup  0 2016-04-22 17:35 /foo/foo1^M
> bin/hdfs dfs -ls '/foo/*'
> -rw-r--r--   3 harsh supergroup  0 2016-04-22 17:35 /foo/foo1
> -rw-r--r--   3 harsh supergroup  0 2016-04-22 17:35 /foo/foo1^M
> {code}
> (I've placed a ^M explicitly to indicate presence of the intentional hidden 
> character)
> We should still add a simple test-case to cover this situation for future 
> regressions.






[jira] [Updated] (HADOOP-13099) Glob should return files with special characters in name

2016-05-05 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-13099:

Summary: Glob should return files with special characters in name  (was: 
Globbing does not return file whose name has nonprintable character)

> Glob should return files with special characters in name
> 
>
> Key: HADOOP-13099
> URL: https://issues.apache.org/jira/browse/HADOOP-13099
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
> Fix For: 3.0.0
>
>
> In a directory, create a file with a name containing non-printable character, 
> e.g., '\r'.  {{dfs -ls dir}} can list such file, but {{dfs -ls dir/*}} can 
> not.
> {noformat}
> $ hdfs dfs -touchz /tmp/test/abc
> $ hdfs dfs -touchz $'/tmp/test/abc\rdef'
> $ hdfs dfs -ls /tmp/test
> Found 2 items
> -rw-r--r--   3 systest supergroup  0 2016-05-05 01:35 /tmp/test/abc
> def-r--r--   3 systest supergroup  0 2016-05-05 01:36 /tmp/test/abc
> $ hdfs dfs -ls /tmp/test | od -c
> 000   F   o   u   n   d   2   i   t   e   m   s  \n   -   r
> 020   w   -   r   -   -   r   -   -   3   s   y   s
> 040   t   e   s   t   s   u   p   e   r   g   r   o   u   p
> 060   0   2   0   1   6   -
> 100   0   5   -   0   5   0   1   :   3   5   /   t   m   p
> 120   /   t   e   s   t   /   a   b   c  \n   -   r   w   -   r   -
> 140   -   r   -   -   3   s   y   s   t   e   s   t
> 160   s   u   p   e   r   g   r   o   u   p
> 200   0   2   0   1   6   -   0   5   -   0
> 220   5   0   1   :   3   6   /   t   m   p   /   t   e   s
> 240   t   /   a   b   c  \r   d   e   f  \n
> 252
> $ hdfs dfs -ls /tmp/test/*
> -rw-r--r--   3 systest supergroup  0 2016-05-05 01:35 /tmp/test/abc
> {noformat}






[jira] [Resolved] (HADOOP-13099) Globbing does not return file whose name has nonprintable character

2016-05-05 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge resolved HADOOP-13099.
-
  Resolution: Duplicate
   Fix Version/s: 3.0.0
Target Version/s:   (was: 2.8.0)

> Globbing does not return file whose name has nonprintable character
> ---
>
> Key: HADOOP-13099
> URL: https://issues.apache.org/jira/browse/HADOOP-13099
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
> Fix For: 3.0.0
>
>
> In a directory, create a file with a name containing non-printable character, 
> e.g., '\r'.  {{dfs -ls dir}} can list such file, but {{dfs -ls dir/*}} can 
> not.
> {noformat}
> $ hdfs dfs -touchz /tmp/test/abc
> $ hdfs dfs -touchz $'/tmp/test/abc\rdef'
> $ hdfs dfs -ls /tmp/test
> Found 2 items
> -rw-r--r--   3 systest supergroup  0 2016-05-05 01:35 /tmp/test/abc
> def-r--r--   3 systest supergroup  0 2016-05-05 01:36 /tmp/test/abc
> $ hdfs dfs -ls /tmp/test | od -c
> 000   F   o   u   n   d   2   i   t   e   m   s  \n   -   r
> 020   w   -   r   -   -   r   -   -   3   s   y   s
> 040   t   e   s   t   s   u   p   e   r   g   r   o   u   p
> 060   0   2   0   1   6   -
> 100   0   5   -   0   5   0   1   :   3   5   /   t   m   p
> 120   /   t   e   s   t   /   a   b   c  \n   -   r   w   -   r   -
> 140   -   r   -   -   3   s   y   s   t   e   s   t
> 160   s   u   p   e   r   g   r   o   u   p
> 200   0   2   0   1   6   -   0   5   -   0
> 220   5   0   1   :   3   6   /   t   m   p   /   t   e   s
> 240   t   /   a   b   c  \r   d   e   f  \n
> 252
> $ hdfs dfs -ls /tmp/test/*
> -rw-r--r--   3 systest supergroup  0 2016-05-05 01:35 /tmp/test/abc
> {noformat}
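One way to reproduce the underlying matching pitfall with plain java.util.regex (a hedged illustration; whether Hadoop's glob code translates {{*}} to {{.*}} exactly this way is an assumption): by default, {{.}} in a Java regex does not match line terminators such as {{\r}}, so a pattern like {{.*}} silently fails to match names containing them unless {{DOTALL}} is set.

```java
import java.util.regex.Pattern;

public class GlobCarriageReturn {
    // By default '.' in java.util.regex excludes line terminators,
    // including '\r', so ".*" cannot fully match "abc\rdef".
    public static boolean plainMatch(String name) {
        return Pattern.compile(".*").matcher(name).matches();
    }

    // With DOTALL, '.' matches any character, so the same pattern matches.
    public static boolean dotallMatch(String name) {
        return Pattern.compile(".*", Pattern.DOTALL).matcher(name).matches();
    }

    public static void main(String[] args) {
        System.out.println(plainMatch("abc\rdef"));  // false
        System.out.println(dotallMatch("abc\rdef")); // true
    }
}
```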






[jira] [Updated] (HADOOP-13051) Add Glob unit test for special characters

2016-05-05 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-13051:

Summary: Add Glob unit test for special characters  (was: Test for special 
characters in path being respected during globPaths)

> Add Glob unit test for special characters
> -
>
> Key: HADOOP-13051
> URL: https://issues.apache.org/jira/browse/HADOOP-13051
> Project: Hadoop Common
>  Issue Type: Test
>  Components: fs
>Affects Versions: 3.0.0
>Reporter: Harsh J
>Assignee: Harsh J
>Priority: Minor
> Attachments: HADOOP-13051.000.patch
>
>
> On {{branch-2}}, the below is the (incorrect) behaviour today, where paths 
> with special characters get dropped during globStatus calls:
> {code}
> bin/hdfs dfs -mkdir /foo
> bin/hdfs dfs -touchz /foo/foo1
> bin/hdfs dfs -touchz $'/foo/foo1\r'
> bin/hdfs dfs -ls '/foo/*'
> -rw-r--r--   3 harsh supergroup  0 2016-04-22 17:35 /foo/foo1
> -rw-r--r--   3 harsh supergroup  0 2016-04-22 17:35 /foo/foo1^M
> bin/hdfs dfs -ls '/foo/*'
> -rw-r--r--   3 harsh supergroup  0 2016-04-22 17:35 /foo/foo1
> {code}
> Whereas trunk has the right behaviour, subtly fixed via the pattern library 
> change of HADOOP-12436:
> {code}
> bin/hdfs dfs -mkdir /foo
> bin/hdfs dfs -touchz /foo/foo1
> bin/hdfs dfs -touchz $'/foo/foo1\r'
> bin/hdfs dfs -ls '/foo/*'
> -rw-r--r--   3 harsh supergroup  0 2016-04-22 17:35 /foo/foo1
> -rw-r--r--   3 harsh supergroup  0 2016-04-22 17:35 /foo/foo1^M
> bin/hdfs dfs -ls '/foo/*'
> -rw-r--r--   3 harsh supergroup  0 2016-04-22 17:35 /foo/foo1
> -rw-r--r--   3 harsh supergroup  0 2016-04-22 17:35 /foo/foo1^M
> {code}
> (I've placed a ^M explicitly to indicate the presence of the intentional 
> hidden character.)
> We should still add a simple test-case to cover this situation for future 
> regressions.






[jira] [Updated] (HADOOP-13102) Update GroupsMapping documentation to reflect the new changes

2016-05-05 Thread Esther Kundin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Esther Kundin updated HADOOP-13102:
---
Attachment: HADOOP-13102-001.patch

> Update GroupsMapping documentation to reflect the new changes
> -
>
> Key: HADOOP-13102
> URL: https://issues.apache.org/jira/browse/HADOOP-13102
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.8.0
>Reporter: Anu Engineer
>Assignee: Esther Kundin
>  Labels: docs, ldap
> Fix For: 2.8.0
>
> Attachments: HADOOP-13102-001.patch
>
>
> We need to update this section in the groupsMapping.md
> {noformat}
> Line 84:
> The implementation does not attempt to resolve group hierarchies. Therefore, 
> a user must be an explicit member of a group object
> in order to be considered a member.
> {noformat} 
> With the changes in HADOOP-12291 this is no longer true, since we will have 
> the ability to walk group hierarchies.
> We should also modify this line:
> {noformat}
> Line :  81
> It is possible to set a maximum time limit when searching and awaiting a 
> result.
> Set `hadoop.security.group.mapping.ldap.directory.search.timeout` to 0 if 
> infinite wait period is desired. Default is 10,000 milliseconds (10 seconds).
> {noformat}
> We might want to document how the new setting affects the timeout,
> and also add the new setting to this doc:
> {noformat}
>  hadoop.security.group.mapping.ldap.search.group.hierarchy.levels
> {noformat}
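For reference, these properties belong in core-site.xml; a hedged example fragment follows (the values shown are illustrative assumptions, not recommended settings):

```xml
<!-- Illustrative core-site.xml fragment; values are example assumptions -->
<property>
  <name>hadoop.security.group.mapping.ldap.directory.search.timeout</name>
  <!-- milliseconds; 0 means wait indefinitely (default is 10000) -->
  <value>10000</value>
</property>
<property>
  <name>hadoop.security.group.mapping.ldap.search.group.hierarchy.levels</name>
  <!-- number of nested group levels to walk; example value -->
  <value>2</value>
</property>
```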






[jira] [Updated] (HADOOP-13103) Group resolution from LDAP may fail on javax.naming.ServiceUnavailableException

2016-05-05 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13103:
---
Hadoop Flags: Reviewed

> Group resolution from LDAP may fail on 
> javax.naming.ServiceUnavailableException
> ---
>
> Key: HADOOP-13103
> URL: https://issues.apache.org/jira/browse/HADOOP-13103
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Minor
> Attachments: c13103_20160505.patch, c13103_20160505b.patch
>
>
> According to the 
> [javadoc|https://docs.oracle.com/javase/7/docs/api/javax/naming/ServiceUnavailableException.html],
>  ServiceUnavailableException is thrown when attempting to communicate with a 
> directory or naming service and that service is not available. It might be 
> unavailable for different reasons. For example, the server might be too busy 
> to service the request, or the server might not be registered to service any 
> requests, etc.
> We should retry on it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13102) Update GroupsMapping documentation to reflect the new changes

2016-05-05 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15273093#comment-15273093
 ] 

Anu Engineer commented on HADOOP-13102:
---

+1, (Non-binding). Thanks for providing this patch.

> Update GroupsMapping documentation to reflect the new changes
> -
>
> Key: HADOOP-13102
> URL: https://issues.apache.org/jira/browse/HADOOP-13102
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.8.0
>Reporter: Anu Engineer
>Assignee: Esther Kundin
>  Labels: docs, ldap
> Fix For: 2.8.0
>
> Attachments: HADOOP-13102-001.patch
>
>
> We need to update this section in the groupsMapping.md
> {noformat}
> Line 84:
> The implementation does not attempt to resolve group hierarchies. Therefore, 
> a user must be an explicit member of a group object
> in order to be considered a member.
> {noformat} 
> With the changes in HADOOP-12291 this is no longer true, since we will have 
> the ability to walk group hierarchies.
> We should also modify this line:
> {noformat}
> Line :  81
> It is possible to set a maximum time limit when searching and awaiting a 
> result.
> Set `hadoop.security.group.mapping.ldap.directory.search.timeout` to 0 if 
> infinite wait period is desired. Default is 10,000 milliseconds (10 seconds).
> {noformat}
> We might want to document how the new setting affects the timeout,
> and also add the new setting to this doc:
> {noformat}
>  hadoop.security.group.mapping.ldap.search.group.hierarchy.levels
> {noformat}






[jira] [Commented] (HADOOP-12291) Add support for nested groups in LdapGroupsMapping

2016-05-05 Thread Esther Kundin (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15273092#comment-15273092
 ] 

Esther Kundin commented on HADOOP-12291:


I have tested the change independently on a real LDAP server.

> Add support for nested groups in LdapGroupsMapping
> --
>
> Key: HADOOP-12291
> URL: https://issues.apache.org/jira/browse/HADOOP-12291
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Gautam Gopalakrishnan
>Assignee: Esther Kundin
>  Labels: features, patch
> Fix For: 2.8.0
>
> Attachments: HADOOP-12291.001.patch, HADOOP-12291.002.patch, 
> HADOOP-12291.003.patch, HADOOP-12291.004.patch
>
>
> When using {{LdapGroupsMapping}} with Hadoop, nested groups are not 
> supported. So for example if user {{jdoe}} is part of group A which is a 
> member of group B, the group mapping currently returns only group A.
> Currently this facility is available with {{ShellBasedUnixGroupsMapping}} and 
> SSSD (or similar tools), but it would be good to have this feature as part of 
> {{LdapGroupsMapping}} directly.
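As a hedged sketch of what nested-group resolution means here (the {{memberOf}} map stands in for LDAP queries, and all names are illustrative, not the actual {{LdapGroupsMapping}} implementation): starting from the user's direct groups, follow "member of" edges for a bounded number of levels.

```java
import java.util.*;

public class NestedGroups {
    // Walk "member of" edges up to maxLevels above the user's direct groups.
    public static Set<String> resolve(String user,
                                      Map<String, List<String>> memberOf,
                                      int maxLevels) {
        Set<String> result = new LinkedHashSet<>(
            memberOf.getOrDefault(user, Collections.emptyList()));
        Set<String> frontier = new LinkedHashSet<>(result);
        for (int level = 0; level < maxLevels && !frontier.isEmpty(); level++) {
            Set<String> next = new LinkedHashSet<>();
            for (String group : frontier) {
                for (String parent :
                        memberOf.getOrDefault(group, Collections.emptyList())) {
                    if (result.add(parent)) { // unseen parent group
                        next.add(parent);
                    }
                }
            }
            frontier = next; // walk one level up the hierarchy
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, List<String>> memberOf = new HashMap<>();
        memberOf.put("jdoe", Arrays.asList("A")); // jdoe is directly in A
        memberOf.put("A", Arrays.asList("B"));    // A is a member of B
        System.out.println(resolve("jdoe", memberOf, 0)); // [A]
        System.out.println(resolve("jdoe", memberOf, 1)); // [A, B]
    }
}
```

With zero levels this degrades to the old behavior (direct groups only); each additional level adds one ring of parent groups.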






[jira] [Commented] (HADOOP-13103) Group resolution from LDAP may fail on javax.naming.ServiceUnavailableException

2016-05-05 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15273090#comment-15273090
 ] 

Chris Nauroth commented on HADOOP-13103:


+1 for the patch, pending a fresh pre-commit run.  [~szetszwo], thank you for 
the patch.

> Group resolution from LDAP may fail on 
> javax.naming.ServiceUnavailableException
> ---
>
> Key: HADOOP-13103
> URL: https://issues.apache.org/jira/browse/HADOOP-13103
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Minor
> Attachments: c13103_20160505.patch, c13103_20160505b.patch
>
>
> According to the 
> [javadoc|https://docs.oracle.com/javase/7/docs/api/javax/naming/ServiceUnavailableException.html],
>  ServiceUnavailableException is thrown when attempting to communicate with a 
> directory or naming service and that service is not available. It might be 
> unavailable for different reasons. For example, the server might be too busy 
> to service the request, or the server might not be registered to service any 
> requests, etc.
> We should retry on it.






[jira] [Updated] (HADOOP-13102) Update GroupsMapping documentation to reflect the new changes

2016-05-05 Thread Esther Kundin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Esther Kundin updated HADOOP-13102:
---
Labels: docs ldap  (was: )
Status: Patch Available  (was: In Progress)

> Update GroupsMapping documentation to reflect the new changes
> -
>
> Key: HADOOP-13102
> URL: https://issues.apache.org/jira/browse/HADOOP-13102
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.8.0
>Reporter: Anu Engineer
>Assignee: Esther Kundin
>  Labels: docs, ldap
> Fix For: 2.8.0
>
> Attachments: HADOOP-13102-001.patch
>
>
> We need to update this section in the groupsMapping.md
> {noformat}
> Line 84:
> The implementation does not attempt to resolve group hierarchies. Therefore, 
> a user must be an explicit member of a group object
> in order to be considered a member.
> {noformat} 
> With the changes in HADOOP-12291 this is no longer true, since we will have 
> the ability to walk group hierarchies.
> We should also modify this line:
> {noformat}
> Line :  81
> It is possible to set a maximum time limit when searching and awaiting a 
> result.
> Set `hadoop.security.group.mapping.ldap.directory.search.timeout` to 0 if 
> infinite wait period is desired. Default is 10,000 milliseconds (10 seconds).
> {noformat}
> We might want to document how the new setting affects the timeout,
> and also add the new setting to this doc:
> {noformat}
>  hadoop.security.group.mapping.ldap.search.group.hierarchy.levels
> {noformat}






[jira] [Commented] (HADOOP-13051) Test for special characters in path being respected during globPaths

2016-05-05 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15273075#comment-15273075
 ] 

John Zhuge commented on HADOOP-13051:
-

+1 LGTM. Very useful patch. Could someone commit it?

> Test for special characters in path being respected during globPaths
> 
>
> Key: HADOOP-13051
> URL: https://issues.apache.org/jira/browse/HADOOP-13051
> Project: Hadoop Common
>  Issue Type: Test
>  Components: fs
>Affects Versions: 3.0.0
>Reporter: Harsh J
>Assignee: Harsh J
>Priority: Minor
> Attachments: HADOOP-13051.000.patch
>
>
> On {{branch-2}}, the below is the (incorrect) behaviour today, where paths 
> with special characters get dropped during globStatus calls:
> {code}
> bin/hdfs dfs -mkdir /foo
> bin/hdfs dfs -touchz /foo/foo1
> bin/hdfs dfs -touchz $'/foo/foo1\r'
> bin/hdfs dfs -ls '/foo/*'
> -rw-r--r--   3 harsh supergroup  0 2016-04-22 17:35 /foo/foo1
> -rw-r--r--   3 harsh supergroup  0 2016-04-22 17:35 /foo/foo1^M
> bin/hdfs dfs -ls '/foo/*'
> -rw-r--r--   3 harsh supergroup  0 2016-04-22 17:35 /foo/foo1
> {code}
> Whereas trunk has the right behaviour, subtly fixed via the pattern library 
> change of HADOOP-12436:
> {code}
> bin/hdfs dfs -mkdir /foo
> bin/hdfs dfs -touchz /foo/foo1
> bin/hdfs dfs -touchz $'/foo/foo1\r'
> bin/hdfs dfs -ls '/foo/*'
> -rw-r--r--   3 harsh supergroup  0 2016-04-22 17:35 /foo/foo1
> -rw-r--r--   3 harsh supergroup  0 2016-04-22 17:35 /foo/foo1^M
> bin/hdfs dfs -ls '/foo/*'
> -rw-r--r--   3 harsh supergroup  0 2016-04-22 17:35 /foo/foo1
> -rw-r--r--   3 harsh supergroup  0 2016-04-22 17:35 /foo/foo1^M
> {code}
> (I've placed a ^M explicitly to indicate the presence of the intentional 
> hidden character.)
> We should still add a simple test-case to cover this situation for future 
> regressions.






[jira] [Updated] (HADOOP-13051) Test for special characters in path being respected during globPaths

2016-05-05 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-13051:

Hadoop Flags: Reviewed

> Test for special characters in path being respected during globPaths
> 
>
> Key: HADOOP-13051
> URL: https://issues.apache.org/jira/browse/HADOOP-13051
> Project: Hadoop Common
>  Issue Type: Test
>  Components: fs
>Affects Versions: 3.0.0
>Reporter: Harsh J
>Assignee: Harsh J
>Priority: Minor
> Attachments: HADOOP-13051.000.patch
>
>
> On {{branch-2}}, the below is the (incorrect) behaviour today, where paths 
> with special characters get dropped during globStatus calls:
> {code}
> bin/hdfs dfs -mkdir /foo
> bin/hdfs dfs -touchz /foo/foo1
> bin/hdfs dfs -touchz $'/foo/foo1\r'
> bin/hdfs dfs -ls '/foo/*'
> -rw-r--r--   3 harsh supergroup  0 2016-04-22 17:35 /foo/foo1
> -rw-r--r--   3 harsh supergroup  0 2016-04-22 17:35 /foo/foo1^M
> bin/hdfs dfs -ls '/foo/*'
> -rw-r--r--   3 harsh supergroup  0 2016-04-22 17:35 /foo/foo1
> {code}
> Whereas trunk has the right behaviour, subtly fixed via the pattern library 
> change of HADOOP-12436:
> {code}
> bin/hdfs dfs -mkdir /foo
> bin/hdfs dfs -touchz /foo/foo1
> bin/hdfs dfs -touchz $'/foo/foo1\r'
> bin/hdfs dfs -ls '/foo/*'
> -rw-r--r--   3 harsh supergroup  0 2016-04-22 17:35 /foo/foo1
> -rw-r--r--   3 harsh supergroup  0 2016-04-22 17:35 /foo/foo1^M
> bin/hdfs dfs -ls '/foo/*'
> -rw-r--r--   3 harsh supergroup  0 2016-04-22 17:35 /foo/foo1
> -rw-r--r--   3 harsh supergroup  0 2016-04-22 17:35 /foo/foo1^M
> {code}
> (I've placed a ^M explicitly to indicate the presence of the intentional 
> hidden character.)
> We should still add a simple test-case to cover this situation for future 
> regressions.






[jira] [Updated] (HADOOP-13103) Group resolution from LDAP may fail on javax.naming.ServiceUnavailableException

2016-05-05 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HADOOP-13103:
-
Attachment: c13103_20160505b.patch

c13103_20160505b.patch: fixes the test failure.

> Group resolution from LDAP may fail on 
> javax.naming.ServiceUnavailableException
> ---
>
> Key: HADOOP-13103
> URL: https://issues.apache.org/jira/browse/HADOOP-13103
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Minor
> Attachments: c13103_20160505.patch, c13103_20160505b.patch
>
>
> According to the 
> [javadoc|https://docs.oracle.com/javase/7/docs/api/javax/naming/ServiceUnavailableException.html],
>  ServiceUnavailableException is thrown when attempting to communicate with a 
> directory or naming service and that service is not available. It might be 
> unavailable for different reasons. For example, the server might be too busy 
> to service the request, or the server might not be registered to service any 
> requests, etc.
> We should retry on it.






[jira] [Work started] (HADOOP-13102) Update GroupsMapping documentation to reflect the new changes

2016-05-05 Thread Esther Kundin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-13102 started by Esther Kundin.
--
> Update GroupsMapping documentation to reflect the new changes
> -
>
> Key: HADOOP-13102
> URL: https://issues.apache.org/jira/browse/HADOOP-13102
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.8.0
>Reporter: Anu Engineer
>Assignee: Esther Kundin
> Fix For: 2.8.0
>
>
> We need to update this section in the groupsMapping.md
> {noformat}
> Line 84:
> The implementation does not attempt to resolve group hierarchies. Therefore, 
> a user must be an explicit member of a group object
> in order to be considered a member.
> {noformat} 
> With the changes in HADOOP-12291 this is no longer true, since we will have 
> the ability to walk group hierarchies.
> We should also modify this line:
> {noformat}
> Line :  81
> It is possible to set a maximum time limit when searching and awaiting a 
> result.
> Set `hadoop.security.group.mapping.ldap.directory.search.timeout` to 0 if 
> infinite wait period is desired. Default is 10,000 milliseconds (10 seconds).
> {noformat}
> We might want to document how the new setting affects the timeout,
> and also add the new setting to this doc:
> {noformat}
>  hadoop.security.group.mapping.ldap.search.group.hierarchy.levels
> {noformat}






[jira] [Commented] (HADOOP-13103) Group resolution from LDAP may fail on javax.naming.ServiceUnavailableException

2016-05-05 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15273003#comment-15273003
 ] 

Tsz Wo Nicholas Sze commented on HADOOP-13103:
--

> ...  The old code had one attempt outside the while loop and then up to 3 
> more attempts inside the while loop. With this patch, all attempts are 
> collapsed into a single for loop with up to 3 iterations. ...

Good observation.  I intentionally made this change so that it fails fast. 
For the exception cases I saw, the first try may fail for various reasons 
(such as a connection timeout).  The second try then always succeeds, since it 
re-establishes the connection.  The third try is almost redundant, but let's 
give it a chance; it is unlikely that the third try will succeed after the 
first and second tries have failed.  The fourth try would just be wasting time.

Will update the patch to fix the unit test.

> Group resolution from LDAP may fail on 
> javax.naming.ServiceUnavailableException
> ---
>
> Key: HADOOP-13103
> URL: https://issues.apache.org/jira/browse/HADOOP-13103
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Minor
> Attachments: c13103_20160505.patch
>
>
> According to the 
> [javadoc|https://docs.oracle.com/javase/7/docs/api/javax/naming/ServiceUnavailableException.html],
>  ServiceUnavailableException is thrown when attempting to communicate with a 
> directory or naming service and that service is not available. It might be 
> unavailable for different reasons. For example, the server might be too busy 
> to service the request, or the server might not be registered to service any 
> requests, etc.
> We should retry on it.






[jira] [Commented] (HADOOP-12864) bin/rcc doesn't work on trunk

2016-05-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15272999#comment-15272999
 ] 

Hadoop QA commented on HADOOP-12864:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
11s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 3s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 
11s {color} | {color:green} The patch generated 0 new + 93 unchanged - 1 fixed 
= 93 total (was 94) {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green} 0m 
8s {color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 44s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_91. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 40s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
19s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 13m 42s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:cf2ee45 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12802500/HADOOP-12864.00.patch 
|
| JIRA Issue | HADOOP-12864 |
| Optional Tests |  asflicense  mvnsite  unit  shellcheck  shelldocs  |
| uname | Linux 39f4fc162d5a 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / bb62e05 |
| shellcheck | v0.4.3 |
| JDK v1.7.0_95  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9295/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9295/console |
| Powered by | Apache Yetus 0.3.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> bin/rcc doesn't work on trunk
> -
>
> Key: HADOOP-12864
> URL: https://issues.apache.org/jira/browse/HADOOP-12864
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Attachments: HADOOP-12864.00.patch
>
>
> When o.a.h.record was moved, bin/rcc was never updated to pull those classes 
> from the streaming jar.






[jira] [Commented] (HADOOP-13103) Group resolution from LDAP may fail on javax.naming.ServiceUnavailableException

2016-05-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15272969#comment-15272969
 ] 

Hadoop QA commented on HADOOP-13103:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
46s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 43s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 35s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
22s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
35s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 41s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 41s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 39s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 39s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
22s {color} | {color:green} hadoop-common-project/hadoop-common: The patch 
generated 0 new + 128 unchanged - 6 fixed = 128 total (was 134) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
47s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 26s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_91. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 45s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
24s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 57m 23s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_91 Failed junit tests | hadoop.security.TestLdapGroupsMapping |
| JDK v1.7.0_95 Failed junit tests | hadoop.security.TestLdapGroupsMapping |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:cf2ee45 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12802489/c13103_20160505.patch 
|
| JIRA Issue | HADOOP-13103 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  

[jira] [Commented] (HADOOP-13103) Group resolution from LDAP may fail on javax.naming.ServiceUnavailableException

2016-05-05 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15272965#comment-15272965
 ] 

Chris Nauroth commented on HADOOP-13103:


The patch looks good to me overall.  I just have one question.  The old code 
had one attempt outside the {{while}} loop and then up to 3 more attempts 
inside the {{while}} loop.  With this patch, all attempts are collapsed into a 
single {{for}} loop with up to 3 iterations.  I am wondering if 
{{RECONNECT_RETRY_COUNT}} should be increased to 4 to preserve the old behavior 
of "4 total attempts".  Let me know your thoughts.
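The difference in attempt counts can be illustrated with a small counting sketch. This is hypothetical code, not the actual {{LdapGroupsMapping}} implementation; only the name {{RECONNECT_RETRY_COUNT}} is taken from the discussion above.

```java
// Hypothetical counting sketch (not the actual LdapGroupsMapping code):
// it only illustrates how many total attempts each loop structure yields.
public class RetryCountSketch {

    // Old structure: one attempt before the while loop, then up to
    // retryCount more attempts inside it.
    static int oldStructureAttempts(int retryCount) {
        int attempts = 1;
        int retries = 0;
        while (retries < retryCount) {
            retries++;
            attempts++;
        }
        return attempts;
    }

    // New structure: every attempt happens inside a single for loop.
    static int newStructureAttempts(int retryCount) {
        int attempts = 0;
        for (int i = 0; i < retryCount; i++) {
            attempts++;
        }
        return attempts;
    }

    public static void main(String[] args) {
        // With RECONNECT_RETRY_COUNT = 3 the old code made 4 total attempts;
        // raising the constant to 4 makes the single loop match.
        System.out.println(oldStructureAttempts(3)); // 4
        System.out.println(newStructureAttempts(4)); // 4
    }
}
```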

> Group resolution from LDAP may fail on 
> javax.naming.ServiceUnavailableException
> ---
>
> Key: HADOOP-13103
> URL: https://issues.apache.org/jira/browse/HADOOP-13103
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Minor
> Attachments: c13103_20160505.patch
>
>
> According to the 
> [javadoc|https://docs.oracle.com/javase/7/docs/api/javax/naming/ServiceUnavailableException.html],
>  ServiceUnavailableException is thrown when attempting to communicate with a 
> directory or naming service and that service is not available. It might be 
> unavailable for different reasons. For example, the server might be too busy 
> to service the request, or the server might not be registered to service any 
> requests, etc.
> We should retry on it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13010) Refactor raw erasure coders

2016-05-05 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15272936#comment-15272936
 ] 

Colin Patrick McCabe commented on HADOOP-13010:
---

Thanks, guys.  Sorry for the delay in reviewing.  We've been busy.

{{CodecUtil.java}}: there are a LOT of functions here for creating 
{{RawErasureEncoder}} objects.
We've got:
{code}
createRSRawEncoder(Configuration conf, int numDataUnits, int numParityUnits, String codec)
createRSRawEncoder(Configuration conf, int numDataUnit, int numParityUnit)
createRSRawEncoder(Configuration conf, String codec, ErasureCoderOptions coderOptions)
createRSRawEncoder(Configuration conf, ErasureCoderOptions coderOptions)
createXORRawEncoder(Configuration conf, ErasureCoderOptions coderOptions)
createXORRawEncoder(Configuration conf, int numDataUnits, int numParityUnits)
createRawEncoder(Configuration conf, String rawCoderFactoryKey, ErasureCoderOptions coderOptions)
{code}

Plus a similar number of functions for creating decoders.  Why do we have to 
have so many functions?  Surely the codec, numParityUnits, numDataUnits, 
whether it is XOR or not, etc. etc. should just be included in 
ErasureCoderOptions.  Then we could just have one function:
{code}
createRawEncoder(Configuration conf, ErasureCoderOptions coderOptions)
{code}

On a related note, why does each particular type of encoder need its own 
factory?  It seems like we just need a static function for each encoder type 
that takes a Configuration and ErasureCoderOptions, and we're good to go.  We 
can locate these static functions via reflection.
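The shape of that consolidated factory could look roughly like the sketch below. All class and field names here are illustrative stand-ins, not the actual Hadoop API; the point is only that one options object plus one reflection-located static {{create}} method replaces the many overloads and per-coder factories.

```java
// Hedged sketch of a single consolidated factory; the ErasureCoderOptions
// fields and RSRawEncoder class here are illustrative, not the Hadoop API.
import java.lang.reflect.Method;

public class CodecFactorySketch {

    // One options object carrying everything a coder needs.
    static class ErasureCoderOptions {
        final String codec;
        final int numDataUnits;
        final int numParityUnits;
        ErasureCoderOptions(String codec, int d, int p) {
            this.codec = codec; this.numDataUnits = d; this.numParityUnits = p;
        }
    }

    interface RawErasureEncoder { String describe(); }

    // Each coder exposes a static create(...) instead of a dedicated factory.
    static class RSRawEncoder implements RawErasureEncoder {
        private final ErasureCoderOptions opts;
        RSRawEncoder(ErasureCoderOptions opts) { this.opts = opts; }
        public String describe() {
            return opts.codec + "(" + opts.numDataUnits + "," + opts.numParityUnits + ")";
        }
        public static RawErasureEncoder create(ErasureCoderOptions opts) {
            return new RSRawEncoder(opts);
        }
    }

    // The single entry point, locating the static create() via reflection.
    static RawErasureEncoder createRawEncoder(Class<?> coderClass,
                                              ErasureCoderOptions opts) throws Exception {
        Method m = coderClass.getMethod("create", ErasureCoderOptions.class);
        return (RawErasureEncoder) m.invoke(null, opts);
    }

    public static void main(String[] args) throws Exception {
        RawErasureEncoder enc =
            createRawEncoder(RSRawEncoder.class, new ErasureCoderOptions("rs", 6, 3));
        System.out.println(enc.describe()); // rs(6,3)
    }
}
```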

{code}
  protected void doDecode(DecodingState decodingState, byte[][] inputs,
  int[] inputOffsets, int[] erasedIndexes,
  byte[][] outputs, int[] outputOffsets) {
{code}
Can we just include the inputs, inputOffsets, erasedIndexes, outputs, 
outputOffsets in {{DecodingState}}?
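Bundling those five parameters into the state holder would reduce the signature to a single argument, along these lines (field names mirror the quoted signature; this is not the actual Hadoop class):

```java
// Illustrative sketch of bundling the per-call buffers into DecodingState
// so that doDecode() takes a single argument; not the actual Hadoop class.
public class DecodingStateSketch {

    static class DecodingState {
        byte[][] inputs;
        int[] inputOffsets;
        int[] erasedIndexes;
        byte[][] outputs;
        int[] outputOffsets;
    }

    // doDecode() now needs only the state object.
    static void doDecode(DecodingState state) {
        // ... decode using state.inputs, state.erasedIndexes, state.outputs ...
        System.out.println("erased=" + state.erasedIndexes.length);
    }

    public static void main(String[] args) {
        DecodingState st = new DecodingState();
        st.inputs = new byte[3][16];
        st.inputOffsets = new int[] {0, 0, 0};
        st.erasedIndexes = new int[] {1};
        st.outputs = new byte[1][16];
        st.outputOffsets = new int[] {0};
        doDecode(st); // prints erased=1
    }
}
```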

> Refactor raw erasure coders
> ---
>
> Key: HADOOP-13010
> URL: https://issues.apache.org/jira/browse/HADOOP-13010
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-13010-v1.patch, HADOOP-13010-v2.patch, 
> HADOOP-13010-v3.patch, HADOOP-13010-v4.patch, HADOOP-13010-v5.patch
>
>
> This will refactor raw erasure coders according to some comments received so 
> far.
> * As discussed in HADOOP-11540 and suggested by [~cmccabe], it is better not 
> to rely on class inheritance to reuse the code; instead, it can be moved to a 
> utility class.
> * Suggested by [~jingzhao] somewhere quite some time ago, better to have a 
> state holder to keep some checking results for later reuse during an 
> encode/decode call.
> This will not get rid of some inheritance levels, as doing so isn't clearly 
> justified yet and would have a big impact. I do hope the end result of this 
> refactoring makes all the levels clearer and easier to follow.






[jira] [Commented] (HADOOP-13103) Group resolution from LDAP may fail on javax.naming.ServiceUnavailableException

2016-05-05 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15272933#comment-15272933
 ] 

Chris Nauroth commented on HADOOP-13103:


Sounds good.  I filed HADOOP-13105.

> Group resolution from LDAP may fail on 
> javax.naming.ServiceUnavailableException
> ---
>
> Key: HADOOP-13103
> URL: https://issues.apache.org/jira/browse/HADOOP-13103
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Minor
> Attachments: c13103_20160505.patch
>
>
> According to the 
> [javadoc|https://docs.oracle.com/javase/7/docs/api/javax/naming/ServiceUnavailableException.html],
>  ServiceUnavailableException is thrown when attempting to communicate with a 
> directory or naming service and that service is not available. It might be 
> unavailable for different reasons. For example, the server might be too busy 
> to service the request, or the server might not be registered to service any 
> requests, etc.
> We should retry on it.






[jira] [Created] (HADOOP-13105) Support timeouts in LDAP queries in LdapGroupsMapping.

2016-05-05 Thread Chris Nauroth (JIRA)
Chris Nauroth created HADOOP-13105:
--

 Summary: Support timeouts in LDAP queries in LdapGroupsMapping.
 Key: HADOOP-13105
 URL: https://issues.apache.org/jira/browse/HADOOP-13105
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Reporter: Chris Nauroth


{{LdapGroupsMapping}} currently does not set timeouts on the LDAP queries.  
This can create a risk of a very long/infinite wait on a connection.






[jira] [Commented] (HADOOP-13105) Support timeouts in LDAP queries in LdapGroupsMapping.

2016-05-05 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15272932#comment-15272932
 ] 

Chris Nauroth commented on HADOOP-13105:


This document discusses the JNDI API calls that can be used to set timeouts.  I 
think we'd want the actual timeout values to be configurable.

https://docs.oracle.com/javase/tutorial/jndi/newstuff/readtimeout.html
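A minimal sketch of what setting those timeouts looks like through the JNDI environment follows. The two property names are the standard LDAP provider properties described in the linked tutorial; the URL and timeout values are placeholders, and in {{LdapGroupsMapping}} the values would presumably come from configuration.

```java
// Sketch of the JNDI timeout settings from the linked tutorial; the URL and
// millisecond values are placeholders, not suggested defaults.
import java.util.Hashtable;
import javax.naming.Context;

public class LdapTimeoutSketch {
    public static void main(String[] args) {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://ldap.example.com:389"); // placeholder
        // Connect timeout: how long to wait for the TCP connection (ms).
        env.put("com.sun.jndi.ldap.connect.timeout", "5000");
        // Read timeout: how long to wait for an LDAP response (ms).
        env.put("com.sun.jndi.ldap.read.timeout", "10000");
        // new InitialDirContext(env) would then fail fast instead of hanging.
        System.out.println(env.get("com.sun.jndi.ldap.read.timeout"));
    }
}
```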


> Support timeouts in LDAP queries in LdapGroupsMapping.
> --
>
> Key: HADOOP-13105
> URL: https://issues.apache.org/jira/browse/HADOOP-13105
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Chris Nauroth
>
> {{LdapGroupsMapping}} currently does not set timeouts on the LDAP queries.  
> This can create a risk of a very long/infinite wait on a connection.






[jira] [Commented] (HADOOP-13084) Fix ASF License warnings in branch-2.7

2016-05-05 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15272907#comment-15272907
 ] 

Akira AJISAKA commented on HADOOP-13084:


The fix looks good to me. Would you fix the indents in the following code?
{code}
// need a whitespace
  
.gitattributes
.gitignore
.git/**
.idea/**
**/build/** // tab should be removed
**/patchprocess/** // need 4 whitespaces
  // need a whitespace
// need a whitespace
{code}

> Fix ASF License warnings in branch-2.7
> --
>
> Key: HADOOP-13084
> URL: https://issues.apache.org/jira/browse/HADOOP-13084
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-10248-branch-2.7.patch
>
>
> Please have a look following PreCommit build on branch-2.7.
> https://builds.apache.org/job/PreCommit-HDFS-Build/15036/artifact/patchprocess/patch-asflicense-problems.txt






[jira] [Commented] (HADOOP-13103) Group resolution from LDAP may fail on javax.naming.ServiceUnavailableException

2016-05-05 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15272905#comment-15272905
 ] 

Tsz Wo Nicholas Sze commented on HADOOP-13103:
--

Hi [~cnauroth], please file a separate JIRA for the timeouts.  Thanks.

> Group resolution from LDAP may fail on 
> javax.naming.ServiceUnavailableException
> ---
>
> Key: HADOOP-13103
> URL: https://issues.apache.org/jira/browse/HADOOP-13103
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Minor
> Attachments: c13103_20160505.patch
>
>
> According to the 
> [javadoc|https://docs.oracle.com/javase/7/docs/api/javax/naming/ServiceUnavailableException.html],
>  ServiceUnavailableException is thrown when attempting to communicate with a 
> directory or naming service and that service is not available. It might be 
> unavailable for different reasons. For example, the server might be too busy 
> to service the request, or the server might not be registered to service any 
> requests, etc.
> We should retry on it.






[jira] [Commented] (HADOOP-13082) RawLocalFileSystem does not fail when moving file to a non-existing directory

2016-05-05 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15272899#comment-15272899
 ] 

Chris Nauroth commented on HADOOP-13082:


bq. I do not really understand this. "rename-creates-dest-dirs" is set 
correctly now either in localfs.xml or rawlocal.xml. I can't find 
contract-test-options.

I think the issue may be that the failure is in {{FileSystemContractBaseTest}}. 
 This is a "contract" test suite, but it predates the larger contract test 
framework that Steve wrote, so it doesn't respect the contract options like 
rename-creates-dest-dirs.

Short-term, I think skipping via a subclass override makes sense.  Longer term, 
perhaps we need to look at consolidating all test suites named "Contract" back 
into the larger contract test framework.  There is likely some redundancy 
between the test cases, and eliminating the redundancy might improve test 
execution times.

As far as this JIRA, I agree with resolving it as Later.  I don't anticipate 
being able to change this behavior of {{RawLocalFileSystem}} any time soon 
because of the downstream ecosystem dependencies.

Echoing Steve's comment, thank you very much for digging into this.  Semantic 
differences across the different file systems are tricky issues.

> RawLocalFileSystem does not fail when moving file to a non-existing directory
> -
>
> Key: HADOOP-13082
> URL: https://issues.apache.org/jira/browse/HADOOP-13082
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 0.23.0
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>
> FileSystemContractBaseTest#testRenameFileMoveToNonExistentDirectory: creates 
> a file then move it to a non-existing directory. It should fail but it will 
> not (with RawLocalFileSystem) because in RawLocalFileSystem#rename(Path, 
> Path) method we have a fallback behavior that accomplishes the rename by a 
> full copy. The full copy will create the new directory and copy the file 
> there.
> I see two possible solutions here:
> # Remove the fallback full copy behavior
> # Before the full copy we should check whether the parent directory exists. 
> If not, return false and do not do the full copy.
> The fallback logic was added by 
> [HADOOP-9805|https://issues.apache.org/jira/browse/HADOOP-9805].






[jira] [Updated] (HADOOP-12864) bin/rcc doesn't work on trunk

2016-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12864:
--
Hadoop Flags: Incompatible change
Release Note: The rcc command has been removed.

> bin/rcc doesn't work on trunk
> -
>
> Key: HADOOP-12864
> URL: https://issues.apache.org/jira/browse/HADOOP-12864
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Attachments: HADOOP-12864.00.patch
>
>
> When o.a.h.record was moved, bin/rcc was never updated to pull those classes 
> from the streaming jar.






[jira] [Assigned] (HADOOP-12864) bin/rcc doesn't work on trunk

2016-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer reassigned HADOOP-12864:
-

Assignee: Allen Wittenauer

> bin/rcc doesn't work on trunk
> -
>
> Key: HADOOP-12864
> URL: https://issues.apache.org/jira/browse/HADOOP-12864
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Attachments: HADOOP-12864.00.patch
>
>
> When o.a.h.record was moved, bin/rcc was never updated to pull those classes 
> from the streaming jar.






[jira] [Updated] (HADOOP-12864) bin/rcc doesn't work on trunk

2016-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12864:
--
Attachment: HADOOP-12864.00.patch

-00:
* just remove rcc

> bin/rcc doesn't work on trunk
> -
>
> Key: HADOOP-12864
> URL: https://issues.apache.org/jira/browse/HADOOP-12864
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Priority: Blocker
> Attachments: HADOOP-12864.00.patch
>
>
> When o.a.h.record was moved, bin/rcc was never updated to pull those classes 
> from the streaming jar.






[jira] [Updated] (HADOOP-12864) bin/rcc doesn't work on trunk

2016-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12864:
--
Status: Patch Available  (was: Open)

> bin/rcc doesn't work on trunk
> -
>
> Key: HADOOP-12864
> URL: https://issues.apache.org/jira/browse/HADOOP-12864
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Priority: Blocker
> Attachments: HADOOP-12864.00.patch
>
>
> When o.a.h.record was moved, bin/rcc was never updated to pull those classes 
> from the streaming jar.






[jira] [Created] (HADOOP-13104) Document meaning of all file system contract options used in the contract tests.

2016-05-05 Thread Chris Nauroth (JIRA)
Chris Nauroth created HADOOP-13104:
--

 Summary: Document meaning of all file system contract options used 
in the contract tests.
 Key: HADOOP-13104
 URL: https://issues.apache.org/jira/browse/HADOOP-13104
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Reporter: Chris Nauroth


The file system contract tests support a set of contract options.  Each file 
system's specific test suite subclasses may set these options, and they alter 
the behavior of the tests to account for the inevitable differences in 
semantics in some implementations.  The {{ContractOptions}} interface already 
has JavaDocs describing the behavior of the options.  This issue proposes 
propagating similar documentation up to the public site.  This also will be 
valuable as a source of information for understanding differences in semantics 
at a high level, i.e. POSIX vs. HDFS vs. S3A.






[jira] [Commented] (HADOOP-13077) Handle special characters in passwords in httpfs.sh

2016-05-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15272886#comment-15272886
 ] 

Hudson commented on HADOOP-13077:
-

FAILURE: Integrated in Hadoop-trunk-Commit #9723 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9723/])
HADOOP-13077. Handle special characters in passwords in httpfs.sh (Xiao (aw: 
rev 35cf503149d68d33ee4e20e3e57f9afa69aef7f5)
* hadoop-common-project/hadoop-common/src/test/scripts/hadoop_escape_chars.bats
* hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/sbin/httpfs.sh
* hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh
* hadoop-common-project/hadoop-kms/src/main/sbin/kms.sh


> Handle special characters in passwords in httpfs.sh
> ---
>
> Key: HADOOP-13077
> URL: https://issues.apache.org/jira/browse/HADOOP-13077
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Fix For: 3.0.0
>
> Attachments: HADOOP-13077-repro.tar.gz, HADOOP-13077.01.patch, 
> HADOOP-13077.02.patch, HADOOP-13077.03.patch
>
>
> As [~aw] pointed out in [this 
> comment|https://issues.apache.org/jira/browse/HADOOP-13030?focusedCommentId=15262439=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15262439],
>  need a similar fix to this script.






[jira] [Commented] (HADOOP-13079) dfs -ls -q prints non-printable characters

2016-05-05 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15272881#comment-15272881
 ] 

Allen Wittenauer commented on HADOOP-13079:
---

bq. even cause your xterm to run arbitrary code by abusing escape sequences.

Does it matter what command was used to generate the sequence?  No, of course 
not, which makes it a hole in xterm, not ls.

> dfs -ls -q prints non-printable characters
> --
>
> Key: HADOOP-13079
> URL: https://issues.apache.org/jira/browse/HADOOP-13079
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>
> Add option {{-q}} to "hdfs dfs -ls" to print non-printable characters as "?". 
> Non-printable characters are defined by 
> [isprint(3)|http://linux.die.net/man/3/isprint] according to the current 
> locale.
> Default to {{-q}} behavior on terminal; otherwise, print raw characters. See 
> the difference in these 2 command lines:
> * {{hadoop fs -ls /dir}}
> * {{hadoop fs -ls /dir | od -c}}
> In C, {{isatty(STDOUT_FILENO)}} is used to find out whether the output is a 
> terminal. Since Java doesn't have {{isatty}}, I will use JNI to call C 
> {{isatty()}} because the closest test {{System.console() == null}} does not 
> work in some cases.






[jira] [Commented] (HADOOP-13028) add low level counter metrics for S3A; use in read performance tests

2016-05-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15272880#comment-15272880
 ] 

Hadoop QA commented on HADOOP-13028:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s {color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 4 new or modified test files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 13s {color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 37s {color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 36s {color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 25s {color} | {color:green} branch-2 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 5s {color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 15s {color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 27s {color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 4s {color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s {color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 19s {color} | {color:green} branch-2 passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s {color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 0s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 14s {color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 27s {color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 27s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 4s {color} | {color:red} root: The patch generated 16 new + 49 unchanged - 40 fixed = 65 total (was 89) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 27s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s {color} | {color:red} The patch has 50 line(s) that end in whitespace. Use git apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s {color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 31s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 12s {color} | {color:red} hadoop-aws in the patch failed with JDK v1.8.0_91. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 14s {color} | {color:red} hadoop-tools_hadoop-aws-jdk1.7.0_95 with JDK v1.7.0_95 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 30s {color} | {color:red} hadoop-common in the patch failed with JDK v1.8.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 12s {color} | {color:green} hadoop-aws in the patch passed with JDK v1.8.0_91. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 29s {color} | {color:red} hadoop-common in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 13s {color} | {color:green} hadoop-aws in the patch passed with JDK v1.7.0_95. {color} |
| 

[jira] [Commented] (HADOOP-13091) DistCp masks potential CRC check failures

2016-05-05 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15272862#comment-15272862
 ] 

Chris Nauroth commented on HADOOP-13091:


[~linyiqun], actually, I didn't intend to suggest any further changes on the 
exception handling logic right now.  I just wanted to point out that I'd like 
to see this patch tested against S3A and WASB, because "offload to cloud" has 
become a fairly common use case for DistCp.  I have some tests I can run once 
the feedback on the patch is settled.

> DistCp masks potential CRC check failures
> -
>
> Key: HADOOP-13091
> URL: https://issues.apache.org/jira/browse/HADOOP-13091
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Elliot West
>Assignee: Lin Yiqun
> Attachments: HADOOP-13091.003.patch, HDFS-10338.001.patch, 
> HDFS-10338.002.patch
>
>
> There appear to be edge cases whereby CRC checks may be circumvented when 
> requests for checksums from the source or target file system fail. In this 
> event CRCs could differ between the source and target and yet the DistCp copy 
> would succeed, even when the 'skip CRC check' option is not being used.
> The code in question is contained in the method 
> [{{org.apache.hadoop.tools.util.DistCpUtils#checksumsAreEqual(...)}}|https://github.com/apache/hadoop/blob/release-2.7.1/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/DistCpUtils.java#L457]
> Specifically this code block suggests that if there is a failure when trying 
> to read the source or target checksum then the method will return {{true}} 
> (i.e.  the checksums are equal), implying that the check succeeded. In actual 
> fact we just failed to obtain the checksum and could not perform the check.
> {code}
> try {
>   sourceChecksum = sourceChecksum != null ? sourceChecksum :
>       sourceFS.getFileChecksum(source);
>   targetChecksum = targetFS.getFileChecksum(target);
> } catch (IOException e) {
>   LOG.error("Unable to retrieve checksum for " + source + " or " + target, e);
> }
> return (sourceChecksum == null || targetChecksum == null ||
>     sourceChecksum.equals(targetChecksum));
> {code}
> I believe that at the very least the caught {{IOException}} should be 
> re-thrown. If this is not deemed desirable then I believe an option 
> ({{--strictCrc}}?) should be added to enforce a strict check where we require 
> that both the source and target CRCs are retrieved, are not null, and are 
> then compared for equality. If for any reason either of the CRCs retrievals 
> fail then an exception is thrown.
> Clearly some {{FileSystems}} do not support CRCs and invocations to 
> {{FileSystem.getFileChecksum(...)}} return {{null}} in these instances. I 
> would suggest that these should fail a strict CRC check to prevent users 
> developing a false sense of security in their copy pipeline.
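The stricter comparison proposed in the description could be sketched as follows. The class and method names are illustrative stand-ins, not the actual {{DistCpUtils}} code; the point is that a missing checksum raises an exception instead of passing the check.

```java
// Hedged sketch of a strict checksum comparison; names are illustrative,
// not the actual DistCpUtils code.
import java.io.IOException;

public class StrictChecksumSketch {

    // Stand-in for org.apache.hadoop.fs.FileChecksum.
    static class FileChecksum {
        final String value;
        FileChecksum(String v) { value = v; }
        @Override public boolean equals(Object o) {
            return o instanceof FileChecksum && ((FileChecksum) o).value.equals(value);
        }
        @Override public int hashCode() { return value.hashCode(); }
    }

    // Strict variant: a missing checksum is a failure, not a pass.
    static boolean checksumsAreEqual(FileChecksum source, FileChecksum target)
            throws IOException {
        if (source == null || target == null) {
            throw new IOException("Checksum unavailable; failing strict CRC check");
        }
        return source.equals(target);
    }

    public static void main(String[] args) throws IOException {
        System.out.println(checksumsAreEqual(new FileChecksum("abc"),
                                             new FileChecksum("abc"))); // true
        try {
            checksumsAreEqual(new FileChecksum("abc"), null);
        } catch (IOException e) {
            System.out.println("strict failure"); // missing target checksum
        }
    }
}
```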






[jira] [Updated] (HADOOP-13101) TestDNS#{testDefaultDnsServer,testNullDnsServer} failed intermittently

2016-05-05 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13101:
---
Summary: TestDNS#{testDefaultDnsServer,testNullDnsServer} failed 
intermittently  (was: TestDNS.testDefaultDnsServer failed intermittently.)

> TestDNS#{testDefaultDnsServer,testNullDnsServer} failed intermittently
> --
>
> Key: HADOOP-13101
> URL: https://issues.apache.org/jira/browse/HADOOP-13101
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Reporter: Xiaoyu Yao
>
> The test failed intermittently on 
> [Jenkins|https://builds.apache.org/job/PreCommit-HDFS-Build/15366/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.8.0_91.txt]
>  with the following error.
> {code}
> Failed tests: 
>   TestDNS.testDefaultDnsServer:134 
> Expected: is "dd12a7999c74"
>  but: was "localhost"
> {code}






[jira] [Commented] (HADOOP-13101) TestDNS.testDefaultDnsServer failed intermittently.

2016-05-05 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15272851#comment-15272851
 ] 

Mingliang Liu commented on HADOOP-13101:


Thanks for filing this. I see {{TestDNS#testNullDnsServer}} is also failing 
(e.g. [PreCommit-HADOOP-Build/9285|https://builds.apache.org/job/PreCommit-HADOOP-Build/9285/testReport/org.apache.hadoop.net/TestDNS/testNullDnsServer/]), 
and it seems related. Hope we can fix them together.

Error Message
{code}
Expected: is "localhost"
 but: was "8b7e00deed59"
{code}

Stacktrace
{code}
java.lang.AssertionError: 
Expected: is "localhost"
 but: was "8b7e00deed59"
at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
at org.junit.Assert.assertThat(Assert.java:865)
at org.junit.Assert.assertThat(Assert.java:832)
at org.apache.hadoop.net.TestDNS.testNullDnsServer(TestDNS.java:124)
{code}
Standard Output
{code}
2016-05-05 06:36:16,013 WARN  net.DNS (DNS.java:getHosts(268)) - Unable to 
determine hostname for interface lo
{code}

> TestDNS.testDefaultDnsServer failed intermittently.
> ---
>
> Key: HADOOP-13101
> URL: https://issues.apache.org/jira/browse/HADOOP-13101
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Reporter: Xiaoyu Yao
>
> The test failed intermittently on 
> [Jenkins|https://builds.apache.org/job/PreCommit-HDFS-Build/15366/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.8.0_91.txt]
>  with the following error.
> {code}
> Failed tests: 
>   TestDNS.testDefaultDnsServer:134 
> Expected: is "dd12a7999c74"
>  but: was "localhost"
> {code}






[jira] [Commented] (HADOOP-13103) Group resolution from LDAP may fail on javax.naming.ServiceUnavailableException

2016-05-05 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15272848#comment-15272848
 ] 

Chris Nauroth commented on HADOOP-13103:


Hello [~szetszwo].

Something that has been bugging me for a while is that we also don't support 
timeouts on these LDAP connections.  This page has documentation on how to set 
the timeouts through the JNDI APIs.

https://docs.oracle.com/javase/tutorial/jndi/newstuff/readtimeout.html

Would you be interested in doing timeouts as part of this patch, or do you 
prefer if I file a separate JIRA for timeouts?  I'm fine either way, and I can 
help code review whatever you decide to do here.

Thanks!
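For reference, the tutorial's two timeout knobs are plain JNDI environment properties. This is a minimal, self-contained sketch with a placeholder LDAP URL, not the actual {{LdapGroupsMapping}} code:

```java
import java.util.Hashtable;
import javax.naming.Context;

public class LdapTimeoutEnv {
    // Builds a JNDI environment with both LDAP timeouts set; passing this
    // to new InitialDirContext(env) makes a hung server fail fast instead
    // of blocking the group-resolution thread indefinitely.
    static Hashtable<String, String> buildEnv() {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY,
            "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://ldap.example.com:389"); // placeholder
        // Milliseconds to wait for the TCP connection to be established.
        env.put("com.sun.jndi.ldap.connect.timeout", "5000");
        // Milliseconds to wait for an LDAP response on an open connection.
        env.put("com.sun.jndi.ldap.read.timeout", "10000");
        return env;
    }

    public static void main(String[] args) {
        System.out.println(buildEnv().get("com.sun.jndi.ldap.read.timeout")); // 10000
    }
}
```

Both properties are covered by the linked Oracle tutorial; a connect timeout alone does not protect against a server that accepts the connection and then stalls, which is why the read timeout matters too.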


> Group resolution from LDAP may fail on 
> javax.naming.ServiceUnavailableException
> ---
>
> Key: HADOOP-13103
> URL: https://issues.apache.org/jira/browse/HADOOP-13103
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Minor
> Attachments: c13103_20160505.patch
>
>
> According to the 
> [javadoc|https://docs.oracle.com/javase/7/docs/api/javax/naming/ServiceUnavailableException.html],
>  ServiceUnavailableException is thrown when attempting to communicate with a 
> directory or naming service and that service is not available. It might be 
> unavailable for different reasons. For example, the server might be too busy 
> to service the request, or the server might not be registered to service any 
> requests, etc.
> We should retry on it.






[jira] [Resolved] (HADOOP-13064) LineReader reports incorrect number of bytes read resulting in correctness issues using LineRecordReader

2016-05-05 Thread Joe Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Ellis resolved HADOOP-13064.

Resolution: Duplicate

> LineReader reports incorrect number of bytes read resulting in correctness 
> issues using LineRecordReader
> 
>
> Key: HADOOP-13064
> URL: https://issues.apache.org/jira/browse/HADOOP-13064
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Joe Ellis
>Priority: Critical
> Attachments: LineReaderTest.java
>
>
> The specific issue we were seeing with LineReader is that when we pass in 
> '\r\n' as the line delimiter the number of bytes that it claims to have read 
> is less than what it actually read. We narrowed this down to only happening 
> when the delimiter is split across the internal buffer boundary, so if 
> fillBuffer fills with "row\r" and the next call fills with "\n" then the 
> number of bytes reported would be 4 rather than 5.
> This results in correctness issues in LineRecordReader because if this off by 
> one issue is seen enough times when reading a split then it will continue to 
> read records past its split boundary, resulting in records appearing to come 
> from multiple splits.
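The boundary case described above is easy to simulate outside Hadoop. The sketch below is not LineReader itself; it only demonstrates the correct accounting — every byte of the delimiter must be counted even when the '\r' and '\n' arrive in different buffer fills:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class DelimiterAcrossBuffer {
    // Reads one line terminated by "\r\n" through a small buffer and
    // returns the total bytes consumed, including the delimiter.
    static int readLine(InputStream in, int bufSize, StringBuilder line)
            throws IOException {
        byte[] buf = new byte[bufSize];
        int consumed = 0;
        boolean sawCR = false;          // delimiter state carried across fills
        int n;
        while ((n = in.read(buf)) > 0) {
            for (int i = 0; i < n; i++) {
                consumed++;             // count every byte, even a split '\r'
                byte b = buf[i];
                if (sawCR && b == '\n') {
                    return consumed;    // delimiter completed on this fill
                }
                if (sawCR) {
                    line.append('\r');  // lone '\r' was ordinary data
                    sawCR = false;
                }
                if (b == '\r') {
                    sawCR = true;       // may be completed by the next fill
                } else {
                    line.append((char) b);
                }
            }
        }
        return consumed;
    }

    public static void main(String[] args) throws IOException {
        // "row\r" exactly fills the 4-byte buffer; "\n" arrives next fill.
        InputStream in =
            new ByteArrayInputStream("row\r\nnext".getBytes("US-ASCII"));
        StringBuilder line = new StringBuilder();
        int consumed = readLine(in, 4, line);
        System.out.println(line + " " + consumed);  // prints: row 5
    }
}
```

The bug described in the report corresponds to returning 4 here — the carried-over '\r' byte being dropped from the count.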






[jira] [Commented] (HADOOP-13064) LineReader reports incorrect number of bytes read resulting in correctness issues using LineRecordReader

2016-05-05 Thread Joe Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15272842#comment-15272842
 ] 

Joe Ellis commented on HADOOP-13064:


Yeah just bumped to 2.7.2 and my broken test passed. We can close this out.

> LineReader reports incorrect number of bytes read resulting in correctness 
> issues using LineRecordReader
> 
>
> Key: HADOOP-13064
> URL: https://issues.apache.org/jira/browse/HADOOP-13064
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Joe Ellis
>Priority: Critical
> Attachments: LineReaderTest.java
>
>
> The specific issue we were seeing with LineReader is that when we pass in 
> '\r\n' as the line delimiter the number of bytes that it claims to have read 
> is less than what it actually read. We narrowed this down to only happening 
> when the delimiter is split across the internal buffer boundary, so if 
> fillBuffer fills with "row\r" and the next call fills with "\n" then the 
> number of bytes reported would be 4 rather than 5.
> This results in correctness issues in LineRecordReader because if this off by 
> one issue is seen enough times when reading a split then it will continue to 
> read records past its split boundary, resulting in records appearing to come 
> from multiple splits.






[jira] [Updated] (HADOOP-13103) Group resolution from LDAP may fail on javax.naming.ServiceUnavailableException

2016-05-05 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HADOOP-13103:
-
Status: Patch Available  (was: Open)

> Group resolution from LDAP may fail on 
> javax.naming.ServiceUnavailableException
> ---
>
> Key: HADOOP-13103
> URL: https://issues.apache.org/jira/browse/HADOOP-13103
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Minor
> Attachments: c13103_20160505.patch
>
>
> According to the 
> [javadoc|https://docs.oracle.com/javase/7/docs/api/javax/naming/ServiceUnavailableException.html],
>  ServiceUnavailableException is thrown when attempting to communicate with a 
> directory or naming service and that service is not available. It might be 
> unavailable for different reasons. For example, the server might be too busy 
> to service the request, or the server might not be registered to service any 
> requests, etc.
> We should retry on it.






[jira] [Updated] (HADOOP-13103) Group resolution from LDAP may fail on javax.naming.ServiceUnavailableException

2016-05-05 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HADOOP-13103:
-
Attachment: c13103_20160505.patch

c13103_20160505.patch: 1st patch.

> Group resolution from LDAP may fail on 
> javax.naming.ServiceUnavailableException
> ---
>
> Key: HADOOP-13103
> URL: https://issues.apache.org/jira/browse/HADOOP-13103
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Minor
> Attachments: c13103_20160505.patch
>
>
> According to the 
> [javadoc|https://docs.oracle.com/javase/7/docs/api/javax/naming/ServiceUnavailableException.html],
>  ServiceUnavailableException is thrown when attempting to communicate with a 
> directory or naming service and that service is not available. It might be 
> unavailable for different reasons. For example, the server might be too busy 
> to service the request, or the server might not be registered to service any 
> requests, etc.
> We should retry on it.






[jira] [Commented] (HADOOP-13098) Dynamic LogLevel setting page should accept log level string with mixing upper case and lower case

2016-05-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15272815#comment-15272815
 ] 

Hadoop QA commented on HADOOP-13098:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
30s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 6s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 10s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 59s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
35s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 7s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
44s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 55s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 55s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 53s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 53s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 59s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 8s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 20s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_91. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 54s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 65m 49s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_91 Failed junit tests | hadoop.net.TestDNS |
| JDK v1.7.0_95 Failed junit tests | hadoop.net.TestDNS |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:cf2ee45 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12802461/HADOOP-13098-v3.patch 
|
| JIRA Issue | HADOOP-13098 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 072d8734cf1c 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Updated] (HADOOP-12234) Web UI Framable Page

2016-05-05 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay updated HADOOP-12234:
-
Resolution: Duplicate
Status: Resolved  (was: Patch Available)

As discussed in the comments of this JIRA, we will move forward with 
HADOOP-13008 for this functionality. I'm setting this as a duplicate and as 
superseded by HADOOP-13008.

> Web UI Framable Page
> 
>
> Key: HADOOP-12234
> URL: https://issues.apache.org/jira/browse/HADOOP-12234
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Appy
>Assignee: Appy
> Attachments: HADOOP-12234-v2-master.patch, 
> HADOOP-12234-v3-master.patch, HADOOP-12234.patch
>
>
> The web UIs do not include the "X-Frame-Options" header to prevent the pages 
> from being framed from another site.  
> Reference:
> https://www.owasp.org/index.php/Clickjacking
> https://www.owasp.org/index.php/Clickjacking_Defense_Cheat_Sheet
> https://developer.mozilla.org/en-US/docs/Web/HTTP/X-Frame-Options






[jira] [Commented] (HADOOP-13079) dfs -ls -q prints non-printable characters

2016-05-05 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15272802#comment-15272802
 ] 

Colin Patrick McCabe commented on HADOOP-13079:
---

bq. It's not a security bug for the reasons you think it's a security bug. 
After all, wc, find, du, ... tons of other UNIX commands will happily print out 
terminal escape sequences with no option to turn them off. It is, however, 
problematic for traditional ftpd implementations since it's a great way to 
inject buffer overflows and then get root on a remote server.

This behavior is exploitable.  That makes it a security bug, even if lots of 
traditional UNIX commands have it.

Just because a behavior is traditional doesn't mean it's right.  There was a 
time when UNIX programs used {{gets()}} everywhere.  When the world became a 
less trusting place, they had to be fixed not to do that.  We should understand 
the motivations behind historical decisions before blindly copying them.

bq. ... and my answer is the same as it was almost a decade ago, in some HDFS 
JIRA somewhere, where a related topic came up before: HDFS would be better 
served by having a limit on what consists of a legal file and directory name. 
With an unlimited namespace, it's impossible to test against and impossible to 
protect every scenario in which oddball characters show up. What's legal in one 
locale may not be legal in another.

That's a very good suggestion.  I think we should tackle that for Hadoop 3.

bq. Also, are you prepared to file a CVE for every single time Hadoop prints 
out a directory or file name to the screen? There are probably hundreds if not 
thousands of places, obvious ones like 'fs -count' and less obvious ones like 
'yarn logs'. This is a 'tilting at windmills' problem. It is MUCH better to 
have ls blow up than be taken by surprise by something else later on.

The problem is, {{ls}} isn't necessarily going to "blow up," just display 
something odd, or even cause your xterm to run arbitrary code by abusing escape 
sequences.

> dfs -ls -q prints non-printable characters
> --
>
> Key: HADOOP-13079
> URL: https://issues.apache.org/jira/browse/HADOOP-13079
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>
> Add option {{-q}} to "hdfs dfs -ls" to print non-printable characters as "?". 
> Non-printable characters are defined by 
> [isprint(3)|http://linux.die.net/man/3/isprint] according to the current 
> locale.
> Default to {{-q}} behavior on terminal; otherwise, print raw characters. See 
> the difference in these 2 command lines:
> * {{hadoop fs -ls /dir}}
> * {{hadoop fs -ls /dir | od -c}}
> In C, {{isatty(STDOUT_FILENO)}} is used to find out whether the output is a 
> terminal. Since Java doesn't have {{isatty}}, I will use JNI to call C 
> {{isatty()}} because the closest test {{System.console() == null}} does not 
> work in some cases.
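The limitation of the {{System.console()}} test mentioned above can be seen in a tiny sketch (illustrative only; the patch's actual approach is a JNI call to C {{isatty()}}):

```java
public class TtyCheck {
    // System.console() is non-null only when BOTH stdin and stdout are
    // attached to a terminal, so it reports "redirected" when only stdin
    // is piped — unlike isatty(STDOUT_FILENO), which checks stdout alone.
    static String mode() {
        return System.console() != null ? "terminal" : "redirected";
    }

    public static void main(String[] args) {
        System.out.println(mode());
    }
}
```

Running this with only stdin redirected ({{java TtyCheck < /dev/null}}) still reports "redirected" even though stdout is a terminal, which is exactly the case where the Java-only test misbehaves.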






[jira] [Created] (HADOOP-13103) Group resolution from LDAP may fail on javax.naming.ServiceUnavailableException

2016-05-05 Thread Tsz Wo Nicholas Sze (JIRA)
Tsz Wo Nicholas Sze created HADOOP-13103:


 Summary: Group resolution from LDAP may fail on 
javax.naming.ServiceUnavailableException
 Key: HADOOP-13103
 URL: https://issues.apache.org/jira/browse/HADOOP-13103
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
Priority: Minor


According to the 
[javadoc|https://docs.oracle.com/javase/7/docs/api/javax/naming/ServiceUnavailableException.html],
 ServiceUnavailableException is thrown when attempting to communicate with a 
directory or naming service and that service is not available. It might be 
unavailable for different reasons. For example, the server might be too busy to 
service the request, or the server might not be registered to service any 
requests, etc.

We should retry on it.
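A retry wrapper along these lines illustrates the proposal (hypothetical helper names; real code would use Hadoop's retry utilities and add backoff between attempts):

```java
import javax.naming.ServiceUnavailableException;

public class RetryOnUnavailable {
    interface Lookup<T> {
        T run() throws ServiceUnavailableException;
    }

    // Retries the lookup a fixed number of times, since the exception is
    // transient: the directory server may recover shortly. Assumes
    // maxAttempts >= 1; the last failure is rethrown when all attempts fail.
    static <T> T withRetry(Lookup<T> op, int maxAttempts)
            throws ServiceUnavailableException {
        ServiceUnavailableException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return op.run();
            } catch (ServiceUnavailableException e) {
                last = e;  // remember and retry
            }
        }
        throw last;
    }

    public static void main(String[] args) throws Exception {
        // Simulated lookup that succeeds on the third attempt.
        int[] calls = {0};
        String result = withRetry(() -> {
            if (++calls[0] < 3) {
                throw new ServiceUnavailableException("server busy");
            }
            return "groups";
        }, 5);
        System.out.println(result + " after " + calls[0] + " attempts");
        // prints: groups after 3 attempts
    }
}
```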






[jira] [Updated] (HADOOP-13077) Handle special characters in passwords in httpfs.sh

2016-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-13077:
--
   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

looks fine.

committed to trunk.

thanks!

> Handle special characters in passwords in httpfs.sh
> ---
>
> Key: HADOOP-13077
> URL: https://issues.apache.org/jira/browse/HADOOP-13077
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Fix For: 3.0.0
>
> Attachments: HADOOP-13077-repro.tar.gz, HADOOP-13077.01.patch, 
> HADOOP-13077.02.patch, HADOOP-13077.03.patch
>
>
> As [~aw] pointed out in [this 
> comment|https://issues.apache.org/jira/browse/HADOOP-13030?focusedCommentId=15262439&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15262439],
>  need a similar fix to this script.






[jira] [Commented] (HADOOP-13008) Add XFS Filter for UIs to Hadoop Common

2016-05-05 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15272801#comment-15272801
 ] 

Larry McCay commented on HADOOP-13008:
--

Thanks, [~cnauroth]!
I'll take care of those and provide a new revision.

> Add XFS Filter for UIs to Hadoop Common
> ---
>
> Key: HADOOP-13008
> URL: https://issues.apache.org/jira/browse/HADOOP-13008
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: security
>Reporter: Larry McCay
>Assignee: Larry McCay
> Fix For: 2.8.0
>
> Attachments: HADOOP-13008-001.patch
>
>
> Cross Frame Scripting (XFS) prevention for UIs can be provided through a 
> common servlet filter. This filter will set the X-Frame-Options HTTP header 
> to DENY unless configured to another valid setting.
> There are a number of UIs that could just add this to their filters as well 
> as the Yarn webapp proxy which could add it for all its proxied UIs - if 
> appropriate.






[jira] [Commented] (HADOOP-13008) Add XFS Filter for UIs to Hadoop Common

2016-05-05 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15272793#comment-15272793
 ] 

Chris Nauroth commented on HADOOP-13008:


Hello [~lmccay].  This looks good.  Here are just a few comments:
# I think for completeness, there are a few other relevant methods that 
{{XFrameOptionsResponseWrapper}} needs to override: {{addDateHeader}}, 
{{addIntHeader}}, {{setDateHeader}} and {{setIntHeader}}.  All of those should 
disallow altering X-Frame-Options.
# Check indentation level on the {{super}} call here.
{code}
public XFrameOptionsResponseWrapper(HttpServletResponse response) {
super(response);
}
{code}
# I generally prefer that tests just let exceptions propagate instead of 
catching and calling {{fail}}, unless the test specifically covers an error 
case and needs to verify the right kind of exception was thrown.  If there is a 
test failure, letting the exception propagate will show the full stack trace in 
the JUnit report, and that's often helpful for diagnosis.
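For context, the only values defined for the X-Frame-Options header are DENY, SAMEORIGIN, and ALLOW-FROM. A hypothetical fail-closed config check (not the patch's actual code) would look like:

```java
import java.util.Locale;

public class XFrameOptions {
    // Hypothetical helper mirroring the filter's intent: accept only valid
    // X-Frame-Options values and fall back to DENY for anything else,
    // so a bad configuration never silently disables the protection.
    static String resolve(String configured) {
        if (configured == null) {
            return "DENY";
        }
        String v = configured.trim().toUpperCase(Locale.ROOT);
        if (v.equals("DENY") || v.equals("SAMEORIGIN")
                || v.startsWith("ALLOW-FROM ")) {
            return v;
        }
        return "DENY";  // invalid setting: fail closed
    }

    public static void main(String[] args) {
        System.out.println(resolve(null));          // DENY
        System.out.println(resolve("sameorigin"));  // SAMEORIGIN
        System.out.println(resolve("bogus"));       // DENY
    }
}
```

Failing closed here matches the issue description: the filter defaults to DENY unless explicitly configured to another valid setting.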


> Add XFS Filter for UIs to Hadoop Common
> ---
>
> Key: HADOOP-13008
> URL: https://issues.apache.org/jira/browse/HADOOP-13008
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: security
>Reporter: Larry McCay
>Assignee: Larry McCay
> Fix For: 2.8.0
>
> Attachments: HADOOP-13008-001.patch
>
>
> Cross Frame Scripting (XFS) prevention for UIs can be provided through a 
> common servlet filter. This filter will set the X-Frame-Options HTTP header 
> to DENY unless configured to another valid setting.
> There are a number of UIs that could just add this to their filters as well 
> as the Yarn webapp proxy which could add it for all its proxied UIs - if 
> appropriate.






[jira] [Commented] (HADOOP-12866) add a subcommand for gridmix

2016-05-05 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15272775#comment-15272775
 ] 

Allen Wittenauer commented on HADOOP-12866:
---

{code}
+hadoop_add_to_classpath_tools hadoop-rumen
{code}

Is that actually required?  

> add a subcommand for gridmix
> 
>
> Key: HADOOP-12866
> URL: https://issues.apache.org/jira/browse/HADOOP-12866
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Kai Sasaki
> Attachments: HADOOP-12866.01.patch
>
>
> gridmix shouldn't require a raw java command line to run.






[jira] [Comment Edited] (HADOOP-12930) [Umbrella] Dynamic subcommands for hadoop shell scripts

2016-05-05 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15199481#comment-15199481
 ] 

Allen Wittenauer edited comment on HADOOP-12930 at 5/5/16 6:14 PM:
---

It is extremely desirable to be able to add subcommands to the hadoop, etc, 
commands dynamically. There are several reasons to do this:

* Enable local variants of subcommands without worrying about breaking existing 
scripts.  For example, distcp has historically been replaced with local 
versions for various reasons.
* Allows for greater testing capabilities
* Possibility of 3rd party/external-to-hadoop being allowed to add capabilities 
and take advantage of the rich shell environment

Enabling this is relatively trivial:
* look for a function defined with a given pattern that matches the subcommand 
passed via the CLI
* if that function exists, execute it with passed parameters
* if that function doesn't exist, continue on with our normal processing
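The three-step lookup above is a bash mechanism; purely for illustration, the same pattern can be sketched in Java with reflection standing in for shell function lookup (names like {{sub_distcp}} are invented for this sketch):

```java
import java.lang.reflect.Method;

public class SubcommandDispatch {
    // Hypothetical dynamic subcommand: a method named "sub_<name>" stands
    // in for a shell function named after the subcommand.
    public static String sub_distcp(String[] args) {
        return "custom:distcp";
    }

    // Look for "sub_<cmd>"; run it if it exists, otherwise fall through
    // to normal processing — the same three steps as the shell version.
    static String dispatch(String cmd, String[] args) throws Exception {
        try {
            Method m = SubcommandDispatch.class.getMethod(
                "sub_" + cmd, String[].class);
            return (String) m.invoke(null, (Object) args);
        } catch (NoSuchMethodException e) {
            return "builtin:" + cmd;   // no override: built-in handling
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(dispatch("distcp", new String[0])); // custom:distcp
        System.out.println(dispatch("fs", new String[0]));     // builtin:fs
    }
}
```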

In order to accomplish this, a few things need to take place:
* re-arrange the existing commands a bit for legibility/flexibility
* make some shell-local globals to be 'safe' globals that can span past their 
borders
* define an API by which 3rd parties may add and override existing commands
* get HADOOP-12857 committed for some pre-work


was (Author: aw):
It is extremely desirable to be able to add subcommands to the hadoop, etc, 
commands dynamically. There are several reasons to do this:

* Enable local variants of subcommands without worrying about breaking existing 
scripts.  For example, distcp has historically been replaced with local 
versions for various reasons.
* Allows for greater testing capabilities
* Possibility of 3rd party/external-to-hadoop being allowed to add capabilities 
and take advantage of the rich shell environment

In order to accomplish this, a few things need to take place:
* re-arrange the existing commands a bit for legibility/flexibility
* make some shell-local globals to be 'safe' globals that can span past their 
borders
* define an API by which 3rd parties may add and override existing commands
* get HADOOP-12857 committed for some pre-work

> [Umbrella] Dynamic subcommands for hadoop shell scripts
> ---
>
> Key: HADOOP-12930
> URL: https://issues.apache.org/jira/browse/HADOOP-12930
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Critical
>
> Umbrella for converting hadoop, hdfs, mapred, and yarn to allow for dynamic 
> subcommands. See first comment for more details.






[jira] [Commented] (HADOOP-12930) [Umbrella] Dynamic subcommands for hadoop shell scripts

2016-05-05 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15272755#comment-15272755
 ] 

Allen Wittenauer commented on HADOOP-12930:
---

Just to copy what I sent to common-dev here:

===


When the sub-projects re-merged, maven work was done, whatever, the 
shell scripts for MR and YARN were placed (effectively) outside of the normal 
maven hierarchy.  In order to add unit tests to the shell scripts for these 
sub-projects, it means effectively turning hadoop-yarn-project/hadoop-yarn and 
hadoop-mapreduce-project into “real” modules so that mvn test works as 
expected.   Doing so will likely have some surprising consequences, such as 
anyone who modifies java code and the shell code in a patch will trigger _all_ 
of the unit tests in yarn.

I think we have four options:

a) Continue forward turning these into real modules with src directories, etc 
and we live with the consequences

b) Move the related bits into an existing module, making them similar to HDFS, 
common, tools

c) Move the related bits into a new module, using the layout that maven really 
really wants

d) Skip the unit tests; we don’t have them now

This is clearly more work than what I really wanted to cover in this 
branch, but given that there was a specific request to add unit test code for 
this functionality, I’m sort of stuck here.

Thoughts?

===

> [Umbrella] Dynamic subcommands for hadoop shell scripts
> ---
>
> Key: HADOOP-12930
> URL: https://issues.apache.org/jira/browse/HADOOP-12930
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Critical
>
> Umbrella for converting hadoop, hdfs, mapred, and yarn to allow for dynamic 
> subcommands. See first comment for more details.






[jira] [Commented] (HADOOP-13099) Globbing does not return file whose name has nonprintable character

2016-05-05 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15272744#comment-15272744
 ] 

John Zhuge commented on HADOOP-13099:
-

An old HADOOP-7222 "Inconsistent behavior when passing a path with special 
characters as literals to some FsShell commands" seems related.

> Globbing does not return file whose name has nonprintable character
> ---
>
> Key: HADOOP-13099
> URL: https://issues.apache.org/jira/browse/HADOOP-13099
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>
> In a directory, create a file with a name containing non-printable character, 
> e.g., '\r'.  {{dfs -ls dir}} can list such file, but {{dfs -ls dir/*}} can 
> not.
> {noformat}
> $ hdfs dfs -touchz /tmp/test/abc
> $ hdfs dfs -touchz $'/tmp/test/abc\rdef'
> $ hdfs dfs -ls /tmp/test
> Found 2 items
> -rw-r--r--   3 systest supergroup  0 2016-05-05 01:35 /tmp/test/abc
> def-r--r--   3 systest supergroup  0 2016-05-05 01:36 /tmp/test/abc
> $ hdfs dfs -ls /tmp/test | od -c
> 000   F   o   u   n   d   2   i   t   e   m   s  \n   -   r
> 020   w   -   r   -   -   r   -   -   3   s   y   s
> 040   t   e   s   t   s   u   p   e   r   g   r   o   u   p
> 060   0   2   0   1   6   -
> 100   0   5   -   0   5   0   1   :   3   5   /   t   m   p
> 120   /   t   e   s   t   /   a   b   c  \n   -   r   w   -   r   -
> 140   -   r   -   -   3   s   y   s   t   e   s   t
> 160   s   u   p   e   r   g   r   o   u   p
> 200   0   2   0   1   6   -   0   5   -   0
> 220   5   0   1   :   3   6   /   t   m   p   /   t   e   s
> 240   t   /   a   b   c  \r   d   e   f  \n
> 252
> $ hdfs dfs -ls /tmp/test/*
> -rw-r--r--   3 systest supergroup  0 2016-05-05 01:35 /tmp/test/abc
> {noformat}
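For comparison, the JDK's own NIO globbing does match file names containing non-printable characters, which suggests the limitation is in Hadoop's globber rather than in glob semantics. A minimal, self-contained sketch (local temp directory and file names are made up for illustration; this exercises java.nio.file, not Hadoop's {{Globber}}):

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

public class GlobNonPrintable {
    // Create "abc" and "abc\rdef" in a temp dir, then glob for "abc*".
    public static List<String> globAbc() throws IOException {
        Path dir = Files.createTempDirectory("globtest");
        Files.createFile(dir.resolve("abc"));
        Files.createFile(dir.resolve("abc\rdef")); // non-printable '\r' in the name
        List<String> names = new ArrayList<>();
        try (DirectoryStream<Path> stream = Files.newDirectoryStream(dir, "abc*")) {
            for (Path p : stream) {
                names.add(p.getFileName().toString());
            }
        }
        return names;
    }

    public static void main(String[] args) throws IOException {
        // NIO's glob matcher sees both files; Hadoop's globber in the
        // affected versions returned only "abc".
        System.out.println(globAbc().size());
    }
}
```

(On POSIX file systems '\r' is a legal file-name character, which is why the file can be created at all.)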






[jira] [Updated] (HADOOP-13028) add low level counter metrics for S3A; use in read performance tests

2016-05-05 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13028:

Status: Patch Available  (was: Open)

> add low level counter metrics for S3A; use in read performance tests
> 
>
> Key: HADOOP-13028
> URL: https://issues.apache.org/jira/browse/HADOOP-13028
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, metrics
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13028-001.patch, HADOOP-13028-002.patch, 
> HADOOP-13028-004.patch, HADOOP-13028-005.patch, HADOOP-13028-006.patch, 
> HADOOP-13028-007.patch, HADOOP-13028-008.patch, HADOOP-13028-009.patch, 
> HADOOP-13028-branch-2-008.patch, HADOOP-13028-branch-2-009.patch, 
> org.apache.hadoop.fs.s3a.scale.TestS3AInputStreamPerformance-output.txt, 
> org.apache.hadoop.fs.s3a.scale.TestS3AInputStreamPerformance-output.txt
>
>
> Against S3 (and other object stores), opening connections can be expensive, 
> and closing connections may be expensive too (a sign of a regression). 
> The S3A FS and individual input streams should have counters of the number of 
> open/close/failure+reconnect operations, and timers of how long these operations 
> take. This can be used downstream to measure the efficiency of the code (how 
> often connections are being made), connection reliability, etc.
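A minimal sketch of the kind of per-stream counters being proposed. The class and method names here are invented for illustration and are not S3A's actual statistics API; thread-safe counters via {{AtomicLong}} are one straightforward way to implement them:

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical input-stream statistics: counts of open/close/reconnect
// operations plus total time spent opening connections.
public class StreamStatistics {
    private final AtomicLong openOperations = new AtomicLong();
    private final AtomicLong closeOperations = new AtomicLong();
    private final AtomicLong reconnectOperations = new AtomicLong();
    private final AtomicLong openTimeNanos = new AtomicLong();

    public void recordOpen(long durationNanos) {
        openOperations.incrementAndGet();
        openTimeNanos.addAndGet(durationNanos);
    }

    public void recordClose() { closeOperations.incrementAndGet(); }

    public void recordReconnect() { reconnectOperations.incrementAndGet(); }

    public long getOpenOperations() { return openOperations.get(); }
    public long getCloseOperations() { return closeOperations.get(); }
    public long getReconnectOperations() { return reconnectOperations.get(); }
    public long getOpenTimeNanos() { return openTimeNanos.get(); }

    public static void main(String[] args) {
        StreamStatistics stats = new StreamStatistics();
        stats.recordOpen(1_000_000); // e.g. a 1ms open
        stats.recordClose();
        System.out.println(stats.getOpenOperations() + " open, "
            + stats.getCloseOperations() + " close");
    }
}
```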






[jira] [Commented] (HADOOP-13098) Dynamic LogLevel setting page should accept log level string with mixing upper case and lower case

2016-05-05 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15272726#comment-15272726
 ] 

Mingliang Liu commented on HADOOP-13098:


+1 (non-binding)

> Dynamic LogLevel setting page should accept log level string with mixing 
> upper case and lower case
> --
>
> Key: HADOOP-13098
> URL: https://issues.apache.org/jira/browse/HADOOP-13098
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Junping Du
>Assignee: Junping Du
> Attachments: HADOOP-13098-v2.patch, HADOOP-13098-v3.patch, 
> HADOOP-13098.patch
>
>
> Our current logLevel setting page, http://deamon_web_service_address/logLevel, 
> only accepts a fully upper-case log level string, which means "Debug" or 
> "debug" is treated as a bad log level. I think we should enhance the tool 
> to ignore upper/lower case.
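The fix amounts to normalizing case before parsing the level name. As an illustration using the JDK's own java.util.logging, whose {{Level.parse()}} is likewise case-sensitive (the actual patch targets Hadoop's log4j-based LogLevel servlet, so this is only a sketch of the idea):

```java
import java.util.Locale;
import java.util.logging.Level;

public class CaseInsensitiveLevel {
    // Level.parse() is case-sensitive: parse("info") throws
    // IllegalArgumentException while parse("INFO") succeeds. Normalizing
    // to upper case first accepts "Info", "info", and "INFO" alike.
    public static Level parseIgnoreCase(String name) {
        return Level.parse(name.trim().toUpperCase(Locale.ROOT));
    }

    public static void main(String[] args) {
        System.out.println(parseIgnoreCase("info"));    // INFO
        System.out.println(parseIgnoreCase("Warning")); // WARNING
    }
}
```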






[jira] [Updated] (HADOOP-13028) add low level counter metrics for S3A; use in read performance tests

2016-05-05 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13028:

Attachment: HADOOP-13028-branch-2-009.patch

> add low level counter metrics for S3A; use in read performance tests
> 
>
> Key: HADOOP-13028
> URL: https://issues.apache.org/jira/browse/HADOOP-13028
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, metrics
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13028-001.patch, HADOOP-13028-002.patch, 
> HADOOP-13028-004.patch, HADOOP-13028-005.patch, HADOOP-13028-006.patch, 
> HADOOP-13028-007.patch, HADOOP-13028-008.patch, HADOOP-13028-009.patch, 
> HADOOP-13028-branch-2-008.patch, HADOOP-13028-branch-2-009.patch, 
> org.apache.hadoop.fs.s3a.scale.TestS3AInputStreamPerformance-output.txt, 
> org.apache.hadoop.fs.s3a.scale.TestS3AInputStreamPerformance-output.txt
>
>
> Against S3 (and other object stores), opening connections can be expensive, 
> and closing connections may be expensive too (a sign of a regression). 
> The S3A FS and individual input streams should have counters of the number of 
> open/close/failure+reconnect operations, and timers of how long these operations 
> take. This can be used downstream to measure the efficiency of the code (how 
> often connections are being made), connection reliability, etc.






[jira] [Updated] (HADOOP-13028) add low level counter metrics for S3A; use in read performance tests

2016-05-05 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13028:

Status: Open  (was: Patch Available)

> add low level counter metrics for S3A; use in read performance tests
> 
>
> Key: HADOOP-13028
> URL: https://issues.apache.org/jira/browse/HADOOP-13028
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, metrics
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13028-001.patch, HADOOP-13028-002.patch, 
> HADOOP-13028-004.patch, HADOOP-13028-005.patch, HADOOP-13028-006.patch, 
> HADOOP-13028-007.patch, HADOOP-13028-008.patch, HADOOP-13028-009.patch, 
> HADOOP-13028-branch-2-008.patch, 
> org.apache.hadoop.fs.s3a.scale.TestS3AInputStreamPerformance-output.txt, 
> org.apache.hadoop.fs.s3a.scale.TestS3AInputStreamPerformance-output.txt
>
>
> Against S3 (and other object stores), opening connections can be expensive, 
> and closing connections may be expensive too (a sign of a regression). 
> The S3A FS and individual input streams should have counters of the number of 
> open/close/failure+reconnect operations, and timers of how long these operations 
> take. This can be used downstream to measure the efficiency of the code (how 
> often connections are being made), connection reliability, etc.






[jira] [Commented] (HADOOP-13098) Dynamic LogLevel setting page should accept log level string with mixing upper case and lower case

2016-05-05 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15272703#comment-15272703
 ] 

Xiaoyu Yao commented on HADOOP-13098:
-

+1 for Patch v3 pending Jenkins. I've opened HADOOP-13101 for the TestDNS 
issue. This is just another instance of it.

> Dynamic LogLevel setting page should accept log level string with mixing 
> upper case and lower case
> --
>
> Key: HADOOP-13098
> URL: https://issues.apache.org/jira/browse/HADOOP-13098
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Junping Du
>Assignee: Junping Du
> Attachments: HADOOP-13098-v2.patch, HADOOP-13098-v3.patch, 
> HADOOP-13098.patch
>
>
> Our current logLevel setting page, http://deamon_web_service_address/logLevel, 
> only accepts a fully upper-case log level string, which means "Debug" or 
> "debug" is treated as a bad log level. I think we should enhance the tool 
> to ignore upper/lower case.






[jira] [Updated] (HADOOP-13098) Dynamic LogLevel setting page should accept log level string with mixing upper case and lower case

2016-05-05 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HADOOP-13098:

Attachment: HADOOP-13098-v3.patch

Thanks for reviewing this, [~xyao]! Fixed this silly issue in the v3 patch. Also, 
the UT failure reported by Mr. Jenkins is not related to the patch but is an 
environment issue.

> Dynamic LogLevel setting page should accept log level string with mixing 
> upper case and lower case
> --
>
> Key: HADOOP-13098
> URL: https://issues.apache.org/jira/browse/HADOOP-13098
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Junping Du
>Assignee: Junping Du
> Attachments: HADOOP-13098-v2.patch, HADOOP-13098-v3.patch, 
> HADOOP-13098.patch
>
>
> Our current logLevel setting page, http://deamon_web_service_address/logLevel, 
> only accepts a fully upper-case log level string, which means "Debug" or 
> "debug" is treated as a bad log level. I think we should enhance the tool 
> to ignore upper/lower case.






[jira] [Commented] (HADOOP-13098) Dynamic LogLevel setting page should accept log level string with mixing upper case and lower case

2016-05-05 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15272639#comment-15272639
 ] 

Xiaoyu Yao commented on HADOOP-13098:
-

Patch v2 looks good to me. Only one minor issue in the test as shown below. +1 
after that is addressed. 

{code}
Assert.assertNotEquals("Get default Log Level which shouldn't be ERROR.",
    Level.ERROR.equals(log.getEffectiveLevel()));
{code}

should be 

{code}
Assert.assertNotEquals("Get default Log Level which shouldn't be ERROR.",
    Level.ERROR, log.getEffectiveLevel());
{code}
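The pitfall here is overload resolution: JUnit 4 has both {{assertNotEquals(Object unexpected, Object actual)}} and {{assertNotEquals(String message, Object unexpected, Object actual)}}. With only a message and a boolean as arguments, the two-argument overload is selected, so the *message string* is compared against a boolean and the assertion can never fail for the intended reason. A self-contained sketch using stand-in methods that mirror the two JUnit signatures (these are not JUnit itself; they just report which overload gets picked):

```java
public class OverloadPitfall {
    // Mirrors org.junit.Assert.assertNotEquals(Object unexpected, Object actual);
    // returns a marker string so we can see which overload was resolved.
    public static String assertNotEquals(Object unexpected, Object actual) {
        return "2-arg: compared " + unexpected + " vs " + actual;
    }

    // Mirrors assertNotEquals(String message, Object unexpected, Object actual).
    public static String assertNotEquals(String message, Object unexpected,
                                         Object actual) {
        return "3-arg: compared " + unexpected + " vs " + actual;
    }

    public static void main(String[] args) {
        Object level = "ERROR";
        // Buggy form: a String plus a boolean binds to the 2-arg overload,
        // so the message itself is compared against a (boxed) boolean.
        System.out.println(assertNotEquals("shouldn't be ERROR",
            level.equals("INFO")));
        // Corrected form: message, unexpected, actual -> 3-arg overload.
        System.out.println(assertNotEquals("shouldn't be ERROR", "ERROR", level));
    }
}
```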

> Dynamic LogLevel setting page should accept log level string with mixing 
> upper case and lower case
> --
>
> Key: HADOOP-13098
> URL: https://issues.apache.org/jira/browse/HADOOP-13098
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Junping Du
>Assignee: Junping Du
> Attachments: HADOOP-13098-v2.patch, HADOOP-13098.patch
>
>
> Our current logLevel setting page, http://deamon_web_service_address/logLevel, 
> only accepts a fully upper-case log level string, which means "Debug" or 
> "debug" is treated as a bad log level. I think we should enhance the tool 
> to ignore upper/lower case.






[jira] [Comment Edited] (HADOOP-13098) Dynamic LogLevel setting page should accept log level string with mixing upper case and lower case

2016-05-05 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15272639#comment-15272639
 ] 

Xiaoyu Yao edited comment on HADOOP-13098 at 5/5/16 5:05 PM:
-

Patch v2 looks good to me. Only one minor issue in the test as shown below. +1 
after that is addressed. 

{code}
Assert.assertNotEquals("Get default Log Level which shouldn't be ERROR.",
    Level.ERROR.equals(log.getEffectiveLevel()));
{code}

should be 

{code}
Assert.assertNotEquals("Get default Log Level which shouldn't be ERROR.",
    Level.ERROR, log.getEffectiveLevel());
{code}


was (Author: xyao):
Patch v2 looks good to me. Only one minor issue in the test as shown below. +1 
after that is addressed. 

{code}
Assert.assertNotEquals("Get default Log Level which shouldn't be ERROR.",
    Level.ERROR.equals(log.getEffectiveLevel()));
{code}

should be 

Assert.assertNotEquals("Get default Log Level which shouldn't be ERROR.",
    Level.ERROR, log.getEffectiveLevel());

> Dynamic LogLevel setting page should accept log level string with mixing 
> upper case and lower case
> --
>
> Key: HADOOP-13098
> URL: https://issues.apache.org/jira/browse/HADOOP-13098
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Junping Du
>Assignee: Junping Du
> Attachments: HADOOP-13098-v2.patch, HADOOP-13098.patch
>
>
> Our current logLevel setting page, http://deamon_web_service_address/logLevel, 
> only accepts a fully upper-case log level string, which means "Debug" or 
> "debug" is treated as a bad log level. I think we should enhance the tool 
> to ignore upper/lower case.






[jira] [Updated] (HADOOP-13102) Update GroupsMapping documentation to reflect the new changes

2016-05-05 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HADOOP-13102:
--
Description: 
We need to update this section in the groupsMapping.md
{noformat}
Line 84:

The implementation does not attempt to resolve group hierarchies. Therefore, a 
user must be an explicit member of a group object
in order to be considered a member.
{noformat} 

With the changes in HADOOP-12291 this is no longer true, since we will have the 
ability to walk the group hierarchies.

We should also modify this line:
{noformat}
Line :  81
It is possible to set a maximum time limit when searching and awaiting a result.
Set `hadoop.security.group.mapping.ldap.directory.search.timeout` to 0 if 
infinite wait period is desired. Default is 10,000 milliseconds (10 seconds).
{noformat}

We might want to document how the new setting affects the timeout,

and also add the new setting to this doc:
{noformat}
 hadoop.security.group.mapping.ldap.search.group.hierarchy.levels
{noformat}



  was:
We need to update this section in the groupsMapping.md
{noformat}
Line 84:

The implementation does not attempt to resolve group hierarchies. Therefore, a 
user must be an explicit member of a group object
in order to be considered a member.
{noformat} 

With changes in Hadoop-12291 this is no longer true since we will have the 
ability to walk the group hierarchies.

We also should modify this line 
{noformat}
Line :  81
It is possible to set a maximum time limit when searching and awaiting a result.
Set `hadoop.security.group.mapping.ldap.directory.search.timeout` to 0 if 
infinite wait period is desired. Default is 10,000 milliseconds (10 seconds).
{noformat}

we might want to document how the new settings affects the timeoout.

and also add the new settings into this doc.
{noformat}
 hadoop.security.group.mapping.ldap.search.group.hierarchy.levels
{noformat}




> Update GroupsMapping documentation to reflect the new changes
> -
>
> Key: HADOOP-13102
> URL: https://issues.apache.org/jira/browse/HADOOP-13102
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.8.0
>Reporter: Anu Engineer
>Assignee: Esther Kundin
> Fix For: 2.8.0
>
>
> We need to update this section in the groupsMapping.md
> {noformat}
> Line 84:
> The implementation does not attempt to resolve group hierarchies. Therefore, 
> a user must be an explicit member of a group object
> in order to be considered a member.
> {noformat} 
> With the changes in HADOOP-12291 this is no longer true, since we will have 
> the ability to walk the group hierarchies.
> We should also modify this line:
> {noformat}
> Line :  81
> It is possible to set a maximum time limit when searching and awaiting a 
> result.
> Set `hadoop.security.group.mapping.ldap.directory.search.timeout` to 0 if 
> infinite wait period is desired. Default is 10,000 milliseconds (10 seconds).
> {noformat}
> We might want to document how the new setting affects the timeout,
> and also add the new setting to this doc:
> {noformat}
>  hadoop.security.group.mapping.ldap.search.group.hierarchy.levels
> {noformat}






[jira] [Commented] (HADOOP-12291) Add support for nested groups in LdapGroupsMapping

2016-05-05 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15272608#comment-15272608
 ] 

Wei-Chiu Chuang commented on HADOOP-12291:
--

Looks good to me. Thanks for the contribution, [~ekundin].
Has this been tested against a real LDAP server, like an Active Directory server 
or Apache Directory Service?

I have a patch available for unit-testing LdapGroupsMapping against an Active 
Directory service (HADOOP-8145), but with the ongoing change to replace MiniKdc 
with Kerby, I'm not sure if I should re-implement it using Kerby.

> Add support for nested groups in LdapGroupsMapping
> --
>
> Key: HADOOP-12291
> URL: https://issues.apache.org/jira/browse/HADOOP-12291
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Gautam Gopalakrishnan
>Assignee: Esther Kundin
>  Labels: features, patch
> Fix For: 2.8.0
>
> Attachments: HADOOP-12291.001.patch, HADOOP-12291.002.patch, 
> HADOOP-12291.003.patch, HADOOP-12291.004.patch
>
>
> When using {{LdapGroupsMapping}} with Hadoop, nested groups are not 
> supported. So for example if user {{jdoe}} is part of group A which is a 
> member of group B, the group mapping currently returns only group A.
> Currently this facility is available with {{ShellBasedUnixGroupsMapping}} and 
> SSSD (or similar tools), but it would be good to have this feature as part of 
> {{LdapGroupsMapping}} directly.
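Nested-group resolution can be pictured as a bounded walk up the group graph. A hedged sketch using a plain in-memory map in place of LDAP queries — the map, method names, and group names are illustrative, not the patch's actual code; the depth limit mirrors the new hadoop.security.group.mapping.ldap.search.group.hierarchy.levels setting:

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class NestedGroups {
    // Parent-group edges: group -> groups it is a member of
    // (stands in for an LDAP "memberOf"-style lookup).
    static final Map<String, List<String>> MEMBER_OF = Map.of(
        "groupA", List.of("groupB"),
        "groupB", List.of("groupC"));

    // Resolve a user's direct groups plus ancestor groups, walking at
    // most maxLevels hops up the hierarchy (0 = direct groups only).
    public static Set<String> resolve(List<String> directGroups, int maxLevels) {
        Set<String> result = new LinkedHashSet<>(directGroups);
        List<String> frontier = new ArrayList<>(directGroups);
        for (int level = 0; level < maxLevels && !frontier.isEmpty(); level++) {
            List<String> next = new ArrayList<>();
            for (String g : frontier) {
                for (String parent : MEMBER_OF.getOrDefault(g, List.of())) {
                    if (result.add(parent)) {   // skip already-seen groups
                        next.add(parent);       // avoids cycles, too
                    }
                }
            }
            frontier = next;
        }
        return result;
    }

    public static void main(String[] args) {
        // jdoe is directly in groupA; groupA is in groupB, which is in groupC.
        System.out.println(resolve(List.of("groupA"), 0)); // [groupA]
        System.out.println(resolve(List.of("groupA"), 2)); // [groupA, groupB, groupC]
    }
}
```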






[jira] [Commented] (HADOOP-12291) Add support for nested groups in LdapGroupsMapping

2016-05-05 Thread Esther Kundin (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15272585#comment-15272585
 ] 

Esther Kundin commented on HADOOP-12291:


You're welcome, and it was a pleasure working with you!

> Add support for nested groups in LdapGroupsMapping
> --
>
> Key: HADOOP-12291
> URL: https://issues.apache.org/jira/browse/HADOOP-12291
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Gautam Gopalakrishnan
>Assignee: Esther Kundin
>  Labels: features, patch
> Fix For: 2.8.0
>
> Attachments: HADOOP-12291.001.patch, HADOOP-12291.002.patch, 
> HADOOP-12291.003.patch, HADOOP-12291.004.patch
>
>
> When using {{LdapGroupsMapping}} with Hadoop, nested groups are not 
> supported. So for example if user {{jdoe}} is part of group A which is a 
> member of group B, the group mapping currently returns only group A.
> Currently this facility is available with {{ShellBasedUnixGroupsMapping}} and 
> SSSD (or similar tools), but it would be good to have this feature as part of 
> {{LdapGroupsMapping}} directly.






[jira] [Commented] (HADOOP-12291) Add support for nested groups in LdapGroupsMapping

2016-05-05 Thread Esther Kundin (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15272586#comment-15272586
 ] 

Esther Kundin commented on HADOOP-12291:


I got it.

> Add support for nested groups in LdapGroupsMapping
> --
>
> Key: HADOOP-12291
> URL: https://issues.apache.org/jira/browse/HADOOP-12291
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Gautam Gopalakrishnan
>Assignee: Esther Kundin
>  Labels: features, patch
> Fix For: 2.8.0
>
> Attachments: HADOOP-12291.001.patch, HADOOP-12291.002.patch, 
> HADOOP-12291.003.patch, HADOOP-12291.004.patch
>
>
> When using {{LdapGroupsMapping}} with Hadoop, nested groups are not 
> supported. So for example if user {{jdoe}} is part of group A which is a 
> member of group B, the group mapping currently returns only group A.
> Currently this facility is available with {{ShellBasedUnixGroupsMapping}} and 
> SSSD (or similar tools), but it would be good to have this feature as part of 
> {{LdapGroupsMapping}} directly.






[jira] [Assigned] (HADOOP-13102) Update GroupsMapping documentation to reflect the new changes

2016-05-05 Thread Esther Kundin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Esther Kundin reassigned HADOOP-13102:
--

Assignee: Esther Kundin

> Update GroupsMapping documentation to reflect the new changes
> -
>
> Key: HADOOP-13102
> URL: https://issues.apache.org/jira/browse/HADOOP-13102
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.8.0
>Reporter: Anu Engineer
>Assignee: Esther Kundin
> Fix For: 2.8.0
>
>
> We need to update this section in the groupsMapping.md
> {noformat}
> Line 84:
> The implementation does not attempt to resolve group hierarchies. Therefore, 
> a user must be an explicit member of a group object
> in order to be considered a member.
> {noformat} 
> With the changes in HADOOP-12291 this is no longer true, since we will have 
> the ability to walk the group hierarchies.
> We should also modify this line:
> {noformat}
> Line :  81
> It is possible to set a maximum time limit when searching and awaiting a 
> result.
> Set `hadoop.security.group.mapping.ldap.directory.search.timeout` to 0 if 
> infinite wait period is desired. Default is 10,000 milliseconds (10 seconds).
> {noformat}
> We might want to document how the new setting affects the timeout,
> and also add the new setting to this doc:
> {noformat}
>  hadoop.security.group.mapping.ldap.search.group.hierarchy.levels
> {noformat}






[jira] [Commented] (HADOOP-12291) Add support for nested groups in LdapGroupsMapping

2016-05-05 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15272575#comment-15272575
 ] 

Anu Engineer commented on HADOOP-12291:
---

[~ekundin] My apologies for not catching this earlier. But we need to modify 
the documentation for this feature too. I have filed HADOOP-13102 as a 
documentation JIRA. You can assign it to yourself or send it to me.

> Add support for nested groups in LdapGroupsMapping
> --
>
> Key: HADOOP-12291
> URL: https://issues.apache.org/jira/browse/HADOOP-12291
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Gautam Gopalakrishnan
>Assignee: Esther Kundin
>  Labels: features, patch
> Fix For: 2.8.0
>
> Attachments: HADOOP-12291.001.patch, HADOOP-12291.002.patch, 
> HADOOP-12291.003.patch, HADOOP-12291.004.patch
>
>
> When using {{LdapGroupsMapping}} with Hadoop, nested groups are not 
> supported. So for example if user {{jdoe}} is part of group A which is a 
> member of group B, the group mapping currently returns only group A.
> Currently this facility is available with {{ShellBasedUnixGroupsMapping}} and 
> SSSD (or similar tools), but it would be good to have this feature as part of 
> {{LdapGroupsMapping}} directly.






[jira] [Commented] (HADOOP-12911) Upgrade Hadoop MiniKDC with Kerby

2016-05-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15272574#comment-15272574
 ] 

Hadoop QA commented on HADOOP-12911:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 9s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
28s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 41s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 36s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
4s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 32s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
33s {color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 
46s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 4s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 7s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 32s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
43s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 38s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 5m 38s {color} 
| {color:red} root-jdk1.8.0_91 with JDK v1.8.0_91 generated 1 new + 663 
unchanged - 0 fixed = 664 total (was 663) {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 36s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 6m 36s {color} 
| {color:red} root-jdk1.7.0_95 with JDK v1.7.0_95 generated 1 new + 672 
unchanged - 0 fixed = 673 total (was 672) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
5s {color} | {color:green} root: The patch generated 0 new + 86 unchanged - 15 
fixed = 86 total (was 101) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
27s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 35s 
{color} | {color:red} hadoop-common-project/hadoop-minikdc generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 5s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 6s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 8s 
{color} | {color:green} hadoop-project in the patch passed with JDK v1.8.0_91. 
{color} |
| {color:green}+1{color} | {color:green} unit 

[jira] [Created] (HADOOP-13102) Update GroupsMapping documentation to reflect the new changes

2016-05-05 Thread Anu Engineer (JIRA)
Anu Engineer created HADOOP-13102:
-

 Summary: Update GroupsMapping documentation to reflect the new 
changes
 Key: HADOOP-13102
 URL: https://issues.apache.org/jira/browse/HADOOP-13102
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.8.0
Reporter: Anu Engineer
 Fix For: 2.8.0


We need to update this section in the groupsMapping.md
{noformat}
Line 84:

The implementation does not attempt to resolve group hierarchies. Therefore, a 
user must be an explicit member of a group object
in order to be considered a member.
{noformat} 

With the changes in HADOOP-12291 this is no longer true, since we will have the 
ability to walk the group hierarchies.

We should also modify this line:
{noformat}
Line :  81
It is possible to set a maximum time limit when searching and awaiting a result.
Set `hadoop.security.group.mapping.ldap.directory.search.timeout` to 0 if 
infinite wait period is desired. Default is 10,000 milliseconds (10 seconds).
{noformat}

We might want to document how the new setting affects the timeout,

and also add the new setting to this doc:
{noformat}
 hadoop.security.group.mapping.ldap.search.group.hierarchy.levels
{noformat}








[jira] [Commented] (HADOOP-13099) Globbing does not return file whose name has nonprintable character

2016-05-05 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15272566#comment-15272566
 ] 

John Zhuge commented on HADOOP-13099:
-

Thanks [~qwertymaniac]. Isn't this a dup of HADOOP-12436, which fixed the issue 
as a side effect? HADOOP-13051 only added a unit test.

> Globbing does not return file whose name has nonprintable character
> ---
>
> Key: HADOOP-13099
> URL: https://issues.apache.org/jira/browse/HADOOP-13099
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>
> In a directory, create a file with a name containing non-printable character, 
> e.g., '\r'.  {{dfs -ls dir}} can list such file, but {{dfs -ls dir/*}} can 
> not.
> {noformat}
> $ hdfs dfs -touchz /tmp/test/abc
> $ hdfs dfs -touchz $'/tmp/test/abc\rdef'
> $ hdfs dfs -ls /tmp/test
> Found 2 items
> -rw-r--r--   3 systest supergroup  0 2016-05-05 01:35 /tmp/test/abc
> def-r--r--   3 systest supergroup  0 2016-05-05 01:36 /tmp/test/abc
> $ hdfs dfs -ls /tmp/test | od -c
> 000   F   o   u   n   d   2   i   t   e   m   s  \n   -   r
> 020   w   -   r   -   -   r   -   -   3   s   y   s
> 040   t   e   s   t   s   u   p   e   r   g   r   o   u   p
> 060   0   2   0   1   6   -
> 100   0   5   -   0   5   0   1   :   3   5   /   t   m   p
> 120   /   t   e   s   t   /   a   b   c  \n   -   r   w   -   r   -
> 140   -   r   -   -   3   s   y   s   t   e   s   t
> 160   s   u   p   e   r   g   r   o   u   p
> 200   0   2   0   1   6   -   0   5   -   0
> 220   5   0   1   :   3   6   /   t   m   p   /   t   e   s
> 240   t   /   a   b   c  \r   d   e   f  \n
> 252
> $ hdfs dfs -ls /tmp/test/*
> -rw-r--r--   3 systest supergroup  0 2016-05-05 01:35 /tmp/test/abc
> {noformat}






[jira] [Commented] (HADOOP-12291) Add support for nested groups in LdapGroupsMapping

2016-05-05 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15272556#comment-15272556
 ] 

Anu Engineer commented on HADOOP-12291:
---

The v4 patch looks excellent. Thank you for the update and this contribution. 
+1, (Non-Binding)

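Conceptually, resolving nested groups means walking the member-of graph upward from the user's direct groups, bounded by a configured depth. A minimal sketch of that walk (a hypothetical illustration of the idea only, not the patch's LDAP implementation):

```java
import java.util.LinkedHashSet;
import java.util.Map;
import java.util.Set;

class NestedGroups {
    // groupParents maps a group to the groups it is directly a member of.
    // maxLevels bounds how far up the hierarchy we walk.
    static Set<String> resolve(Set<String> direct,
                               Map<String, Set<String>> groupParents,
                               int maxLevels) {
        Set<String> result = new LinkedHashSet<>(direct);
        Set<String> frontier = new LinkedHashSet<>(direct);
        for (int level = 0; level < maxLevels && !frontier.isEmpty(); level++) {
            Set<String> next = new LinkedHashSet<>();
            for (String g : frontier) {
                for (String parent : groupParents.getOrDefault(g, Set.of())) {
                    if (result.add(parent)) {  // only walk newly found groups
                        next.add(parent);
                    }
                }
            }
            frontier = next;
        }
        return result;
    }

    public static void main(String[] args) {
        // jdoe is a direct member of A; A is in turn a member of B.
        Map<String, Set<String>> parents = Map.of("A", Set.of("B"));
        System.out.println(resolve(Set.of("A"), parents, 1));
    }
}
```

With one hierarchy level, the user's groups expand from {A} to {A, B}, which is exactly the behavior the issue adds over the previous direct-membership-only lookup.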
> Add support for nested groups in LdapGroupsMapping
> --
>
> Key: HADOOP-12291
> URL: https://issues.apache.org/jira/browse/HADOOP-12291
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Gautam Gopalakrishnan
>Assignee: Esther Kundin
>  Labels: features, patch
> Fix For: 2.8.0
>
> Attachments: HADOOP-12291.001.patch, HADOOP-12291.002.patch, 
> HADOOP-12291.003.patch, HADOOP-12291.004.patch
>
>
> When using {{LdapGroupsMapping}} with Hadoop, nested groups are not 
> supported. So for example if user {{jdoe}} is part of group A which is a 
> member of group B, the group mapping currently returns only group A.
> Currently this facility is available with {{ShellBasedUnixGroupsMapping}} and 
> SSSD (or similar tools) but would be good to have this feature as part of 
> {{LdapGroupsMapping}} directly.






[jira] [Created] (HADOOP-13101) TestDNS.testDefaultDnsServer failed intermittently.

2016-05-05 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HADOOP-13101:
---

 Summary: TestDNS.testDefaultDnsServer failed intermittently.
 Key: HADOOP-13101
 URL: https://issues.apache.org/jira/browse/HADOOP-13101
 Project: Hadoop Common
  Issue Type: Test
  Components: test
Reporter: Xiaoyu Yao


The test failed intermittently on 
[Jenkins|https://builds.apache.org/job/PreCommit-HDFS-Build/15366/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.8.0_91.txt]
 with the following error.

{code}
Failed tests: 
  TestDNS.testDefaultDnsServer:134 
Expected: is "dd12a7999c74"
 but: was "localhost"
{code}








[jira] [Commented] (HADOOP-13098) Dynamic LogLevel setting page should accept log level string with mixing upper case and lower case

2016-05-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15272525#comment-15272525
 ] 

Hadoop QA commented on HADOOP-13098:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 49s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
6s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 39s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 43s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 4s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
38s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 11s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 11s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
45s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 3s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 9m 3s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 46s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 46s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
56s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 12s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 10m 6s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 58s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
24s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 81m 4s {color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_91 Failed junit tests | hadoop.net.TestDNS |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:cf2ee45 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12802425/HADOOP-13098-v2.patch 
|
| JIRA Issue | HADOOP-13098 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 7085ec054f37 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / d0da132 |
| 

[jira] [Commented] (HADOOP-13028) add low level counter metrics for S3A; use in read performance tests

2016-05-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15272509#comment-15272509
 ] 

Hadoop QA commented on HADOOP-13028:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 5s {color} 
| {color:red} HADOOP-13028 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12802441/HADOOP-13028-009.patch
 |
| JIRA Issue | HADOOP-13028 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9291/console |
| Powered by | Apache Yetus 0.3.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> add low level counter metrics for S3A; use in read performance tests
> 
>
> Key: HADOOP-13028
> URL: https://issues.apache.org/jira/browse/HADOOP-13028
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, metrics
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13028-001.patch, HADOOP-13028-002.patch, 
> HADOOP-13028-004.patch, HADOOP-13028-005.patch, HADOOP-13028-006.patch, 
> HADOOP-13028-007.patch, HADOOP-13028-008.patch, HADOOP-13028-009.patch, 
> HADOOP-13028-branch-2-008.patch, 
> org.apache.hadoop.fs.s3a.scale.TestS3AInputStreamPerformance-output.txt, 
> org.apache.hadoop.fs.s3a.scale.TestS3AInputStreamPerformance-output.txt
>
>
> against S3 (and other object stores), opening connections can be expensive, 
> closing connections may be expensive (a sign of a regression). 
> S3A FS and individual input streams should have counters of the # of 
> open/close/failure+reconnect operations, timers of how long things take. This 
> can be used downstream to measure efficiency of the code (how often 
> connections are being made), connection reliability, etc.






[jira] [Updated] (HADOOP-13028) add low level counter metrics for S3A; use in read performance tests

2016-05-05 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13028:

Status: Patch Available  (was: Open)

> add low level counter metrics for S3A; use in read performance tests
> 
>
> Key: HADOOP-13028
> URL: https://issues.apache.org/jira/browse/HADOOP-13028
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, metrics
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13028-001.patch, HADOOP-13028-002.patch, 
> HADOOP-13028-004.patch, HADOOP-13028-005.patch, HADOOP-13028-006.patch, 
> HADOOP-13028-007.patch, HADOOP-13028-008.patch, HADOOP-13028-009.patch, 
> HADOOP-13028-branch-2-008.patch, 
> org.apache.hadoop.fs.s3a.scale.TestS3AInputStreamPerformance-output.txt, 
> org.apache.hadoop.fs.s3a.scale.TestS3AInputStreamPerformance-output.txt
>
>
> against S3 (and other object stores), opening connections can be expensive, 
> closing connections may be expensive (a sign of a regression). 
> S3A FS and individual input streams should have counters of the # of 
> open/close/failure+reconnect operations, timers of how long things take. This 
> can be used downstream to measure efficiency of the code (how often 
> connections are being made), connection reliability, etc.






[jira] [Updated] (HADOOP-13028) add low level counter metrics for S3A; use in read performance tests

2016-05-05 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13028:

Attachment: HADOOP-13028-009.patch

Patch 009: addresses Chris's comments on patch 008:

* input range is always to EOF
* count the bytes skipped backwards too (correcting for case)

Also:
* switch to simple long types for counters; adjust findbugs accordingly
* allow an empty-source-file option to disable the tests
* document how to point the input performance tests at different files, or 
skip the tests entirely
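On the switch to simple long types: since a given input stream instance is read from one thread, plain {{long}} fields suffice for per-stream counters, with no atomic types needed. A minimal sketch of the pattern (a hypothetical illustration, not the actual S3A stream code):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

class CountingInputStream extends InputStream {
    private final InputStream in;
    // Plain longs: updated from a single reader thread, so no Atomic* needed.
    private long bytesRead;
    private long readOps;

    CountingInputStream(InputStream in) { this.in = in; }

    @Override
    public int read() throws IOException {
        int b = in.read();
        readOps++;                 // every read() call counts as one op
        if (b >= 0) {
            bytesRead++;           // EOF (-1) contributes no bytes
        }
        return b;
    }

    long getBytesRead() { return bytesRead; }
    long getReadOps() { return readOps; }

    public static void main(String[] args) throws IOException {
        CountingInputStream cis =
            new CountingInputStream(new ByteArrayInputStream(new byte[]{1, 2, 3}));
        while (cis.read() >= 0) { }  // drain the stream
        System.out.println(cis.getBytesRead() + " bytes in "
            + cis.getReadOps() + " ops");
    }
}
```

Draining three bytes makes four read() calls (three data reads plus the EOF read), so the counters report "3 bytes in 4 ops". Downstream tests can then assert on such counters to detect regressions in how often operations are issued.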


> add low level counter metrics for S3A; use in read performance tests
> 
>
> Key: HADOOP-13028
> URL: https://issues.apache.org/jira/browse/HADOOP-13028
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, metrics
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13028-001.patch, HADOOP-13028-002.patch, 
> HADOOP-13028-004.patch, HADOOP-13028-005.patch, HADOOP-13028-006.patch, 
> HADOOP-13028-007.patch, HADOOP-13028-008.patch, HADOOP-13028-009.patch, 
> HADOOP-13028-branch-2-008.patch, 
> org.apache.hadoop.fs.s3a.scale.TestS3AInputStreamPerformance-output.txt, 
> org.apache.hadoop.fs.s3a.scale.TestS3AInputStreamPerformance-output.txt
>
>
> against S3 (and other object stores), opening connections can be expensive, 
> closing connections may be expensive (a sign of a regression). 
> S3A FS and individual input streams should have counters of the # of 
> open/close/failure+reconnect operations, timers of how long things take. This 
> can be used downstream to measure efficiency of the code (how often 
> connections are being made), connection reliability, etc.






[jira] [Updated] (HADOOP-13028) add low level counter metrics for S3A; use in read performance tests

2016-05-05 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13028:

Status: Open  (was: Patch Available)

> add low level counter metrics for S3A; use in read performance tests
> 
>
> Key: HADOOP-13028
> URL: https://issues.apache.org/jira/browse/HADOOP-13028
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, metrics
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13028-001.patch, HADOOP-13028-002.patch, 
> HADOOP-13028-004.patch, HADOOP-13028-005.patch, HADOOP-13028-006.patch, 
> HADOOP-13028-007.patch, HADOOP-13028-008.patch, 
> HADOOP-13028-branch-2-008.patch, 
> org.apache.hadoop.fs.s3a.scale.TestS3AInputStreamPerformance-output.txt, 
> org.apache.hadoop.fs.s3a.scale.TestS3AInputStreamPerformance-output.txt
>
>
> against S3 (and other object stores), opening connections can be expensive, 
> closing connections may be expensive (a sign of a regression). 
> S3A FS and individual input streams should have counters of the # of 
> open/close/failure+reconnect operations, timers of how long things take. This 
> can be used downstream to measure efficiency of the code (how often 
> connections are being made), connection reliability, etc.





