[jira] [Assigned] (HADOOP-7088) JMX Bean that exposes version and build information

2015-05-29 Thread surendra singh lilhore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

surendra singh lilhore reassigned HADOOP-7088:
--

Assignee: surendra singh lilhore

> JMX Bean that exposes version and build information
> ---
>
> Key: HADOOP-7088
> URL: https://issues.apache.org/jira/browse/HADOOP-7088
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 0.23.0
>Reporter: Dmytro Molkov
>Assignee: surendra singh lilhore
>Priority: Minor
>
> It would be great if each daemon in the cluster had a JMX bean that would 
> expose the build version, hadoop version and information of this sort.
> This makes it easier for cluster management tools to identify versions of the 
> software running on different components.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12044) test-patch should handle module movement

2015-05-29 Thread Sean Busbey (JIRA)
Sean Busbey created HADOOP-12044:


 Summary: test-patch should handle module movement
 Key: HADOOP-12044
 URL: https://issues.apache.org/jira/browse/HADOOP-12044
 Project: Hadoop Common
  Issue Type: Test
  Components: test
Reporter: Sean Busbey


test-patch reported a variety of errors against HADOOP-11804, caused by the fact 
that a module moved.





[jira] [Commented] (HADOOP-11929) add test-patch plugin points for customizing build layout

2015-05-29 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14565828#comment-14565828
 ] 

Sean Busbey commented on HADOOP-11929:
--

{quote}
The whole premise of this change is that we should spend more human time 
building and maintaining a complex explicit dependency management 
infrastructure to save CPU time on the build cluster.
{quote}

I think there's been a misunderstanding. The point of this particular patch is 
just to enable test-patch users to specify when they need to do something special. 
The most obvious use case would be when particular profiles are used (either 
specific to a project or for different test checks). The one I personally care 
about is the use of a single git repo for multiple related projects (i.e. 
NIFI-577 or the case of the maven-parent-pom project).

The example personality (hadoop.sh) is just Allen moving the long-standing 
hadoop logic quirks into a single location so we can more easily maintain them. 
In the process he has found and fixed some errors, but that's strictly a side 
effect.

> add test-patch plugin points for customizing build layout
> -
>
> Key: HADOOP-11929
> URL: https://issues.apache.org/jira/browse/HADOOP-11929
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Sean Busbey
>Assignee: Allen Wittenauer
>Priority: Minor
> Attachments: HADOOP-11929.00.patch, HADOOP-11929.01.patch, 
> HADOOP-11929.02.patch, HADOOP-11929.03.patch, hadoop.sh
>
>
> Sean Busbey and I had a chat about this at the Bug Bash. Here's the proposal:
>   * Introduce the concept of a 'personality module'.
>   * There can be only one personality.
>   * Personalities provide a single function that takes as input the name of 
> the test currently being processed
>   * This function uses two other built-in functions to define two queues: 
> maven module names and profiles to use against those maven module names
>   * If something needs to be compiled prior to this test (but not actually 
> tested), the personality will be responsible for doing that compilation
> In hadoop, the classic example is hadoop-hdfs needs common compiled with the 
> native bits. So prior to the javac tests, the personality would check 
> CHANGED_MODULES, see hadoop-hdfs, and compile common w/ -Pnative prior to 
> letting test-patch.sh do the work in hadoop-hdfs. Another example is our lack 
> of test coverage of various native bits. Since these require profiles to be 
> defined prior to compilation, the personality could see that something 
> touches native code, set the appropriate profile, and let test-patch.sh be on 
> its way.
> One way to think of it is some higher order logic on top of the automated 
> 'figure out what modules and what tests to run' functions.
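The proposed personality hook might take roughly the following shape. This is an illustrative sketch only: the function name, the queue format, and the built-in helpers are hypothetical stand-ins, not the actual test-patch API.

```shell
# Hypothetical shape of a test-patch "personality" function. Given the name
# of the test currently being processed, it emits a queue of maven modules
# (with any profiles) and could pre-build dependencies first -- e.g. build
# hadoop-common with -Pnative before hdfs javac checks.
hadoop_personality() {
  testname="$1"
  case "$testname" in
    javac)
      # hadoop-hdfs needs common compiled with the native bits first
      echo "hadoop-common -Pnative"
      echo "hadoop-hdfs"
      ;;
    *)
      # default: let test-patch operate on the repo root
      echo "."
      ;;
  esac
}

hadoop_personality javac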





[jira] [Commented] (HADOOP-12036) Consolidate all of the cmake extensions in one directory

2015-05-29 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14565793#comment-14565793
 ] 

Allen Wittenauer commented on HADOOP-12036:
---

bq. On the other hand, everyone agrees that having a nightly build on Solaris / 
BSD / whatever will enforce portability.

I think it's funny that you argued against depending upon the nightly for Linux 
builds just a few hours ago in another JIRA.

> Consolidate all of the cmake extensions in one directory
> 
>
> Key: HADOOP-12036
> URL: https://issues.apache.org/jira/browse/HADOOP-12036
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Allen Wittenauer
>Assignee: Alan Burlison
> Attachments: prototype01.txt
>
>
> Rather than have a half-dozen redefinitions, custom extensions, etc, we 
> should move them all to one location so that the cmake environment is 
> consistent between the various native components.





[jira] [Commented] (HADOOP-12036) Consolidate all of the cmake extensions in one directory

2015-05-29 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14565779#comment-14565779
 ] 

Colin Patrick McCabe commented on HADOOP-12036:
---

bq. What about the test_bulk_crc32 app for example, is that multithreaded? If 
it's OK to use -pthread on that then I think -pthread can safely be added to 
the global flags, as you suggest.

Some test programs aren't multithreaded, but there is no harm in passing 
{{\-pthread}} in all cases.  The overhead is extremely minimal, and it brings 
the test programs closer to what they're supposed to test (which is real hadoop 
performance / correctness in a multi-threaded environment.)

The main thing to keep in mind is that we don't want JNI linked into programs 
unless they actually use JNI.  But that shouldn't be an issue based on my 
reading of the patch.

bq. I've put in a modification that allows you to preset a variable, 
GCC_SHARED_FLAGS. If it's not set it currently defaults to "-g -O2 -Wall", if 
it is OK to do so I'll change that to "-g -O2 -Wall -pthread 
-D_LARGEFILE_SOURCE", then the hadoop-yarn-project CMakeLists.txt can simply 
set it appropriately before including HadoopCommon.cmake

Doesn't that mean that if you change {{-O2}} to {{-O3}} in the common code, 
yarn will be unaffected?  I would prefer to override only the thing that needs 
to be different, aka {{_LARGEFILE_SOURCE}}.

bq. Sorry to sound like a stuck record but you still haven't given me what I 
consider to be a satisfactory reason why _GNU_SOURCE should be set as a 
compiler command-line flag, even on Linux. Just to be completely clear, I'm not 
intending to just remove it and have stuff blow up on Linux, I'm intending to 
go through all the source and add the necessary #define/#undef blocks so that 
all the Linux-specific code continues to work. Stopping this practice will help 
keep the source portable, as I've explained at length in HADOOP-11997.

Everyone agrees that not setting _GNU_SOURCE (or setting it in a more limited 
way) *won't* enforce portability.  For example, you still get the non-POSIX 
definition of {{strerror_r}}.  You still have all the actually non-portable 
things in the places they are needed (i.e. cgroups, etc.).

On the other hand, everyone agrees that having a nightly build on Solaris / BSD 
/ whatever *will* enforce portability.

From experience, trying to conditionally set {{_GNU_SOURCE}} is a huge pain in 
the rear for everyone, leads to mysterious compile errors, and makes life 
difficult for new contributors.  There's just no point.

If you put the energy and effort that is going into worrying about 
{{_GNU_SOURCE}} into creating a Solaris or BSD nightly build, you would have a 
guarantee that portability would not regress.  It is a waste of time to worry 
about {{_GNU_SOURCE}}, although if you want to avoid setting it on Solaris 
(just to keep things "tidy") that is fine.

> Consolidate all of the cmake extensions in one directory
> 
>
> Key: HADOOP-12036
> URL: https://issues.apache.org/jira/browse/HADOOP-12036
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Allen Wittenauer
>Assignee: Alan Burlison
> Attachments: prototype01.txt
>
>
> Rather than have a half-dozen redefinitions, custom extensions, etc, we 
> should move them all to one location so that the cmake environment is 
> consistent between the various native components.





[jira] [Commented] (HADOOP-11927) Fix "undefined reference to dlopen" error when compiling libhadooppipes

2015-05-29 Thread Xianyin Xin (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14565741#comment-14565741
 ] 

Xianyin Xin commented on HADOOP-11927:
--

Thanks [~cmccabe]! And thanks [~trtrmitya] and [~cnauroth] for your valuable 
comments!

> Fix "undefined reference to dlopen" error when compiling libhadooppipes
> ---
>
> Key: HADOOP-11927
> URL: https://issues.apache.org/jira/browse/HADOOP-11927
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, native, tools
> Environment: SUSE Linux Enterprise Server 11 SP3  (x86_64)
>Reporter: Xianyin Xin
>Assignee: Xianyin Xin
> Fix For: 2.8.0
>
> Attachments: HADOOP-11927-001.patch, HADOOP-11927-002.patch
>
>
> When compiling hadoop with native support, we encounter the compile error 
> "undefined reference to `dlopen'" when linking libcrypto. We'd better link 
> libdl explicitly in the CMakeLists of hadoop-pipes.





[jira] [Commented] (HADOOP-12043) Display warning if defaultFs is not set when running fs commands.

2015-05-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14565715#comment-14565715
 ] 

Hudson commented on HADOOP-12043:
-

FAILURE: Integrated in Hadoop-trunk-Commit #7930 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7930/])
HADOOP-12043. Display warning if defaultFs is not set when running fs commands. 
Contributed by Eddy Xu. (wang: rev 374ddd9f9ea43b0e730a7baab3e51e6893d40420)
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Ls.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestLs.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/FsCommand.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java


> Display warning if defaultFs is not set when running fs commands.
> -
>
> Key: HADOOP-12043
> URL: https://issues.apache.org/jira/browse/HADOOP-12043
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.7.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-8322.000.patch, HDFS-8322.001.patch, 
> HDFS-8322.002.patch, HDFS-8322.003.patch, HDFS-8322.003.patch, 
> HDFS-8322.004.patch, HDFS-8322.005.patch, HDFS-8322.006.patch
>
>
> Using {{LocalFileSystem}} is rarely the intention of running {{hadoop fs 
> -ls}}.
> This JIRA proposes displaying a warning message if hadoop fs -ls is showing 
> the local filesystem or using default fs.  
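The proposed warning can be sketched in shell as follows. This is purely illustrative (the actual change lives in the Java FsCommand/Ls code paths listed in the commit above); the function name and message text are made up for the example.

```shell
# Illustrative sketch of the HADOOP-12043 behavior: warn when fs.defaultFS
# is unset or still points at the local filesystem before running an fs
# command.
warn_if_local_fs() {
  case "$1" in
    ""|file://*)
      echo "WARNING: fs.defaultFS is not set; fs commands will use the local filesystem" >&2
      ;;
  esac
}

warn_if_local_fs "file:///"          # warns: defaultFs was never configured
warn_if_local_fs "hdfs://nn:8020"    # silent: a real default filesystem is set
```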





[jira] [Commented] (HADOOP-12043) Display warning if defaultFs is not set when running fs commands.

2015-05-29 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14565703#comment-14565703
 ] 

Lei (Eddy) Xu commented on HADOOP-12043:


Thanks for reviews and suggestions, [~andrew.wang] and [~aw]


> Display warning if defaultFs is not set when running fs commands.
> -
>
> Key: HADOOP-12043
> URL: https://issues.apache.org/jira/browse/HADOOP-12043
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.7.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-8322.000.patch, HDFS-8322.001.patch, 
> HDFS-8322.002.patch, HDFS-8322.003.patch, HDFS-8322.003.patch, 
> HDFS-8322.004.patch, HDFS-8322.005.patch, HDFS-8322.006.patch
>
>
> Using {{LocalFileSystem}} is rarely the intention of running {{hadoop fs 
> -ls}}.
> This JIRA proposes displaying a warning message if hadoop fs -ls is showing 
> the local filesystem or using default fs.  





[jira] [Updated] (HADOOP-12043) Display warning if defaultFs is not set when running fs commands.

2015-05-29 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-12043:
-
   Resolution: Fixed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Committed to trunk and branch-2. Thanks to Eddy for the patch and to Allen for 
the review.

> Display warning if defaultFs is not set when running fs commands.
> -
>
> Key: HADOOP-12043
> URL: https://issues.apache.org/jira/browse/HADOOP-12043
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.7.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-8322.000.patch, HDFS-8322.001.patch, 
> HDFS-8322.002.patch, HDFS-8322.003.patch, HDFS-8322.003.patch, 
> HDFS-8322.004.patch, HDFS-8322.005.patch, HDFS-8322.006.patch
>
>
> Using {{LocalFileSystem}} is rarely the intention of running {{hadoop fs 
> -ls}}.
> This JIRA proposes displaying a warning message if hadoop fs -ls is showing 
> the local filesystem or using default fs.  





[jira] [Updated] (HADOOP-12043) Display warning if defaultFs is not set when running fs commands.

2015-05-29 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-12043:
-
Summary: Display warning if defaultFs is not set when running fs commands.  
(was: Display warning if defaultFs is not set when running dfs commands.)

> Display warning if defaultFs is not set when running fs commands.
> -
>
> Key: HADOOP-12043
> URL: https://issues.apache.org/jira/browse/HADOOP-12043
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.7.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Minor
> Attachments: HDFS-8322.000.patch, HDFS-8322.001.patch, 
> HDFS-8322.002.patch, HDFS-8322.003.patch, HDFS-8322.003.patch, 
> HDFS-8322.004.patch, HDFS-8322.005.patch, HDFS-8322.006.patch
>
>
> Using {{LocalFileSystem}} is rarely the intention of running {{hadoop fs 
> -ls}}.
> This JIRA proposes displaying a warning message if hadoop fs -ls is showing 
> the local filesystem or using default fs.  





[jira] [Moved] (HADOOP-12043) Display warning if defaultFs is not set when running dfs commands.

2015-05-29 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang moved HDFS-8322 to HADOOP-12043:


  Component/s: (was: HDFS)
   fs
 Target Version/s: 2.8.0  (was: 3.0.0, 2.8.0)
Affects Version/s: (was: 2.7.0)
   2.7.0
  Key: HADOOP-12043  (was: HDFS-8322)
  Project: Hadoop Common  (was: Hadoop HDFS)

> Display warning if defaultFs is not set when running dfs commands.
> --
>
> Key: HADOOP-12043
> URL: https://issues.apache.org/jira/browse/HADOOP-12043
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.7.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Minor
> Attachments: HDFS-8322.000.patch, HDFS-8322.001.patch, 
> HDFS-8322.002.patch, HDFS-8322.003.patch, HDFS-8322.003.patch, 
> HDFS-8322.004.patch, HDFS-8322.005.patch, HDFS-8322.006.patch
>
>
> Using {{LocalFileSystem}} is rarely the intention of running {{hadoop fs 
> -ls}}.
> This JIRA proposes displaying a warning message if hadoop fs -ls is showing 
> the local filesystem or using default fs.  





[jira] [Commented] (HADOOP-11807) add a lint mode to releasedocmaker

2015-05-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14565670#comment-14565670
 ] 

Hadoop QA commented on HADOOP-11807:


\\
\\
| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |   0m  0s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | release audit |   0m 15s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| | |   0m 18s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12736285/HADOOP-11807.005.patch 
|
| Optional Tests |  |
| git revision | trunk / 6aec13c |
| Java | 1.7.0_55 |
| uname | Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6872/console |


This message was automatically generated.

> add a lint mode to releasedocmaker
> --
>
> Key: HADOOP-11807
> URL: https://issues.apache.org/jira/browse/HADOOP-11807
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, documentation
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: ramtin
>Priority: Minor
> Attachments: HADOOP-11807.001.patch, HADOOP-11807.002.patch, 
> HADOOP-11807.003.patch, HADOOP-11807.004.patch, HADOOP-11807.005.patch
>
>
> * check for missing components (error)
> * check for missing assignee (error)
> * check for common version problems (warning)
> * add an error message for missing release notes





[jira] [Updated] (HADOOP-11807) add a lint mode to releasedocmaker

2015-05-29 Thread ramtin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramtin updated HADOOP-11807:

Attachment: HADOOP-11807.005.patch

New patch:
- Show error/warning messages and fail only when the script is called with the 
"--lint" param.
- checkVersionString accepts both the 2.7.0 and HDFS-7285 formats.
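The version-string check described in the second bullet could look roughly like the following. This is a hedged sketch, not the code in the patch (releasedocmaker itself is Python; the function name here mirrors the one mentioned above but the regex is an assumption).

```shell
# Sketch of a checkVersionString-style validator: accept release version
# numbers (e.g. 2.7.0) and feature-branch JIRA keys (e.g. HDFS-7285),
# reject anything else.
check_version_string() {
  printf '%s\n' "$1" | grep -Eq '^[0-9]+(\.[0-9]+)+$|^[A-Z]+-[0-9]+$'
}

check_version_string "2.7.0"     && echo "2.7.0 accepted"
check_version_string "HDFS-7285" && echo "HDFS-7285 accepted"
check_version_string "bogus"     || echo "bogus rejected"
```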

> add a lint mode to releasedocmaker
> --
>
> Key: HADOOP-11807
> URL: https://issues.apache.org/jira/browse/HADOOP-11807
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, documentation
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: ramtin
>Priority: Minor
> Attachments: HADOOP-11807.001.patch, HADOOP-11807.002.patch, 
> HADOOP-11807.003.patch, HADOOP-11807.004.patch, HADOOP-11807.005.patch
>
>
> * check for missing components (error)
> * check for missing assignee (error)
> * check for common version problems (warning)
> * add an error message for missing release notes





[jira] [Commented] (HADOOP-11882) Audit all of the findbugs excludes

2015-05-29 Thread Gabor Liptak (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14565630#comment-14565630
 ] 

Gabor Liptak commented on HADOOP-11882:
---

[~aw] Would you also be interested in patches to clean up Findbugs warnings?

> Audit all of the findbugs excludes
> --
>
> Key: HADOOP-11882
> URL: https://issues.apache.org/jira/browse/HADOOP-11882
> Project: Hadoop Common
>  Issue Type: Test
>  Components: build, test
>Reporter: Allen Wittenauer
>
> It would be worth while to verify that all of the exclusions listed still 
> make sense/are real/etc.





[jira] [Commented] (HADOOP-12036) Consolidate all of the cmake extensions in one directory

2015-05-29 Thread Alan Burlison (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14565616#comment-14565616
 ] 

Alan Burlison commented on HADOOP-12036:


* What about the test_bulk_crc32 app for example, is that multithreaded? If 
it's OK to use -pthread on that then I think -pthread can safely be added to 
the global flags, as you suggest.

* I've put in a modification that allows you to preset a variable, 
GCC_SHARED_FLAGS. If it's not set it currently defaults to "-g -O2 -Wall", if 
it is OK to do so I'll change that to "-g -O2 -Wall -pthread 
-D_LARGEFILE_SOURCE", then the hadoop-yarn-project CMakeLists.txt can simply 
set it appropriately before including HadoopCommon.cmake

* Sorry to sound like a stuck record but you still haven't given me what I 
consider to be a satisfactory reason why _GNU_SOURCE should be set as a 
compiler command-line flag, even on Linux. Just to be completely clear, I'm not 
intending to just remove it and have stuff blow up on Linux, I'm intending to 
go through all the source and add the necessary #define/#undef blocks so that 
all the Linux-specific code continues to work. Stopping this practice will help 
keep the source portable, as I've explained at length in HADOOP-11997.

> Consolidate all of the cmake extensions in one directory
> 
>
> Key: HADOOP-12036
> URL: https://issues.apache.org/jira/browse/HADOOP-12036
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Allen Wittenauer
>Assignee: Alan Burlison
> Attachments: prototype01.txt
>
>
> Rather than have a half-dozen redefinitions, custom extensions, etc, we 
> should move them all to one location so that the cmake environment is 
> consistent between the various native components.





[jira] [Commented] (HADOOP-11885) hadoop-dist dist-layout-stitching.sh does not work with dash

2015-05-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14565539#comment-14565539
 ] 

Hudson commented on HADOOP-11885:
-

FAILURE: Integrated in Hadoop-trunk-Commit #7927 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7927/])
HADOOP-11885. hadoop-dist dist-layout-stitching.sh does not work with dash. 
(wang) (andrew.wang: rev 7673d4f205b26a6a26cfc47d999ece96f3c42782)
* pom.xml
* hadoop-hdfs-project/hadoop-hdfs/pom.xml
* hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
* hadoop-yarn-project/pom.xml
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/pom.xml
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/pom.xml
* hadoop-mapreduce-project/pom.xml
* hadoop-common-project/hadoop-kms/pom.xml
* hadoop-common-project/hadoop-common/pom.xml
* hadoop-project-dist/pom.xml
* hadoop-dist/pom.xml
* hadoop-common-project/hadoop-common/CHANGES.txt


> hadoop-dist dist-layout-stitching.sh does not work with dash
> 
>
> Key: HADOOP-11885
> URL: https://issues.apache.org/jira/browse/HADOOP-11885
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.7.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>  Labels: BB2015-05-TBR
> Fix For: 2.8.0
>
> Attachments: hadoop-11885.001.patch, hadoop-11885.002.patch
>
>
> Saw this while building the EC branch, pretty sure it'll repro on trunk 
> though too.
> {noformat}
>  [exec] ./dist-layout-stitching.sh: 53: [: bin: unexpected operator
>  [exec] ./dist-layout-stitching.sh: 53: [: etc: unexpected operator
>  [exec] ./dist-layout-stitching.sh: 53: [: lib: unexpected operator
>  [exec] ./dist-layout-stitching.sh: 53: [: libexec: unexpected operator
>  [exec] ./dist-layout-stitching.sh: 53: [: sbin: unexpected operator
>  [exec] ./dist-layout-stitching.sh: 53: [: share: unexpected operator
>  [exec] ./dist-layout-stitching.sh: 53: [: share: unexpected operator
>  [exec] ./dist-layout-stitching.sh: 53: [: bin: unexpected operator
>  [exec] ./dist-layout-stitching.sh: 53: [: etc: unexpected operator
>  [exec] ./dist-layout-stitching.sh: 53: [: include: unexpected operator
>  [exec] ./dist-layout-stitching.sh: 53: [: lib: unexpected operator
>  [exec] ./dist-layout-stitching.sh: 53: [: libexec: unexpected operator
>  [exec] ./dist-layout-stitching.sh: 53: [: sbin: unexpected operator
>  [exec] ./dist-layout-stitching.sh: 53: [: share: unexpected operator
>  [exec] $ copy 
> /home/andrew/dev/hadoop/hdfs-7285/hadoop-common-project/hadoop-nfs/target/hadoop-nfs-3.0.0-SNAPSHOT
>  .
>  [exec] $ copy 
> /home/andrew/dev/hadoop/hdfs-7285/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-3.0.0-SNAPSHOT
>  .
>  [exec] $ copy 
> /home/andrew/dev/hadoop/hdfs-7285/hadoop-hdfs-project/hadoop-hdfs-nfs/target/hadoop-hdfs-nfs-3.0.0-SNAPSHOT
>  .
>  [exec] $ copy 
> /home/andrew/dev/hadoop/hdfs-7285/hadoop-yarn-project/target/hadoop-yarn-project-3.0.0-SNAPSHOT
>  .
>  [exec] ./dist-layout-stitching.sh: 53: [: share: unexpected operator
>  [exec] ./dist-layout-stitching.sh: 53: [: bin: unexpected operator
>  [exec] ./dist-layout-stitching.sh: 53: [: etc: unexpected operator
>  [exec] ./dist-layout-stitching.sh: 53: [: libexec: unexpected operator
>  [exec] ./dist-layout-stitching.sh: 53: [: sbin: unexpected operator
>  [exec] $ copy 
> /home/andrew/dev/hadoop/hdfs-7285/hadoop-mapreduce-project/target/hadoop-mapreduce-3.0.0-SNAPSHOT
>  .
>  [exec] $ copy 
> /home/andrew/dev/hadoop/hdfs-7285/hadoop-tools/hadoop-tools-dist/target/hadoop-tools-dist-3.0.0-SNAPSHOT
>  .
>  [exec] ./dist-layout-stitching.sh: 53: [: share: unexpected operator
>  [exec] ./dist-layout-stitching.sh: 53: [: bin: unexpected operator
>  [exec] ./dist-layout-stitching.sh: 53: [: etc: unexpected operator
>  [exec] ./dist-layout-stitching.sh: 53: [: libexec: unexpected operator
>  [exec] ./dist-layout-stitching.sh: 53: [: sbin: unexpected operator
>  [exec] ./dist-layout-stitching.sh: 53: [: share: unexpected operator
>  [exec] ./dist-layout-stitching.sh: 53: [: include: unexpected operator
>  [exec] ./dist-layout-stitching.sh: 53: [: share: unexpected operator
> {noformat}
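The "unexpected operator" messages above are the classic dash symptom: dash's `[` builtin implements only POSIX test, so the bash-only `==` operator is rejected. A minimal sketch of the problem and the portable fix (illustrative; the real fix is in the stitching logic moved into the pom files):

```shell
# dash's '[' is the POSIX test builtin; '==' is a bashism it rejects,
# printing e.g. "[: bin: unexpected operator" and failing the test.
d=bin

# Bashism -- works in bash, fails under dash:
#   [ "$d" == "bin" ]

# Portable POSIX form -- behaves identically under dash and bash:
if [ "$d" = "bin" ]; then
  echo "matched $d"
fi
```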





[jira] [Commented] (HADOOP-11929) add test-patch plugin points for customizing build layout

2015-05-29 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14565529#comment-14565529
 ] 

Allen Wittenauer commented on HADOOP-11929:
---

bq.  I could be misinterpreting here, but it sounds like you are advocating 
doing less testing in the precommit build and relying more on the nightly and 
weekly builds.

Not only 'here', but pretty much the entire purpose of this JIRA.  

I'm advocating *realistic* testing in precommit.  Doing everything, all the 
time isn't realistic with a project of this size and complexity while still 
serving the community in an efficient manner.  There is a happy medium. 

bq.  If we should be doing anything, it is increasing the amount of precommit 
testing.

.. and ultimately, this patch will do that.  It will turn on a lot of the 
things that are currently off and be a key enabler for other things in the 
future.

bq. We could get into a situation where a piece of code wasn't being tested for 
months or even years (it's happened in the past...)

Given your comments thus far, let me clue you in: it's happening now, even in 
the nightly build, because we don't have everything turned on and/or it's 
inappropriate for test-patch to do it and/or it's currently impossible to do so 
(e.g., missing dependencies, missing JVMs, etc).

bq. The whole premise of this change is that we should spend more human time 
building and maintaining a complex explicit dependency management 
infrastructure to save CPU time on the build cluster.

This is when I throw my hands up.  It's very obvious at this point you have 
zero understanding of the issues and the purpose of this JIRA.


> add test-patch plugin points for customizing build layout
> -
>
> Key: HADOOP-11929
> URL: https://issues.apache.org/jira/browse/HADOOP-11929
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Sean Busbey
>Assignee: Allen Wittenauer
>Priority: Minor
> Attachments: HADOOP-11929.00.patch, HADOOP-11929.01.patch, 
> HADOOP-11929.02.patch, HADOOP-11929.03.patch, hadoop.sh
>
>
> Sean Busbey and I had a chat about this at the Bug Bash. Here's the proposal:
>   * Introduce the concept of a 'personality module'.
>   * There can be only one personality.
>   * Personalities provide a single function that takes as input the name of 
> the test currently being processed
>   * This function uses two other built-in functions to define two queues: 
> maven module names and profiles to use against those maven module names
>   * If something needs to be compiled prior to this test (but not actually 
> tested), the personality will be responsible for doing that compilation
> In hadoop, the classic example is hadoop-hdfs needs common compiled with the 
> native bits. So prior to the javac tests, the personality would check 
> CHANGED_MODULES, see hadoop-hdfs, and compile common w/ -Pnative prior to 
> letting test-patch.sh do the work in hadoop-hdfs. Another example is our lack 
> of test coverage of various native bits. Since these require profiles to be 
> defined prior to compilation, the personality could see that something 
> touches native code, set the appropriate profile, and let test-patch.sh be on 
> its way.
> One way to think of it is some higher order logic on top of the automated 
> 'figure out what modules and what tests to run' functions.





[jira] [Commented] (HADOOP-12031) test-patch.sh should have an xml plugin

2015-05-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14565520#comment-14565520
 ] 

Hadoop QA commented on HADOOP-12031:


\\
\\
| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | reexec |   0m  0s | dev-support patch detected. |
| {color:blue}0{color} | pre-patch |   0m  0s | Pre-patch trunk compilation is 
healthy. |
| {color:blue}0{color} | @author |   0m  0s | Skipping @author checks as 
test-patch has been patched. |
| {color:green}+1{color} | release audit |   0m 15s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | shellcheck |   0m  5s | There were no new shellcheck 
(v0.3.3) issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| | |   0m 23s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12736247/HADOOP-12031.005.patch 
|
| Optional Tests | shellcheck |
| git revision | trunk / 7673d4f |
| Java | 1.7.0_55 |
| uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6871/console |


This message was automatically generated.

> test-patch.sh should have an xml plugin
> ---
>
> Key: HADOOP-12031
> URL: https://issues.apache.org/jira/browse/HADOOP-12031
> Project: Hadoop Common
>  Issue Type: Test
>  Components: build
>Reporter: Allen Wittenauer
>Assignee: Kengo Seki
>  Labels: newbie, test-patch
> Attachments: HADOOP-12031.001.patch, HADOOP-12031.002.patch, 
> HADOOP-12031.003.patch, HADOOP-12031.004.patch, HADOOP-12031.005.patch
>
>
> HADOOP-11178 demonstrates why there is a need to verify xml files on a patch 
> change.





[jira] [Commented] (HADOOP-12031) test-patch.sh should have an xml plugin

2015-05-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14565517#comment-14565517
 ] 

Hadoop QA commented on HADOOP-12031:


(!) A patch to test-patch or smart-apply-patch has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6871/console in case of 
problems.

> test-patch.sh should have an xml plugin
> ---
>
> Key: HADOOP-12031
> URL: https://issues.apache.org/jira/browse/HADOOP-12031
> Project: Hadoop Common
>  Issue Type: Test
>  Components: build
>Reporter: Allen Wittenauer
>Assignee: Kengo Seki
>  Labels: newbie, test-patch
> Attachments: HADOOP-12031.001.patch, HADOOP-12031.002.patch, 
> HADOOP-12031.003.patch, HADOOP-12031.004.patch, HADOOP-12031.005.patch
>
>
> HADOOP-11178 demonstrates why there is a need to verify xml files on a patch 
> change.





[jira] [Updated] (HADOOP-11885) hadoop-dist dist-layout-stitching.sh does not work with dash

2015-05-29 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-11885:
-
   Resolution: Fixed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Pushed to trunk and branch-2, thanks Colin and others for reviewing!

> hadoop-dist dist-layout-stitching.sh does not work with dash
> 
>
> Key: HADOOP-11885
> URL: https://issues.apache.org/jira/browse/HADOOP-11885
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.7.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>  Labels: BB2015-05-TBR
> Fix For: 2.8.0
>
> Attachments: hadoop-11885.001.patch, hadoop-11885.002.patch
>
>
> Saw this while building the EC branch, pretty sure it'll repro on trunk 
> though too.
> {noformat}
>  [exec] ./dist-layout-stitching.sh: 53: [: bin: unexpected operator
>  [exec] ./dist-layout-stitching.sh: 53: [: etc: unexpected operator
>  [exec] ./dist-layout-stitching.sh: 53: [: lib: unexpected operator
>  [exec] ./dist-layout-stitching.sh: 53: [: libexec: unexpected operator
>  [exec] ./dist-layout-stitching.sh: 53: [: sbin: unexpected operator
>  [exec] ./dist-layout-stitching.sh: 53: [: share: unexpected operator
>  [exec] ./dist-layout-stitching.sh: 53: [: share: unexpected operator
>  [exec] ./dist-layout-stitching.sh: 53: [: bin: unexpected operator
>  [exec] ./dist-layout-stitching.sh: 53: [: etc: unexpected operator
>  [exec] ./dist-layout-stitching.sh: 53: [: include: unexpected operator
>  [exec] ./dist-layout-stitching.sh: 53: [: lib: unexpected operator
>  [exec] ./dist-layout-stitching.sh: 53: [: libexec: unexpected operator
>  [exec] ./dist-layout-stitching.sh: 53: [: sbin: unexpected operator
>  [exec] ./dist-layout-stitching.sh: 53: [: share: unexpected operator
>  [exec] $ copy 
> /home/andrew/dev/hadoop/hdfs-7285/hadoop-common-project/hadoop-nfs/target/hadoop-nfs-3.0.0-SNAPSHOT
>  .
>  [exec] $ copy 
> /home/andrew/dev/hadoop/hdfs-7285/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-3.0.0-SNAPSHOT
>  .
>  [exec] $ copy 
> /home/andrew/dev/hadoop/hdfs-7285/hadoop-hdfs-project/hadoop-hdfs-nfs/target/hadoop-hdfs-nfs-3.0.0-SNAPSHOT
>  .
>  [exec] $ copy 
> /home/andrew/dev/hadoop/hdfs-7285/hadoop-yarn-project/target/hadoop-yarn-project-3.0.0-SNAPSHOT
>  .
>  [exec] ./dist-layout-stitching.sh: 53: [: share: unexpected operator
>  [exec] ./dist-layout-stitching.sh: 53: [: bin: unexpected operator
>  [exec] ./dist-layout-stitching.sh: 53: [: etc: unexpected operator
>  [exec] ./dist-layout-stitching.sh: 53: [: libexec: unexpected operator
>  [exec] ./dist-layout-stitching.sh: 53: [: sbin: unexpected operator
>  [exec] $ copy 
> /home/andrew/dev/hadoop/hdfs-7285/hadoop-mapreduce-project/target/hadoop-mapreduce-3.0.0-SNAPSHOT
>  .
>  [exec] $ copy 
> /home/andrew/dev/hadoop/hdfs-7285/hadoop-tools/hadoop-tools-dist/target/hadoop-tools-dist-3.0.0-SNAPSHOT
>  .
>  [exec] ./dist-layout-stitching.sh: 53: [: share: unexpected operator
>  [exec] ./dist-layout-stitching.sh: 53: [: bin: unexpected operator
>  [exec] ./dist-layout-stitching.sh: 53: [: etc: unexpected operator
>  [exec] ./dist-layout-stitching.sh: 53: [: libexec: unexpected operator
>  [exec] ./dist-layout-stitching.sh: 53: [: sbin: unexpected operator
>  [exec] ./dist-layout-stitching.sh: 53: [: share: unexpected operator
>  [exec] ./dist-layout-stitching.sh: 53: [: include: unexpected operator
>  [exec] ./dist-layout-stitching.sh: 53: [: share: unexpected operator
> {noformat}
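For context, the "unexpected operator" errors above are the classic dash-vs-bash pitfall: dash's POSIX `[` builtin rejects the bashism `==`. A minimal sketch of the portable fix (variable and value are illustrative):

```shell
#!/bin/sh
# Under dash, [ "$f" == "bin" ] fails with "[: bin: unexpected operator"
# because == is a bashism. POSIX = (or a case statement) works in both shells.
f="bin"
if [ "${f}" = "bin" ]; then
  echo "matched: ${f}"
fi
```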





[jira] [Updated] (HADOOP-12031) test-patch.sh should have an xml plugin

2015-05-29 Thread Kengo Seki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kengo Seki updated HADOOP-12031:

Attachment: HADOOP-12031.005.patch

I made a mistake. jrunscript is still shipped with Java 8 (though the internal 
engine is replaced with Nashorn), so version detection is unnecessary.

-05:

* always use jrunscript to simplify the code
* pre-check for the executable's existence
* tested on both Java 7 and 8 

I think it's ready for commit.

> test-patch.sh should have an xml plugin
> ---
>
> Key: HADOOP-12031
> URL: https://issues.apache.org/jira/browse/HADOOP-12031
> Project: Hadoop Common
>  Issue Type: Test
>  Components: build
>Reporter: Allen Wittenauer
>Assignee: Kengo Seki
>  Labels: newbie, test-patch
> Attachments: HADOOP-12031.001.patch, HADOOP-12031.002.patch, 
> HADOOP-12031.003.patch, HADOOP-12031.004.patch, HADOOP-12031.005.patch
>
>
> HADOOP-11178 demonstrates why there is a need to verify xml files on a patch 
> change.





[jira] [Commented] (HADOOP-11933) run test-patch.sh in a docker container under Jenkins

2015-05-29 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14565444#comment-14565444
 ] 

Ravi Prakash commented on HADOOP-11933:
---

I am a 0 on this change. I'd be a +1 with 
https://issues.apache.org/jira/browse/HADOOP-12024 . 
https://titanous.com/posts/docker-insecurity (although written a while ago) has spooked 
me enough to be careful. I'd request another pair of eyes.

> run test-patch.sh in a docker container under Jenkins
> -
>
> Key: HADOOP-11933
> URL: https://issues.apache.org/jira/browse/HADOOP-11933
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-11933.00.patch, HADOOP-11933.01.patch, 
> HADOOP-11933.02.patch, HADOOP-11933.03.patch, HADOOP-11933.04.patch, 
> HADOOP-11933.05.patch, HADOOP-11933.06.patch, HADOOP-11933.07.patch
>
>
> because of how hard it is to control the content of the Jenkins environment, 
> it would be beneficial to run it in a docker container so that we can have 
> tight control of the environment





[jira] [Commented] (HADOOP-11929) add test-patch plugin points for customizing build layout

2015-05-29 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14565403#comment-14565403
 ] 

Colin Patrick McCabe commented on HADOOP-11929:
---

bq. Allen wrote: I think it's worthwhile pointing out that test-patch is NOT 
meant to be the nightly build. It's meant to be an extremely quick check to see 
if the patch is relatively sane. It shouldn't be catching every possible 
problem with a patch; that's what integration tests are for. Hadoop has a bad 
culture of ignoring the nightly build, but it's increasingly important to catch 
some of these potential side-effects.

Allen, I could be misinterpreting here, but it sounds like you are advocating 
doing less testing in the precommit build and relying more on the nightly and 
weekly builds.  I strongly disagree with that sentiment.  It is much easier to 
catch bad changes before they go in than to clean up after the fact.  
Diagnosing a problem in the nightly build usually requires digging through the 
git history and sometimes even bisecting.  It's more work for everyone.  Plus, 
having the bad change in during the day causes problems for other developers.

Hadoop has always had robust precommit testing, and I think that's a very good 
thing.  If we should be doing anything, it is increasing the amount of 
precommit testing.

bq. Sean wrote: Right, this patch is expressly to stop trying to figure 
everything out in a clever way and just let the project declare what needs to 
happen.

Thanks for the clarification, Sean.  I agree that allowing people to explicitly 
declare dependencies could in theory lead to a faster build.  But what happens 
when the dependency rules are wrong?  Or people move a file, or create new 
files?  It seems like there is a high potential for mistakes to be made here.  
We could get into a situation where a piece of code wasn't being tested for 
months or even years (it's happened in the past...)

The whole premise of this change is that we should spend more human time 
building and maintaining a complex explicit dependency management 
infrastructure to save CPU time on the build cluster.  But that seems backwards 
to me.  CPU cycles are cheap and only getting cheaper.  Human time is very 
expensive, and (especially for the native build) hard to get.  I could point 
you to JIRAs for fixing problems in the native code that are years old.  
Sometimes for very simple and basic things.

I also think we could explore much simpler and more robust ways of saving time 
on the precommit build.  For example, we could parallelize our {{make}} 
invocations or set up a local mirror of the Maven jars to avoid downloading 
them from offsite.

> add test-patch plugin points for customizing build layout
> -
>
> Key: HADOOP-11929
> URL: https://issues.apache.org/jira/browse/HADOOP-11929
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Sean Busbey
>Assignee: Allen Wittenauer
>Priority: Minor
> Attachments: HADOOP-11929.00.patch, HADOOP-11929.01.patch, 
> HADOOP-11929.02.patch, HADOOP-11929.03.patch, hadoop.sh
>
>
> Sean Busbey and I had a chat about this at the Bug Bash. Here's the proposal:
>   * Introduce the concept of a 'personality module'.
>   * There can be only one personality.
>   * Personalities provide a single function that takes as input the name of 
> the test currently being processed
>   * This function uses two other built-in functions to define two queues: 
> maven module name and profiles to use against those maven module names
>   * If something needs to be compiled prior to this test (but not actually 
> tested), the personality will be responsible for doing that compilation
> In hadoop, the classic example is hadoop-hdfs needs common compiled with the 
> native bits. So prior to the javac tests, the personality would check 
> CHANGED_MODULES, see hadoop-hdfs, and compile common w/ -Pnative prior to 
> letting test-patch.sh do the work in hadoop-hdfs. Another example is our lack 
> of test coverage of various native bits. Since these require profiles to be 
> defined prior to compilation, the personality could see that something 
> touches native code, set the appropriate profile, and let test-patch.sh be on 
> its way.
> One way to think of it is some higher order logic on top of the automated 
> 'figure out what modules and what tests to run' functions.





[jira] [Commented] (HADOOP-12036) Consolidate all of the cmake extensions in one directory

2015-05-29 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14565374#comment-14565374
 ] 

Colin Patrick McCabe commented on HADOOP-12036:
---

bq. _REENTRANT is I believe better replaced by -pthread for gcc, as that 
ensures that both the appropriate preprocessor and compiler flags are set. 
However I wasn't sure that all the native code in Hadoop was threaded, so I felt 
it was better to explicitly specify this in each individual CMakeLists.txt file

All the native code is threaded.  Please set either {{\-pthread}} or 
{{_REENTRANT}} (it seems that -pthread simply sets {{_REENTRANT}}, on gcc at 
least).

bq. -D_LARGEFILE_SOURCE seems to be deprecated according to 
http://man7.org/linux/man-pages/man7/feature_test_macros.7.html: "New programs 
should not employ this macro; defining _XOPEN_SOURCE as just described or 
defining _FILE_OFFSET_BITS with the value 64 is the preferred mechanism to 
achieve the same result". _FILE_OFFSET_BITS=64 is already in the current CFLAGS 
but I believe even that can't be made a global option, as noted in the 
CMakeLists.txt for hadoop-yarn-project: "note: can't enable -D_LARGEFILE: see 
MAPREDUCE-4258"

Ha, I filed that jira years ago.  It's sad to see it hasn't been fixed yet.

Anyway, we want large file support for every other program that gets compiled.  
It needs to be in this common code you are working on.  
{{hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/CMakeLists.txt}}
 can simply set a variable which the common code will check, disabling 
large-file support if it is set.

bq. _GNU_SOURCE isn't appropriate as a compiler command-line flag, as discussed 
in HADOOP-11997

Please set {{_GNU_SOURCE}} on Linux.  On other OSes you can set whatever you 
need to get all the features of that OS.
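Pulled together, the flags discussed in this thread might look roughly like the fragment below in a shared cmake include. The file name (HadoopCommon.cmake) and the opt-out variable (HADOOP_DISABLE_LARGEFILE) are hypothetical illustrations, not the actual patch:

```cmake
# hypothetical HadoopCommon.cmake fragment, per the flags discussed above
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -pthread")   # implies _REENTRANT on gcc
add_definitions(-D_FILE_OFFSET_BITS=64)          # large-file support by default
if(CMAKE_SYSTEM_NAME STREQUAL "Linux")
    add_definitions(-D_GNU_SOURCE)               # Linux-only feature macro
endif()
# let a subproject (e.g. the nodemanager, see MAPREDUCE-4258) opt out
if(HADOOP_DISABLE_LARGEFILE)
    remove_definitions(-D_FILE_OFFSET_BITS=64)
endif()
```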

> Consolidate all of the cmake extensions in one directory
> 
>
> Key: HADOOP-12036
> URL: https://issues.apache.org/jira/browse/HADOOP-12036
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Allen Wittenauer
>Assignee: Alan Burlison
> Attachments: prototype01.txt
>
>
> Rather than have a half-dozen redefinitions, custom extensions, etc, we 
> should move them all to one location so that the cmake environment is 
> consistent between the various native components.





[jira] [Commented] (HADOOP-11804) POC Hadoop Client w/o transitive dependencies

2015-05-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14565370#comment-14565370
 ] 

Hadoop QA commented on HADOOP-11804:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  16m  4s | Findbugs (version 3.0.0) 
appears to be broken on trunk. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:red}-1{color} | javac |   7m 32s | The applied patch generated  1  
additional warning messages. |
| {color:green}+1{color} | javadoc |   9m 40s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 13s | The applied patch generated 
1 release audit warnings. |
| {color:red}-1{color} | checkstyle |   0m 48s | The applied patch generated  
147 new checkstyle issues (total was 0, now 147). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   5m 55s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   0m  8s | Post-patch findbugs 
hadoop-client compilation is broken. |
| {color:red}-1{color} | findbugs |   0m 16s | Post-patch findbugs 
hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal compilation is broken. |
| {color:red}-1{color} | findbugs |   0m 24s | Post-patch findbugs 
hadoop-maven-plugins compilation is broken. |
| {color:green}+1{color} | findbugs |   0m 24s | The patch does not introduce 
any new Findbugs (version ) warnings. |
| {color:red}-1{color} | native |   0m  8s | Failed to build the native portion 
 of hadoop-common prior to running the unit tests in  hadoop-client 
hadoop-maven-plugins  hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal |
| | |  41m 24s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12736190/HADOOP-11804.3.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 7817674 |
| javac | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6870/artifact/patchprocess/diffJavacWarnings.txt
 |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6870/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HADOOP-Build/6870/artifact/patchprocess/diffcheckstylehadoop-maven-plugins.txt
 |
| Java | 1.7.0_55 |
| uname | Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6870/console |


This message was automatically generated.

> POC Hadoop Client w/o transitive dependencies
> -
>
> Key: HADOOP-11804
> URL: https://issues.apache.org/jira/browse/HADOOP-11804
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Attachments: HADOOP-11804.1.patch, HADOOP-11804.2.patch, 
> HADOOP-11804.3.patch
>
>
> make a hadoop-client-api and hadoop-client-runtime that i.e. HBase can use to 
> talk with a Hadoop cluster without seeing any of the implementation 
> dependencies.
> see proposal on parent for details.





[jira] [Commented] (HADOOP-11804) POC Hadoop Client w/o transitive dependencies

2015-05-29 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14565280#comment-14565280
 ] 

Sean Busbey commented on HADOOP-11804:
--

Known issue:
  * some of the HBase test failures are because jaxb stuff isn't properly 
hidden in the shaded minicluster

> POC Hadoop Client w/o transitive dependencies
> -
>
> Key: HADOOP-11804
> URL: https://issues.apache.org/jira/browse/HADOOP-11804
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Attachments: HADOOP-11804.1.patch, HADOOP-11804.2.patch, 
> HADOOP-11804.3.patch
>
>
> make a hadoop-client-api and hadoop-client-runtime that i.e. HBase can use to 
> talk with a Hadoop cluster without seeing any of the implementation 
> dependencies.
> see proposal on parent for details.





[jira] [Commented] (HADOOP-11929) add test-patch plugin points for customizing build layout

2015-05-29 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14565219#comment-14565219
 ] 

Allen Wittenauer commented on HADOOP-11929:
---

BTW: I'm not particularly worried about the extra run time from enabling the C 
code (at least, based upon [~cnauroth]'s sketch in HADOOP-11937).  I've already 
more than made up that time by trimming large chunks of pre-check out.

> add test-patch plugin points for customizing build layout
> -
>
> Key: HADOOP-11929
> URL: https://issues.apache.org/jira/browse/HADOOP-11929
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Sean Busbey
>Assignee: Allen Wittenauer
>Priority: Minor
> Attachments: HADOOP-11929.00.patch, HADOOP-11929.01.patch, 
> HADOOP-11929.02.patch, HADOOP-11929.03.patch, hadoop.sh
>
>
> Sean Busbey and I had a chat about this at the Bug Bash. Here's the proposal:
>   * Introduce the concept of a 'personality module'.
>   * There can be only one personality.
>   * Personalities provide a single function that takes as input the name of 
> the test currently being processed
>   * This function uses two other built-in functions to define two queues: 
> maven module name and profiles to use against those maven module names
>   * If something needs to be compiled prior to this test (but not actually 
> tested), the personality will be responsible for doing that compilation
> In hadoop, the classic example is hadoop-hdfs needs common compiled with the 
> native bits. So prior to the javac tests, the personality would check 
> CHANGED_MODULES, see hadoop-hdfs, and compile common w/ -Pnative prior to 
> letting test-patch.sh do the work in hadoop-hdfs. Another example is our lack 
> of test coverage of various native bits. Since these require profiles to be 
> defined prior to compilation, the personality could see that something 
> touches native code, set the appropriate profile, and let test-patch.sh be on 
> its way.
> One way to think of it is some higher order logic on top of the automated 
> 'figure out what modules and what tests to run' functions.





[jira] [Commented] (HADOOP-11929) add test-patch plugin points for customizing build layout

2015-05-29 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14565192#comment-14565192
 ] 

Allen Wittenauer commented on HADOOP-11929:
---

I think it's worthwhile pointing out that test-patch is NOT meant to be the 
nightly build.  It's meant to be an extremely quick check to see if the patch 
is relatively sane.  It shouldn't be catching every possible problem with a 
patch; that's what integration tests are for.  Hadoop has a bad culture of 
ignoring the nightly build, but it's increasingly important to catch some of 
these potential side-effects.

> add test-patch plugin points for customizing build layout
> -
>
> Key: HADOOP-11929
> URL: https://issues.apache.org/jira/browse/HADOOP-11929
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Sean Busbey
>Assignee: Allen Wittenauer
>Priority: Minor
> Attachments: HADOOP-11929.00.patch, HADOOP-11929.01.patch, 
> HADOOP-11929.02.patch, HADOOP-11929.03.patch, hadoop.sh
>
>
> Sean Busbey and I had a chat about this at the Bug Bash. Here's the proposal:
>   * Introduce the concept of a 'personality module'.
>   * There can be only one personality.
>   * Personalities provide a single function that takes as input the name of 
> the test currently being processed
>   * This function uses two other built-in functions to define two queues: 
> maven module name and profiles to use against those maven module names
>   * If something needs to be compiled prior to this test (but not actually 
> tested), the personality will be responsible for doing that compilation
> In hadoop, the classic example is hadoop-hdfs needs common compiled with the 
> native bits. So prior to the javac tests, the personality would check 
> CHANGED_MODULES, see hadoop-hdfs, and compile common w/ -Pnative prior to 
> letting test-patch.sh do the work in hadoop-hdfs. Another example is our lack 
> of test coverage of various native bits. Since these require profiles to be 
> defined prior to compilation, the personality could see that something 
> touches native code, set the appropriate profile, and let test-patch.sh be on 
> its way.
> One way to think of it is some higher order logic on top of the automated 
> 'figure out what modules and what tests to run' functions.





[jira] [Commented] (HADOOP-11929) add test-patch plugin points for customizing build layout

2015-05-29 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14565180#comment-14565180
 ] 

Chris Nauroth commented on HADOOP-11929:


bq. We could maybe avoid building fuse-dfs, the native mapreduce stuff in 
trunk, libhadooppipes, and libwebhdfs unless a file in there had changed. Those 
subprojects are truly self-contained so that would work.

I disagree with this statement, at least for the codebase as it stands now.  
HDFS-8346 demonstrates that libwebhdfs has a dependency on libhdfs source 
files.  There also had been an incorrect assumption about being able to call 
libhadoop code.  fuse-dfs links to native_mini_dfs, so there is a chance that a 
change in libhdfs could break it.  I'm not sure about libhadooppipes and the 
native MR stuff.  I haven't looked at them in a while.

A counter-proposal could be to break up some of these unusual dependencies, or 
more formally define a "hadoop-native-common" for the sub-projects to 
statically link to.  For now though, I filed HADOOP-11937 to request full 
native builds so that we catch regressions like HDFS-8346 during pre-commit.

> add test-patch plugin points for customizing build layout
> -
>
> Key: HADOOP-11929
> URL: https://issues.apache.org/jira/browse/HADOOP-11929
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Sean Busbey
>Assignee: Allen Wittenauer
>Priority: Minor
> Attachments: HADOOP-11929.00.patch, HADOOP-11929.01.patch, 
> HADOOP-11929.02.patch, HADOOP-11929.03.patch, hadoop.sh
>
>
> Sean Busbey and I had a chat about this at the Bug Bash. Here's the proposal:
>   * Introduce the concept of a 'personality module'.
>   * There can be only one personality.
>   * Personalities provide a single function that takes as input the name of 
> the test currently being processed
>   * This function uses two other built-in functions to define two queues: 
> maven module name and profiles to use against those maven module names
>   * If something needs to be compiled prior to this test (but not actually 
> tested), the personality will be responsible for doing that compilation
> In hadoop, the classic example is hadoop-hdfs needs common compiled with the 
> native bits. So prior to the javac tests, the personality would check 
> CHANGED_MODULES, see hadoop-hdfs, and compile common w/ -Pnative prior to 
> letting test-patch.sh do the work in hadoop-hdfs. Another example is our lack 
> of test coverage of various native bits. Since these require profiles to be 
> defined prior to compilation, the personality could see that something 
> touches native code, set the appropriate profile, and let test-patch.sh be on 
> its way.
> One way to think of it is some higher order logic on top of the automated 
> 'figure out what modules and what tests to run' functions.





[jira] [Commented] (HADOOP-12038) SwiftNativeOutputStream should check whether a file exists or not before deleting

2015-05-29 Thread Chen He (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14565178#comment-14565178
 ] 

Chen He commented on HADOOP-12038:
--

Hi [~steve_l], thank you for the comment. I should have described it more clearly.

In the hadoop-openstack module, the documentation says there are no unit tests, 
only functional tests, since they depend on a Swift server. If no Swift server 
is available, none of those tests in the hadoop-openstack module will be executed.

I hit this issue while copying a large file to a Swift server. It returned this 
warning because the tmp file had already been deleted.

I will try to add a unit test following the same pattern as the previous tests.

> SwiftNativeOutputStream should check whether a file exists or not before 
> deleting
> -
>
> Key: HADOOP-12038
> URL: https://issues.apache.org/jira/browse/HADOOP-12038
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Chen He
>Assignee: Chen He
>Priority: Minor
> Attachments: HADOOP-12038.000.patch
>
>
> 15/05/27 15:27:03 WARN snative.SwiftNativeOutputStream: Could not delete 
> /tmp/hadoop-root/output-3695386887711395289.tmp
> It should check whether the file exists or not before deleting. 





[jira] [Commented] (HADOOP-11929) add test-patch plugin points for customizing build layout

2015-05-29 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14565171#comment-14565171
 ] 

Sean Busbey commented on HADOOP-11929:
--

{quote}
bq. so how is your "super-clever" dependency solver going to figure that out?

It doesn't. The user provides the ruleset. See hadoop.sh.
{quote}

Right, this patch is expressly to stop trying to figure everything out in a 
clever way and just let the project declare what needs to happen.

> add test-patch plugin points for customizing build layout
> -
>
> Key: HADOOP-11929
> URL: https://issues.apache.org/jira/browse/HADOOP-11929
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Sean Busbey
>Assignee: Allen Wittenauer
>Priority: Minor
> Attachments: HADOOP-11929.00.patch, HADOOP-11929.01.patch, 
> HADOOP-11929.02.patch, HADOOP-11929.03.patch, hadoop.sh
>
>
> Sean Busbey and I had a chat about this at the Bug Bash. Here's the proposal:
>   * Introduce the concept of a 'personality module'.
>   * There can be only one personality.
>   * Personalities provide a single function that takes as input the name of 
> the test current being processed
>   * This function uses two other built-in functions to define two queues: 
> maven module name and profiles to use against those maven module names
>   * If something needs to be compiled prior to this test (but not actually 
> tested), the personality will be responsible for doing that compilation
> In hadoop, the classic example is hadoop-hdfs needs common compiled with the 
> native bits. So prior to the javac tests, the personality would check 
> CHANGED_MODULES, see hadoop-hdfs, and compile common w/ -Pnative prior to 
> letting test-patch.sh do the work in hadoop-hdfs. Another example is our lack 
> of test coverage of various native bits. Since these require profiles to be 
> defined prior to compilation, the personality could see that something 
> touches native code, set the appropriate profile, and let test-patch.sh be on 
> its way.
> One way to think of it is some higher order logic on top of the automated 
> 'figure out what modules and what tests to run' functions.
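The proposal above could be sketched in bash roughly as follows. All names here (personality_modules, queue_module, CHANGED_MODULES, the two queue arrays) are illustrative assumptions; the actual hadoop.sh attachment is not reproduced in this thread.

```shell
# Hypothetical sketch of a personality function.  It receives the name of the
# test currently being processed and fills two parallel queues: maven module
# names and the profiles to use against them.
MAVEN_MODULE_QUEUE=()
MAVEN_PROFILE_QUEUE=()

queue_module() {
  # $1 = maven module, $2 = extra profile flags (may be empty)
  MAVEN_MODULE_QUEUE+=("$1")
  MAVEN_PROFILE_QUEUE+=("$2")
}

personality_modules() {
  local testtype=$1   # e.g. javac or unit; unused in this minimal sketch
  local module
  for module in ${CHANGED_MODULES}; do
    case "${module}" in
      hadoop-hdfs)
        # hdfs needs common compiled with the native bits first
        queue_module hadoop-common -Pnative
        queue_module hadoop-hdfs   -Pnative
        ;;
      *)
        queue_module "${module}" ""
        ;;
    esac
  done
}

CHANGED_MODULES="hadoop-hdfs"
personality_modules javac
```

After the call, test-patch would walk the two queues in order, building hadoop-common with -Pnative before running the actual checks on hadoop-hdfs.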





[jira] [Commented] (HADOOP-11929) add test-patch plugin points for customizing build layout

2015-05-29 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14565166#comment-14565166
 ] 

Allen Wittenauer commented on HADOOP-11929:
---

bq. so how is your "super-clever" dependency solver going to figure that out?

It doesn't.  The user provides the ruleset.  See hadoop.sh.




> add test-patch plugin points for customizing build layout
> -
>
> Key: HADOOP-11929
> URL: https://issues.apache.org/jira/browse/HADOOP-11929
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Sean Busbey
>Assignee: Allen Wittenauer
>Priority: Minor
> Attachments: HADOOP-11929.00.patch, HADOOP-11929.01.patch, 
> HADOOP-11929.02.patch, HADOOP-11929.03.patch, hadoop.sh
>
>
> Sean Busbey and I had a chat about this at the Bug Bash. Here's the proposal:
>   * Introduce the concept of a 'personality module'.
>   * There can be only one personality.
>   * Personalities provide a single function that takes as input the name of 
> the test currently being processed
>   * This function uses two other built-in functions to define two queues: 
> maven module name and profiles to use against those maven module names
>   * If something needs to be compiled prior to this test (but not actually 
> tested), the personality will be responsible for doing that compilation
> In hadoop, the classic example is hadoop-hdfs needs common compiled with the 
> native bits. So prior to the javac tests, the personality would check 
> CHANGED_MODULES, see hadoop-hdfs, and compile common w/ -Pnative prior to 
> letting test-patch.sh do the work in hadoop-hdfs. Another example is our lack 
> of test coverage of various native bits. Since these require profiles to be 
> defined prior to compilation, the personality could see that something 
> touches native code, set the appropriate profile, and let test-patch.sh be on 
> its way.
> One way to think of it is some higher order logic on top of the automated 
> 'figure out what modules and what tests to run' functions.





[jira] [Commented] (HADOOP-11929) add test-patch plugin points for customizing build layout

2015-05-29 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14565160#comment-14565160
 ] 

Colin Patrick McCabe commented on HADOOP-11929:
---

Hi [~aw], [~busbey],

We tried to be super-clever about detecting what modules to build in the past.  
It never worked.

The problem is there are hidden dependencies.  For example, if I change 
{{DomainSocketWatcher.java}}, I clearly want to build and test 
{{libhadoop.so}}, which contains the C domain socket code.  But no C files were 
changed, so how is your "super-clever" dependency solver going to figure that 
out?

Similarly I could change {{BZip2Codec.java}} and expect the native bzip code in 
{{Bzip2Compressor.c}} to be built and tested.  But again, there is no way for 
the build system to know that these are related.

Then there are even more subtle dependencies.  Let's say I make a change to a C 
file in hadoop-common.  Perhaps this changes a function that is only used in 
hadoop-hdfs-- for the sake of argument, let's say {{renameTo0}}.  But the 
hadoop-hdfs tests are not run since the dependency solver looks at the patch 
and says, "no files in hadoop-hdfs were changed, I'm done."

The only sane thing to do is to always build {{libhadoop.so}} and 
{{libhdfs.so}} no matter what, and always turn on all the options.  The options 
don't increase compilation time by any significant amount (if you don't believe 
me, benchmark it for yourself).

We could maybe avoid building fuse-dfs, the native mapreduce stuff in trunk, 
libhadooppipes, and libwebhdfs unless a file in there had changed.  Those 
subprojects are truly self-contained so that would work.  The native task stuff 
in particular is slow to compile, so that might actually be useful.  The rest 
of it I think we should just always build-- the build is flaky enough as-is.

> add test-patch plugin points for customizing build layout
> -
>
> Key: HADOOP-11929
> URL: https://issues.apache.org/jira/browse/HADOOP-11929
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Sean Busbey
>Assignee: Allen Wittenauer
>Priority: Minor
> Attachments: HADOOP-11929.00.patch, HADOOP-11929.01.patch, 
> HADOOP-11929.02.patch, HADOOP-11929.03.patch, hadoop.sh
>
>
> Sean Busbey and I had a chat about this at the Bug Bash. Here's the proposal:
>   * Introduce the concept of a 'personality module'.
>   * There can be only one personality.
>   * Personalities provide a single function that takes as input the name of 
> the test currently being processed
>   * This function uses two other built-in functions to define two queues: 
> maven module name and profiles to use against those maven module names
>   * If something needs to be compiled prior to this test (but not actually 
> tested), the personality will be responsible for doing that compilation
> In hadoop, the classic example is hadoop-hdfs needs common compiled with the 
> native bits. So prior to the javac tests, the personality would check 
> CHANGED_MODULES, see hadoop-hdfs, and compile common w/ -Pnative prior to 
> letting test-patch.sh do the work in hadoop-hdfs. Another example is our lack 
> of test coverage of various native bits. Since these require profiles to be 
> defined prior to compilation, the personality could see that something 
> touches native code, set the appropriate profile, and let test-patch.sh be on 
> its way.
> One way to think of it is some higher order logic on top of the automated 
> 'figure out what modules and what tests to run' functions.





[jira] [Commented] (HADOOP-11885) hadoop-dist dist-layout-stitching.sh does not work with dash

2015-05-29 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14565099#comment-14565099
 ] 

Colin Patrick McCabe commented on HADOOP-11885:
---

Thanks, Andrew.  +1.

> hadoop-dist dist-layout-stitching.sh does not work with dash
> 
>
> Key: HADOOP-11885
> URL: https://issues.apache.org/jira/browse/HADOOP-11885
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.7.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>  Labels: BB2015-05-TBR
> Attachments: hadoop-11885.001.patch, hadoop-11885.002.patch
>
>
> Saw this while building the EC branch, pretty sure it'll repro on trunk 
> though too.
> {noformat}
>  [exec] ./dist-layout-stitching.sh: 53: [: bin: unexpected operator
>  [exec] ./dist-layout-stitching.sh: 53: [: etc: unexpected operator
>  [exec] ./dist-layout-stitching.sh: 53: [: lib: unexpected operator
>  [exec] ./dist-layout-stitching.sh: 53: [: libexec: unexpected operator
>  [exec] ./dist-layout-stitching.sh: 53: [: sbin: unexpected operator
>  [exec] ./dist-layout-stitching.sh: 53: [: share: unexpected operator
>  [exec] ./dist-layout-stitching.sh: 53: [: share: unexpected operator
>  [exec] ./dist-layout-stitching.sh: 53: [: bin: unexpected operator
>  [exec] ./dist-layout-stitching.sh: 53: [: etc: unexpected operator
>  [exec] ./dist-layout-stitching.sh: 53: [: include: unexpected operator
>  [exec] ./dist-layout-stitching.sh: 53: [: lib: unexpected operator
>  [exec] ./dist-layout-stitching.sh: 53: [: libexec: unexpected operator
>  [exec] ./dist-layout-stitching.sh: 53: [: sbin: unexpected operator
>  [exec] ./dist-layout-stitching.sh: 53: [: share: unexpected operator
>  [exec] $ copy 
> /home/andrew/dev/hadoop/hdfs-7285/hadoop-common-project/hadoop-nfs/target/hadoop-nfs-3.0.0-SNAPSHOT
>  .
>  [exec] $ copy 
> /home/andrew/dev/hadoop/hdfs-7285/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-3.0.0-SNAPSHOT
>  .
>  [exec] $ copy 
> /home/andrew/dev/hadoop/hdfs-7285/hadoop-hdfs-project/hadoop-hdfs-nfs/target/hadoop-hdfs-nfs-3.0.0-SNAPSHOT
>  .
>  [exec] $ copy 
> /home/andrew/dev/hadoop/hdfs-7285/hadoop-yarn-project/target/hadoop-yarn-project-3.0.0-SNAPSHOT
>  .
>  [exec] ./dist-layout-stitching.sh: 53: [: share: unexpected operator
>  [exec] ./dist-layout-stitching.sh: 53: [: bin: unexpected operator
>  [exec] ./dist-layout-stitching.sh: 53: [: etc: unexpected operator
>  [exec] ./dist-layout-stitching.sh: 53: [: libexec: unexpected operator
>  [exec] ./dist-layout-stitching.sh: 53: [: sbin: unexpected operator
>  [exec] $ copy 
> /home/andrew/dev/hadoop/hdfs-7285/hadoop-mapreduce-project/target/hadoop-mapreduce-3.0.0-SNAPSHOT
>  .
>  [exec] $ copy 
> /home/andrew/dev/hadoop/hdfs-7285/hadoop-tools/hadoop-tools-dist/target/hadoop-tools-dist-3.0.0-SNAPSHOT
>  .
>  [exec] ./dist-layout-stitching.sh: 53: [: share: unexpected operator
>  [exec] ./dist-layout-stitching.sh: 53: [: bin: unexpected operator
>  [exec] ./dist-layout-stitching.sh: 53: [: etc: unexpected operator
>  [exec] ./dist-layout-stitching.sh: 53: [: libexec: unexpected operator
>  [exec] ./dist-layout-stitching.sh: 53: [: sbin: unexpected operator
>  [exec] ./dist-layout-stitching.sh: 53: [: share: unexpected operator
>  [exec] ./dist-layout-stitching.sh: 53: [: include: unexpected operator
>  [exec] ./dist-layout-stitching.sh: 53: [: share: unexpected operator
> {noformat}
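The `unexpected operator` messages are how dash's `[` builtin reports the bash-only `==` operator; POSIX `test` defines only `=`. A minimal reproduction of the portable spellings (line 53 of the real script is not shown in this thread, so the exact expression is an assumption):

```shell
d="bin"

# Bashism (what line 53 presumably looked like):  [ "$d" == "bin" ]
# bash accepts ==, but dash's [ builtin prints "[: bin: unexpected operator".

# POSIX-portable comparison works in both shells:
if [ "$d" = "bin" ]; then
  portable_result="matched"
fi

# case is equally portable and sidesteps the operator issue entirely:
case "$d" in
  bin|sbin) case_result="matched" ;;
  *)        case_result="no match" ;;
esac
```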





[jira] [Updated] (HADOOP-11804) POC Hadoop Client w/o transitive dependencies

2015-05-29 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HADOOP-11804:
-
Status: Patch Available  (was: Open)

Submitting the patch just to get a QA look at checkstyle / whitespace. The YARN 
issue needs to be fixed before this should be pushed.

> POC Hadoop Client w/o transitive dependencies
> -
>
> Key: HADOOP-11804
> URL: https://issues.apache.org/jira/browse/HADOOP-11804
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Attachments: HADOOP-11804.1.patch, HADOOP-11804.2.patch, 
> HADOOP-11804.3.patch
>
>
> make a hadoop-client-api and hadoop-client-runtime that e.g. HBase can use to 
> talk with a Hadoop cluster without seeing any of the implementation 
> dependencies.
> see proposal on parent for details.





[jira] [Updated] (HADOOP-11804) POC Hadoop Client w/o transitive dependencies

2015-05-29 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HADOOP-11804:
-
Attachment: HADOOP-11804.3.patch

Updated patch, now that HBase master works with Hadoop branch-2 again.

-03

  * includes a local fork of MSHADE-182 (to handle java services relocation)
  * properly relocate third party references in the client-api bytecode
  * avoid relocating references to JDK packages that are in com.sun
  * avoid shaded jersey binding to javax services

Known issues right now

  * logging implementations are still relocated
  * HTrace is relocated
  ** because of how htrace is implemented, this prevents starting a span in a 
client app and having the client continue with it as a parent span.
  * some other HBase second-pass tests fail post-rebase that I am still 
tracking down
  ** Notably, all the HBase MR tests fail because the shaded minicluster YARN 
has some startup problem related to jersey

> POC Hadoop Client w/o transitive dependencies
> -
>
> Key: HADOOP-11804
> URL: https://issues.apache.org/jira/browse/HADOOP-11804
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Attachments: HADOOP-11804.1.patch, HADOOP-11804.2.patch, 
> HADOOP-11804.3.patch
>
>
> make a hadoop-client-api and hadoop-client-runtime that e.g. HBase can use to 
> talk with a Hadoop cluster without seeing any of the implementation 
> dependencies.
> see proposal on parent for details.





[jira] [Commented] (HADOOP-11142) Remove hdfs dfs reference from file system shell documentation

2015-05-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14564951#comment-14564951
 ] 

Hudson commented on HADOOP-11142:
-

SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #210 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/210/])
HADOOP-11142. Remove hdfs dfs reference from file system shell documentation 
(Kengo Seki via aw) (aw: rev cbba7d68f0c7faf2b3bab41fd1694dd626db6492)
* hadoop-common-project/hadoop-common/src/site/markdown/FileSystemShell.md
* hadoop-common-project/hadoop-common/CHANGES.txt


> Remove hdfs dfs reference from file system shell documentation
> --
>
> Key: HADOOP-11142
> URL: https://issues.apache.org/jira/browse/HADOOP-11142
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Jonathan Allen
>Assignee: Kengo Seki
>Priority: Minor
>  Labels: newbie
> Fix For: 3.0.0
>
> Attachments: HADOOP-11142.001.patch
>
>
> The File System Shell documentation references {{hdfs dfs}} in all of the 
> examples. The FS shell is not specific about the underlying file system and 
> so shouldn't reference hdfs. The correct usage should be {{hadoop fs}}.





[jira] [Commented] (HADOOP-11959) WASB should configure client side socket timeout in storage client blob request options

2015-05-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14564950#comment-14564950
 ] 

Hudson commented on HADOOP-11959:
-

SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #210 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/210/])
HADOOP-11959. WASB should configure client side socket timeout in storage 
client blob request options. Contributed by Ivan Mitic. (cnauroth: rev 
94e7d49a6dab7e7f4e873dcca67e7fcc98e7e1f8)
* 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/TestBlobDataValidation.java
* hadoop-project/pom.xml
* 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/AzureNativeFileSystemStore.java
* 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/StorageInterfaceImpl.java
* 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/MockStorageInterface.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/TestAzureFileSystemErrorConditions.java
* 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/StorageInterface.java


> WASB should configure client side socket timeout in storage client blob 
> request options
> ---
>
> Key: HADOOP-11959
> URL: https://issues.apache.org/jira/browse/HADOOP-11959
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Reporter: Ivan Mitic
>Assignee: Ivan Mitic
> Fix For: 2.8.0
>
> Attachments: HADOOP-11959.2.patch, HADOOP-11959.patch
>
>
> On clusters/jobs where {{mapred.task.timeout}} is set to a larger value, we 
> noticed that tasks can sometimes get stuck on the below stack.
> {code}
> Thread 1: (state = IN_NATIVE)
> - java.net.SocketInputStream.socketRead0(java.io.FileDescriptor, byte[], int, 
> int, int) @bci=0 (Interpreted frame)
> - java.net.SocketInputStream.read(byte[], int, int, int) @bci=87, line=152 
> (Interpreted frame)
> - java.net.SocketInputStream.read(byte[], int, int) @bci=11, line=122 
> (Interpreted frame)
> - java.io.BufferedInputStream.fill() @bci=175, line=235 (Interpreted frame)
> - java.io.BufferedInputStream.read1(byte[], int, int) @bci=44, line=275 
> (Interpreted frame)
> - java.io.BufferedInputStream.read(byte[], int, int) @bci=49, line=334 
> (Interpreted frame)
> - sun.net.www.MeteredStream.read(byte[], int, int) @bci=16, line=134 
> (Interpreted frame)
> - java.io.FilterInputStream.read(byte[], int, int) @bci=7, line=133 
> (Interpreted frame)
> - sun.net.www.protocol.http.HttpURLConnection$HttpInputStream.read(byte[], 
> int, int) @bci=4, line=3053 (Interpreted frame)
> - com.microsoft.azure.storage.core.NetworkInputStream.read(byte[], int, int) 
> @bci=7, line=49 (Interpreted frame)
> - 
> com.microsoft.azure.storage.blob.CloudBlob$10.postProcessResponse(java.net.HttpURLConnection,
>  com.microsoft.azure.storage.blob.CloudBlob, com.microsoft.azure
> .storage.blob.CloudBlobClient, com.microsoft.azure.storage.OperationContext, 
> java.lang.Integer) @bci=204, line=1691 (Interpreted frame)
> - 
> com.microsoft.azure.storage.blob.CloudBlob$10.postProcessResponse(java.net.HttpURLConnection,
>  java.lang.Object, java.lang.Object, com.microsoft.azure.storage
> .OperationContext, java.lang.Object) @bci=17, line=1613 (Interpreted frame)
> - 
> com.microsoft.azure.storage.core.ExecutionEngine.executeWithRetry(java.lang.Object,
>  java.lang.Object, com.microsoft.azure.storage.core.StorageRequest, com.mi
> crosoft.azure.storage.RetryPolicyFactory, 
> com.microsoft.azure.storage.OperationContext) @bci=352, line=148 (Interpreted 
> frame)
> - com.microsoft.azure.storage.blob.CloudBlob.downloadRangeInternal(long, 
> java.lang.Long, byte[], int, com.microsoft.azure.storage.AccessCondition, 
> com.microsof
> t.azure.storage.blob.BlobRequestOptions, 
> com.microsoft.azure.storage.OperationContext) @bci=131, line=1468 
> (Interpreted frame)
> - com.microsoft.azure.storage.blob.BlobInputStream.dispatchRead(int) @bci=31, 
> line=255 (Interpreted frame)
> - com.microsoft.azure.storage.blob.BlobInputStream.readInternal(byte[], int, 
> int) @bci=52, line=448 (Interpreted frame)
> - com.microsoft.azure.storage.blob.BlobInputStream.read(byte[], int, int) 
> @bci=28, line=420 (Interpreted frame)
> - java.io.BufferedInputStream.read1(byte[], int, int) @bci=39, line=273 
> (Interpreted frame)
> - java.io.BufferedInputStream.read(byte[], int, int) @bci=49, line=334 
> (Interpreted frame)
> - java.io.DataInputStream.read(byte[], int, int) @bci=7, line=149 
> (Interpreted frame)
> - 
> org.apache.hadoop.fs.azure.NativeAzureFileSystem$NativeAzureFsInputStream.read(byte[],
>  int, int) @bci=10, line=734 (Interpreted frame)
> - java.io.BufferedInputStream.read1(byte[], int, int) @bci

[jira] [Commented] (HADOOP-11983) HADOOP_USER_CLASSPATH_FIRST works the opposite of what it is supposed to do

2015-05-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14564947#comment-14564947
 ] 

Hudson commented on HADOOP-11983:
-

SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #210 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/210/])
HADOOP-11983. HADOOP_USER_CLASSPATH_FIRST works the opposite of what it is 
supposed to do (Sangjin Lee via aw) (aw: rev 
08ae87f6ba334bd3442234e65b93495fe312cbb7)
* hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh
* hadoop-common-project/hadoop-common/CHANGES.txt


> HADOOP_USER_CLASSPATH_FIRST works the opposite of what it is supposed to do
> ---
>
> Key: HADOOP-11983
> URL: https://issues.apache.org/jira/browse/HADOOP-11983
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
> Fix For: 3.0.0
>
> Attachments: HADOOP-11983.001.patch
>
>
> The behavior of HADOOP_USER_CLASSPATH_FIRST works the opposite of what it 
> should do. If it is not set, HADOOP_CLASSPATH is prepended. If set, it is 
> appended.
> You can easily try out by doing something like
> {noformat}
> HADOOP_CLASSPATH=/Users/alice/tmp hadoop classpath
> {noformat}
> (HADOOP_CLASSPATH should point to an existing directory)
> I think the if clause in hadoop_add_to_classpath_userpath is reversed.
> This issue seems specific to the trunk.
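The intended semantics can be sketched as follows; `add_userpath` is a simplified stand-in for the real hadoop_add_to_classpath_userpath, not its actual code:

```shell
# Prepend the user's classpath when HADOOP_USER_CLASSPATH_FIRST is set,
# append it otherwise.  The reported bug had this condition reversed.
add_userpath() {
  if [ -n "${HADOOP_USER_CLASSPATH_FIRST}" ]; then
    CLASSPATH="${HADOOP_CLASSPATH}:${CLASSPATH}"
  else
    CLASSPATH="${CLASSPATH}:${HADOOP_CLASSPATH}"
  fi
}

CLASSPATH="/opt/hadoop/share"
HADOOP_CLASSPATH="/Users/alice/tmp"
HADOOP_USER_CLASSPATH_FIRST=yes
add_userpath   # CLASSPATH now starts with /Users/alice/tmp
```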





[jira] [Commented] (HADOOP-12022) fix site -Pdocs -Pdist in hadoop-project-dist; cleanout remaining forrest bits

2015-05-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14564961#comment-14564961
 ] 

Hudson commented on HADOOP-12022:
-

SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #210 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/210/])
 HADOOP-12022. fix site -Pdocs -Pdist in hadoop-project-dist; cleanout 
remaining forrest bits (aw) (aw: rev ae1454342064c71f414d20ad0885e60a335c7420)
* hadoop-common-project/hadoop-common/pom.xml
* 
hadoop-common-project/hadoop-common/src/main/docs/changes/ChangesFancyStyle.css
* 
hadoop-common-project/hadoop-common/src/main/docs/src/documentation/resources/images/hdfsarchitecture.odg
* hadoop-common-project/hadoop-common/src/main/docs/changes/changes2html.pl
* 
hadoop-common-project/hadoop-common/src/main/docs/src/documentation/resources/images/hdfsdatanodes.gif
* 
hadoop-common-project/hadoop-common/src/main/docs/src/documentation/skinconf.xml
* 
hadoop-common-project/hadoop-common/src/main/docs/src/documentation/classes/CatalogManager.properties
* hadoop-common-project/hadoop-common/src/main/docs/releasenotes.html
* 
hadoop-common-project/hadoop-common/src/main/docs/src/documentation/resources/images/architecture.gif
* 
hadoop-common-project/hadoop-common/src/main/docs/src/documentation/resources/images/core-logo.gif
* 
hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/index.xml
* hadoop-common-project/hadoop-common/src/main/docs/src/documentation/README.txt
* hadoop-project-dist/pom.xml
* hadoop-common-project/hadoop-common/src/main/docs/status.xml
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/docs/src/documentation/resources/images/hdfsdatanodes.png
* 
hadoop-common-project/hadoop-common/src/main/docs/src/documentation/resources/images/hadoop-logo.jpg
* 
hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/site.xml
* 
hadoop-common-project/hadoop-common/src/main/docs/src/documentation/resources/images/hdfsarchitecture.gif
* 
hadoop-common-project/hadoop-common/src/main/docs/src/documentation/conf/cli.xconf
* 
hadoop-common-project/hadoop-common/src/main/docs/src/documentation/resources/images/favicon.ico
* 
hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/tabs.xml
* 
hadoop-common-project/hadoop-common/src/main/docs/src/documentation/resources/images/hdfsdatanodes.odg
* hadoop-hdfs-project/hadoop-hdfs/pom.xml
* 
hadoop-common-project/hadoop-common/src/main/docs/src/documentation/resources/images/common-logo.jpg
* 
hadoop-common-project/hadoop-common/src/main/docs/changes/ChangesSimpleStyle.css
* 
hadoop-common-project/hadoop-common/src/main/docs/src/documentation/resources/images/hdfsarchitecture.png
* 
hadoop-common-project/hadoop-common/src/main/docs/src/documentation/resources/images/hadoop-logo-big.jpg


> fix site -Pdocs -Pdist in hadoop-project-dist; cleanout remaining forrest bits
> --
>
> Key: HADOOP-12022
> URL: https://issues.apache.org/jira/browse/HADOOP-12022
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Fix For: 3.0.0
>
> Attachments: HADOOP-12011.00.patch, HADOOP-12011.01.patch, 
> HADOOP-12011.02.patch
>
>
> Between HDFS-8350 and probably other changes, it would appear site -Pdocs 
> -Pdist no longer works and is breaking the nightly build.





[jira] [Commented] (HADOOP-7947) Validate XMLs if a relevant tool is available, when using scripts

2015-05-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14564963#comment-14564963
 ] 

Hudson commented on HADOOP-7947:


SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #210 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/210/])
HADOOP-7947. Validate XMLs if a relevant tool is available, when using scripts 
(Kengo Seki via aw) (aw: rev 5df1fadf874f3f0176f6b36b8ff7317edd63770f)
* hadoop-common-project/hadoop-common/src/main/bin/hadoop
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ConfTest.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestConfTest.java


> Validate XMLs if a relevant tool is available, when using scripts
> -
>
> Key: HADOOP-7947
> URL: https://issues.apache.org/jira/browse/HADOOP-7947
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: scripts
>Affects Versions: 2.7.0
>Reporter: Harsh J
>Assignee: Kengo Seki
>  Labels: newbie
> Fix For: 3.0.0
>
> Attachments: HADOOP-7947.001.patch, HADOOP-7947.002.patch, 
> HADOOP-7947.003.patch, HADOOP-7947.004.patch, HADOOP-7947.005.patch
>
>
> Given that we are locked down to using only XML for configuration and most of 
> the administrators need to manage it by themselves (unless a tool that 
> manages for you is used), it would be good to also validate the provided 
> config XML (*-site.xml) files with a tool like {{xmllint}} or maybe Xerces 
> somehow, when running a command or (at least) when starting up daemons.
> We should use this only if a relevant tool is available, and optionally be 
> silent if the env. requests.
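The committed change went through a Java ConfTest class (see the file list above), but the "only if a relevant tool is available, and optionally silent" behaviour from the description can be sketched in shell with xmllint. Paths and variable names here are illustrative:

```shell
# Validate a config XML only when a validator is present; skip silently if not.
conf=$(mktemp)
printf '<?xml version="1.0"?>\n<configuration/>\n' > "${conf}"

if command -v xmllint >/dev/null 2>&1; then
  if xmllint --noout "${conf}" 2>/dev/null; then
    xml_status="valid"
  else
    xml_status="invalid"
  fi
else
  # no validator available: stay silent rather than fail daemon startup
  xml_status="skipped"
fi
rm -f "${conf}"
```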





[jira] [Commented] (HADOOP-11406) xargs -P is not portable

2015-05-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14564959#comment-14564959
 ] 

Hudson commented on HADOOP-11406:
-

SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #210 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/210/])
HADOOP-11406. xargs -P is not portable (Kengo Seki via aw) (aw: rev 
5504a261f829cf2e7b70970246bf5a55c172be84)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/conf/hadoop-user-functions.sh.example
* hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh


> xargs -P is not portable
> 
>
> Key: HADOOP-11406
> URL: https://issues.apache.org/jira/browse/HADOOP-11406
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0
> Environment: Solaris
> Illumos
> AIX
> ... likely others
>Reporter: Allen Wittenauer
>Assignee: Kengo Seki
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: HADOOP-11406.001.patch, HADOOP-11406.002.patch, 
> HADOOP-11406.002.patch
>
>
> hadoop-functions.sh uses xargs -P in the ssh handler.  -P is a GNU extension 
> and is not available on all operating systems.  We should add some detection 
> for support and perform an appropriate action.
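One possible shape for that detection: probe xargs once, then fall back to a plain serial loop when -P is unsupported. Everything below (run_on_hosts, the echo stand-in for ssh) is illustrative, not the committed fix:

```shell
# Probe whether this xargs understands -P (a GNU extension).
if printf '' | xargs -P 1 true >/dev/null 2>&1; then
  xargs_parallel_ok=true
else
  xargs_parallel_ok=false
fi

run_on_hosts() {
  # $@ = hostnames; echo stands in for the real ssh invocation
  if [ "${xargs_parallel_ok}" = true ]; then
    printf '%s\n' "$@" | xargs -P 4 -I {} echo "would ssh {}"
  else
    # portable fallback: serial loop, works everywhere
    for h in "$@"; do
      echo "would ssh $h"
    done
  fi
}

out=$(run_on_hosts node1 node2)
```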





[jira] [Commented] (HADOOP-11894) Bump the version of Apache HTrace to 3.2.0-incubating

2015-05-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14564949#comment-14564949
 ] 

Hudson commented on HADOOP-11894:
-

SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #210 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/210/])
HADOOP-11894. Bump the version of Apache HTrace to 3.2.0-incubating (Masatake 
Iwasaki via Colin P. McCabe) (cmccabe: rev 
a1140959dab3f35accbd6c66abfa14f94ff7dcec)
* hadoop-project/pom.xml
* hadoop-common-project/hadoop-common/src/site/markdown/Tracing.md
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/tracing/SpanReceiverHost.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> Bump the version of Apache HTrace to 3.2.0-incubating
> -
>
> Key: HADOOP-11894
> URL: https://issues.apache.org/jira/browse/HADOOP-11894
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.8.0
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
> Fix For: 2.8.0
>
> Attachments: HADOOP-11894.001.patch, HADOOP-11894.002.patch, 
> HADOOP-11894.003.patch
>
>
> * update pom.xml
> * update documentation
> * replace {{addKVAnnotation(byte[] key, byte[] value)}} with 
> {{addKVAnnotation(String key, String value)}}
> * replace {{SpanReceiverHost#getUniqueLocalTraceFileName}} with 
> {{LocalFileSpanReceiver#getUniqueLocalTraceFileName}}





[jira] [Commented] (HADOOP-12004) test-patch breaks with reexec in certain situations

2015-05-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14564952#comment-14564952
 ] 

Hudson commented on HADOOP-12004:
-

SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #210 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/210/])
HADOOP-12004. test-patch breaks with reexec in certain situations (Sean Busbey 
via aw) (aw: rev 87c26f8b7d949f85fd3dc511f592112e1504e0a2)
* dev-support/test-patch.sh
* hadoop-common-project/hadoop-common/CHANGES.txt


> test-patch breaks with reexec in certain situations
> ---
>
> Key: HADOOP-12004
> URL: https://issues.apache.org/jira/browse/HADOOP-12004
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Allen Wittenauer
>Assignee: Sean Busbey
>Priority: Critical
> Fix For: 2.8.0
>
> Attachments: HADOOP-12004.1.patch
>
>
> Looks like HADOOP-11911 forgot the equal sign:
> {code}
>   exec "${PATCH_DIR}/dev-support-test/test-patch.sh" \
> --reexec \
> --branch "${PATCH_BRANCH}" \
> --patch-dir="${PATCH_DIR}" \
>   "${USER_PARAMS[@]}"
> {code}
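The missing equals sign matters because test-patch parses long options with case patterns of the form `--patch-dir=*`; without the `=` the argument never matches and the patch directory is silently lost on re-exec. A minimal reproduction with a simplified parser (not the actual test-patch.sh option loop):

```shell
parse_args() {
  PATCH_DIR_PARSED=""
  local i
  for i in "$@"; do
    case "${i}" in
      --patch-dir=*) PATCH_DIR_PARSED=${i#*=} ;;
    esac
  done
}

parse_args --reexec --patch-dir "/tmp/patch-dir"   # no '=': never matches
broken="${PATCH_DIR_PARSED}"

parse_args --reexec --patch-dir=/tmp/patch-dir     # with '=': parsed correctly
fixed="${PATCH_DIR_PARSED}"
```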





[jira] [Commented] (HADOOP-11930) test-patch in offline mode should tell maven to be in offline mode

2015-05-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14564960#comment-14564960
 ] 

Hudson commented on HADOOP-11930:
-

SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #210 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/210/])
HADOOP-11930. test-patch in offline mode should tell maven to be in offline 
mode (Sean Busbey via aw) (aw: rev 7ebe80ec12a91602b4dcdafb4e3a75def6035ad6)
* hadoop-common-project/hadoop-common/CHANGES.txt
* dev-support/test-patch.sh


> test-patch in offline mode should tell maven to be in offline mode
> --
>
> Key: HADOOP-11930
> URL: https://issues.apache.org/jira/browse/HADOOP-11930
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Fix For: 2.8.0
>
> Attachments: HADOOP-11930.1.patch, HADOOP-11930.2.patch
>
>
> when we use --offline for test-patch, we should also flag maven to be offline 
> so that it doesn't attempt to talk to the internet.
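The change amounts to threading the offline flag through to maven. A sketch with illustrative variable names (`OFFLINE` and `MAVEN_ARGS` are stand-ins, not necessarily the script's actual variables):

```shell
# When the wrapper script runs offline, pass maven's --offline (-o) flag
# through so maven never tries to contact remote repositories.
OFFLINE=true
MAVEN_ARGS=()
if [[ "${OFFLINE}" == "true" ]]; then
  MAVEN_ARGS+=("--offline")
fi
# every later maven invocation picks the flag up, e.g.:
echo mvn "${MAVEN_ARGS[@]}" clean test   # prints: mvn --offline clean test
```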





[jira] [Commented] (HADOOP-12042) Users may see TrashPolicy if hdfs dfs -rm is run

2015-05-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14564964#comment-14564964
 ] 

Hudson commented on HADOOP-12042:
-

SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #210 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/210/])
HADOOP-12042. Users may see TrashPolicy if hdfs dfs -rm is run (Contributed by 
Andreina J) (vinayakumarb: rev 7366e4256395ed7550702275d0d9f2674bd43d6c)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/TrashPolicyDefault.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> Users may see TrashPolicy if hdfs dfs -rm is run
> 
>
> Key: HADOOP-12042
> URL: https://issues.apache.org/jira/browse/HADOOP-12042
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: J.Andreina
> Fix For: 2.8.0
>
> Attachments: HDFS-6775.1.patch, HDFS-6775.2.patch
>
>
> Doing 'hdfs dfs -rm file' generates an extra log message on the console:
> {code}
> 14/07/29 15:18:56 INFO fs.TrashPolicyDefault: Namenode trash configuration: 
> Deletion interval = 0 minutes, Emptier interval = 0 minutes.
> {code}
> This shouldn't be seen by users.





[jira] [Commented] (HADOOP-12030) test-patch should only report on newly introduced findbugs warnings.

2015-05-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14564962#comment-14564962
 ] 

Hudson commented on HADOOP-12030:
-

SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #210 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/210/])
HADOOP-12030. test-patch should only report on newly introduced findbugs 
warnings. (Sean Busbey via aw) (aw: rev 
b01d33cf862a34f9988584d3d1f3995118110b90)
* hadoop-common-project/hadoop-common/CHANGES.txt
* dev-support/test-patch.sh


> test-patch should only report on newly introduced findbugs warnings.
> 
>
> Key: HADOOP-12030
> URL: https://issues.apache.org/jira/browse/HADOOP-12030
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>  Labels: test-patch
> Fix For: 2.8.0
>
> Attachments: HADOOP-12030.1.patch, HADOOP-12030.2.patch
>
>
> findbugs is currently reporting the total number of findbugs warnings for 
> touched modules rather than just newly introduced bugs.
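One way to sketch the intended behavior (the report file names are illustrative, and a real implementation matches individual bug instances when diffing reports rather than comparing raw counts, which this simplification does):

```shell
# Count findbugs warnings before and after the patch and report only the
# increase. Each BugInstance element in a findbugs XML report is one warning.
count_warnings() {
  grep -c '<BugInstance' "$1"
}

# tiny sample reports standing in for the real branch/patch findbugs runs
printf '<BugInstance/>\n' > branch-findbugs.xml
printf '<BugInstance/>\n<BugInstance/>\n<BugInstance/>\n' > patch-findbugs.xml

branch_count=$(count_warnings branch-findbugs.xml)
patch_count=$(count_warnings patch-findbugs.xml)
new_warnings=$((patch_count - branch_count))

if [[ ${new_warnings} -gt 0 ]]; then
  echo "-1 patch introduces ${new_warnings} new findbugs warning(s)."
else
  echo "+1 no new findbugs warnings."
fi
```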





[jira] [Commented] (HADOOP-11934) Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop

2015-05-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14564948#comment-14564948
 ] 

Hudson commented on HADOOP-11934:
-

SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #210 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/210/])
HADOOP-11934. Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite 
loop. Contributed by Larry McCay. (cnauroth: rev 
860b8373c3a851386b8cd2d4265dd35e5aabc941)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/AbstractJavaKeyStoreProvider.java
* 
hadoop-common-project/hadoop-common/src/main/resources/META-INF/services/org.apache.hadoop.security.alias.CredentialProviderFactory
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/LocalJavaKeyStoreProvider.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/JavaKeyStoreProvider.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/alias/TestCredentialProviderFactory.java


> Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop
> -
>
> Key: HADOOP-11934
> URL: https://issues.apache.org/jira/browse/HADOOP-11934
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Mike Yoder
>Assignee: Larry McCay
>Priority: Blocker
> Fix For: 2.7.1
>
> Attachments: HADOOP-11934-11.patch, HADOOP-11934.001.patch, 
> HADOOP-11934.002.patch, HADOOP-11934.003.patch, HADOOP-11934.004.patch, 
> HADOOP-11934.005.patch, HADOOP-11934.006.patch, HADOOP-11934.007.patch, 
> HADOOP-11934.008.patch, HADOOP-11934.009.patch, HADOOP-11934.010.patch, 
> HADOOP-11934.012.patch, HADOOP-11934.013.patch
>
>
> I was attempting to use the LdapGroupsMapping code and the 
> JavaKeyStoreProvider at the same time, and hit a really interesting, yet 
> fatal, issue.  The code goes into what ought to have been an infinite loop, 
> were it not for it overflowing the stack and Java ending the loop.  Here is a 
> snippet of the stack; my annotations are at the bottom.
> {noformat}
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
>   at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:88)
>   at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:65)
>   at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
>   at 
> org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
>   at 
> org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
>   at 
> org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
>   at 
> org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
>   at 
> org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
>   at 
> org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
>   at 
> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
>   at org.apache.hadoop.security.Groups.<init>(Groups.java:70)
>   at org.apache.hadoop.security.Groups.<init>(Groups.java:66)
>   at 
> org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
>   at 
> org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:283)
>   at 
> org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:260)
>   at 
> org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:804)
>   at 
> org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:774)
>   at 
> org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:647)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:2753)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:2745)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2611)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
>   at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:88)
>   at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:65)
>   at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(J

[jira] [Commented] (HADOOP-12035) shellcheck plugin displays a wrong version potentially

2015-05-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14564958#comment-14564958
 ] 

Hudson commented on HADOOP-12035:
-

SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #210 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/210/])
HADOOP-12035. shellcheck plugin displays a wrong version potentially (Kengo 
Seki via aw) (aw: rev f1cea9c6bf68f03f7136bebc245efbc0a95738e4)
* dev-support/test-patch.d/shellcheck.sh
* hadoop-common-project/hadoop-common/CHANGES.txt


> shellcheck plugin displays a wrong version potentially
> --
>
> Key: HADOOP-12035
> URL: https://issues.apache.org/jira/browse/HADOOP-12035
> Project: Hadoop Common
>  Issue Type: Test
>  Components: build
>Reporter: Kengo Seki
>Assignee: Kengo Seki
>Priority: Trivial
>  Labels: newbie, test-patch
> Fix For: 2.8.0
>
> Attachments: HADOOP-12035.001.patch
>
>
> In dev-support/test-patch.d/shellcheck.sh:
> {code}
> SHELLCHECK_VERSION=$(shellcheck --version | ${GREP} version: | ${AWK} '{print 
> $NF}')
> {code}
> it should be 
> {code}
> SHELLCHECK_VERSION=$(${SHELLCHECK} --version | …)
> {code}
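The pattern behind the fix, sketched with an illustrative fallback path: resolve the binary once into `SHELLCHECK`, then use that variable everywhere, so the reported version always comes from the executable that actually does the linting.

```shell
# Resolve the shellcheck binary once; the /opt fallback path is purely
# illustrative. All later uses (linting, version reporting) go through
# the variable, never through a bare "shellcheck" on PATH.
SHELLCHECK=$(command -v shellcheck || echo /opt/shellcheck/bin/shellcheck)
SHELLCHECK_VERSION=$("${SHELLCHECK}" --version 2>/dev/null | grep version: | awk '{print $NF}')
echo "shellcheck version: ${SHELLCHECK_VERSION:-unknown}"
```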





[jira] [Commented] (HADOOP-11929) add test-patch plugin points for customizing build layout

2015-05-29 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14564937#comment-14564937
 ] 

Allen Wittenauer commented on HADOOP-11929:
---

OK, that bad run did highlight some things:

* bzip2 is broken on the jenkins servers too.  So this might not just be an OS 
X problem.
* javadoc output needs to be cleaned up
* better to hit checkstyle sooner rather than later
* newer routines aren't using @@BASEDIR@@
* I don't like the / separator.
* Still some bugs in the clock. No way @author took 5 minutes and the framework 
took 80. lol

> add test-patch plugin points for customizing build layout
> -
>
> Key: HADOOP-11929
> URL: https://issues.apache.org/jira/browse/HADOOP-11929
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Sean Busbey
>Assignee: Allen Wittenauer
>Priority: Minor
> Attachments: HADOOP-11929.00.patch, HADOOP-11929.01.patch, 
> HADOOP-11929.02.patch, HADOOP-11929.03.patch, hadoop.sh
>
>
> Sean Busbey and I had a chat about this at the Bug Bash. Here's the proposal:
>   * Introduce the concept of a 'personality module'.
>   * There can be only one personality.
>   * Personalities provide a single function that takes as input the name of 
> the test current being processed
>   * This function uses two other built-in functions to define two queues: 
> maven module name and profiles to use against those maven module names
>   * If something needs to be compiled prior to this test (but not actually 
> tested), the personality will be responsible for doing that compilation
> In hadoop, the classic example is hadoop-hdfs needs common compiled with the 
> native bits. So prior to the javac tests, the personality would check 
> CHANGED_MODULES, see hadoop-hdfs, and compile common w/ -Pnative prior to 
> letting test-patch.sh do the work in hadoop-hdfs. Another example is our lack 
> of test coverage of various native bits. Since these require profiles to be 
> defined prior to compilation, the personality could see that something 
> touches native code, set the appropriate profile, and let test-patch.sh be on 
> its way.
> One way to think of it is some higher order logic on top of the automated 
> 'figure out what modules and what tests to run' functions.
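A rough shell sketch of the proposal; every name here (`personality_modules`, `personality_enqueue_module`) is illustrative of the plugin point, not a settled API:

```shell
# Stand-in queue builder: records "module profiles" pairs.
MODULE_QUEUE=()
personality_enqueue_module() {
  MODULE_QUEUE+=("$*")
}

# The single personality hook: given the test currently being processed,
# decide which modules to build and with which profiles.
personality_modules() {
  local test_type=$1
  if [[ "${test_type}" == "javac" && "${CHANGED_MODULES}" == *hadoop-hdfs* ]]; then
    # hadoop-hdfs needs common compiled with the native bits first
    personality_enqueue_module hadoop-common-project/hadoop-common -Pnative
    personality_enqueue_module hadoop-hdfs-project/hadoop-hdfs
  fi
}

CHANGED_MODULES="hadoop-hdfs-project/hadoop-hdfs"
personality_modules javac
printf '%s\n' "${MODULE_QUEUE[@]}"
```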





[jira] [Commented] (HADOOP-11934) Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop

2015-05-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14564905#comment-14564905
 ] 

Hudson commented on HADOOP-11934:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #201 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/201/])
HADOOP-11934. Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite 
loop. Contributed by Larry McCay. (cnauroth: rev 
860b8373c3a851386b8cd2d4265dd35e5aabc941)
* 
hadoop-common-project/hadoop-common/src/main/resources/META-INF/services/org.apache.hadoop.security.alias.CredentialProviderFactory
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/JavaKeyStoreProvider.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/LocalJavaKeyStoreProvider.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/AbstractJavaKeyStoreProvider.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/alias/TestCredentialProviderFactory.java


> Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop
> -
>
> Key: HADOOP-11934
> URL: https://issues.apache.org/jira/browse/HADOOP-11934
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Mike Yoder
>Assignee: Larry McCay
>Priority: Blocker
> Fix For: 2.7.1
>
> Attachments: HADOOP-11934-11.patch, HADOOP-11934.001.patch, 
> HADOOP-11934.002.patch, HADOOP-11934.003.patch, HADOOP-11934.004.patch, 
> HADOOP-11934.005.patch, HADOOP-11934.006.patch, HADOOP-11934.007.patch, 
> HADOOP-11934.008.patch, HADOOP-11934.009.patch, HADOOP-11934.010.patch, 
> HADOOP-11934.012.patch, HADOOP-11934.013.patch
>
>
> I was attempting to use the LdapGroupsMapping code and the 
> JavaKeyStoreProvider at the same time, and hit a really interesting, yet 
> fatal, issue.  The code goes into what ought to have been an infinite loop, 
> were it not for it overflowing the stack and Java ending the loop.  Here is a 
> snippet of the stack; my annotations are at the bottom.
> {noformat}
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
>   at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:88)
>   at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:65)
>   at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
>   at 
> org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
>   at 
> org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
>   at 
> org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
>   at 
> org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
>   at 
> org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
>   at 
> org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
>   at 
> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
>   at org.apache.hadoop.security.Groups.<init>(Groups.java:70)
>   at org.apache.hadoop.security.Groups.<init>(Groups.java:66)
>   at 
> org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
>   at 
> org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:283)
>   at 
> org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:260)
>   at 
> org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:804)
>   at 
> org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:774)
>   at 
> org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:647)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:2753)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:2745)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2611)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
>   at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:88)
>   at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:65)
>   at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStor

[jira] [Commented] (HADOOP-12042) Users may see TrashPolicy if hdfs dfs -rm is run

2015-05-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14564906#comment-14564906
 ] 

Hudson commented on HADOOP-12042:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #201 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/201/])
HADOOP-12042. Users may see TrashPolicy if hdfs dfs -rm is run (Contributed by 
Andreina J) (vinayakumarb: rev 7366e4256395ed7550702275d0d9f2674bd43d6c)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/TrashPolicyDefault.java


> Users may see TrashPolicy if hdfs dfs -rm is run
> 
>
> Key: HADOOP-12042
> URL: https://issues.apache.org/jira/browse/HADOOP-12042
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: J.Andreina
> Fix For: 2.8.0
>
> Attachments: HDFS-6775.1.patch, HDFS-6775.2.patch
>
>
> Doing 'hdfs dfs -rm file' generates an extra log message on the console:
> {code}
> 14/07/29 15:18:56 INFO fs.TrashPolicyDefault: Namenode trash configuration: 
> Deletion interval = 0 minutes, Emptier interval = 0 minutes.
> {code}
> This shouldn't be seen by users.





[jira] [Commented] (HADOOP-7947) Validate XMLs if a relevant tool is available, when using scripts

2015-05-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14564766#comment-14564766
 ] 

Hudson commented on HADOOP-7947:


FAILURE: Integrated in Hadoop-Hdfs-trunk #2140 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2140/])
HADOOP-7947. Validate XMLs if a relevant tool is available, when using scripts 
(Kengo Seki via aw) (aw: rev 5df1fadf874f3f0176f6b36b8ff7317edd63770f)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestConfTest.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ConfTest.java
* hadoop-common-project/hadoop-common/src/main/bin/hadoop


> Validate XMLs if a relevant tool is available, when using scripts
> -
>
> Key: HADOOP-7947
> URL: https://issues.apache.org/jira/browse/HADOOP-7947
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: scripts
>Affects Versions: 2.7.0
>Reporter: Harsh J
>Assignee: Kengo Seki
>  Labels: newbie
> Fix For: 3.0.0
>
> Attachments: HADOOP-7947.001.patch, HADOOP-7947.002.patch, 
> HADOOP-7947.003.patch, HADOOP-7947.004.patch, HADOOP-7947.005.patch
>
>
> Given that we are locked down to using only XML for configuration and most of 
> the administrators need to manage it by themselves (unless a tool that 
> manages for you is used), it would be good to also validate the provided 
> config XML (*-site.xml) files with a tool like {{xmllint}} or maybe Xerces 
> somehow, when running a command or (at least) when starting up daemons.
> We should use this only if a relevant tool is available, and optionally be 
> silent if the env. requests.
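A sketch of the "validate only if a relevant tool is available" behavior using xmllint; the function and file names are illustrative, not the shipped script code:

```shell
# Validate a config XML if xmllint is installed; otherwise skip the check
# silently rather than blocking the command or daemon startup.
validate_xml() {
  local conf=$1
  if command -v xmllint >/dev/null 2>&1; then
    if ! xmllint --noout "${conf}" 2>/dev/null; then
      echo "ERROR: ${conf} is not well-formed XML" >&2
      return 1
    fi
  fi
  return 0
}

printf '<configuration>\n  <property><name>x</name><value>1</value></property>\n</configuration>\n' > core-site.xml
validate_xml core-site.xml && echo "core-site.xml OK"
```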





[jira] [Commented] (HADOOP-12042) Users may see TrashPolicy if hdfs dfs -rm is run

2015-05-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14564767#comment-14564767
 ] 

Hudson commented on HADOOP-12042:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #2140 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2140/])
HADOOP-12042. Users may see TrashPolicy if hdfs dfs -rm is run (Contributed by 
Andreina J) (vinayakumarb: rev 7366e4256395ed7550702275d0d9f2674bd43d6c)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/TrashPolicyDefault.java


> Users may see TrashPolicy if hdfs dfs -rm is run
> 
>
> Key: HADOOP-12042
> URL: https://issues.apache.org/jira/browse/HADOOP-12042
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: J.Andreina
> Fix For: 2.8.0
>
> Attachments: HDFS-6775.1.patch, HDFS-6775.2.patch
>
>
> Doing 'hdfs dfs -rm file' generates an extra log message on the console:
> {code}
> 14/07/29 15:18:56 INFO fs.TrashPolicyDefault: Namenode trash configuration: 
> Deletion interval = 0 minutes, Emptier interval = 0 minutes.
> {code}
> This shouldn't be seen by users.





[jira] [Commented] (HADOOP-12022) fix site -Pdocs -Pdist in hadoop-project-dist; cleanout remaining forrest bits

2015-05-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14564761#comment-14564761
 ] 

Hudson commented on HADOOP-12022:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #2140 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2140/])
 HADOOP-12022. fix site -Pdocs -Pdist in hadoop-project-dist; cleanout 
remaining forrest bits (aw) (aw: rev ae1454342064c71f414d20ad0885e60a335c7420)
* 
hadoop-common-project/hadoop-common/src/main/docs/changes/ChangesFancyStyle.css
* hadoop-common-project/hadoop-common/src/main/docs/status.xml
* 
hadoop-common-project/hadoop-common/src/main/docs/src/documentation/skinconf.xml
* hadoop-hdfs-project/hadoop-hdfs/pom.xml
* hadoop-common-project/hadoop-common/src/main/docs/src/documentation/README.txt
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/docs/src/documentation/resources/images/common-logo.jpg
* 
hadoop-common-project/hadoop-common/src/main/docs/src/documentation/resources/images/hdfsarchitecture.png
* hadoop-project-dist/pom.xml
* 
hadoop-common-project/hadoop-common/src/main/docs/src/documentation/resources/images/hdfsarchitecture.gif
* 
hadoop-common-project/hadoop-common/src/main/docs/src/documentation/resources/images/hadoop-logo.jpg
* 
hadoop-common-project/hadoop-common/src/main/docs/src/documentation/resources/images/hdfsdatanodes.png
* 
hadoop-common-project/hadoop-common/src/main/docs/src/documentation/resources/images/hadoop-logo-big.jpg
* 
hadoop-common-project/hadoop-common/src/main/docs/src/documentation/resources/images/hdfsarchitecture.odg
* 
hadoop-common-project/hadoop-common/src/main/docs/src/documentation/resources/images/hdfsdatanodes.gif
* 
hadoop-common-project/hadoop-common/src/main/docs/src/documentation/classes/CatalogManager.properties
* 
hadoop-common-project/hadoop-common/src/main/docs/src/documentation/resources/images/architecture.gif
* 
hadoop-common-project/hadoop-common/src/main/docs/src/documentation/resources/images/core-logo.gif
* 
hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/site.xml
* hadoop-common-project/hadoop-common/src/main/docs/releasenotes.html
* 
hadoop-common-project/hadoop-common/src/main/docs/src/documentation/conf/cli.xconf
* 
hadoop-common-project/hadoop-common/src/main/docs/changes/ChangesSimpleStyle.css
* 
hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/tabs.xml
* hadoop-common-project/hadoop-common/src/main/docs/changes/changes2html.pl
* hadoop-common-project/hadoop-common/pom.xml
* 
hadoop-common-project/hadoop-common/src/main/docs/src/documentation/resources/images/hdfsdatanodes.odg
* 
hadoop-common-project/hadoop-common/src/main/docs/src/documentation/resources/images/favicon.ico
* 
hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/index.xml


> fix site -Pdocs -Pdist in hadoop-project-dist; cleanout remaining forrest bits
> --
>
> Key: HADOOP-12022
> URL: https://issues.apache.org/jira/browse/HADOOP-12022
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Fix For: 3.0.0
>
> Attachments: HADOOP-12011.00.patch, HADOOP-12011.01.patch, 
> HADOOP-12011.02.patch
>
>
> Between HDFS-8350 and probably other changes, it would appear site -Pdocs 
> -Pdist no longer works and is breaking the nightly build.





[jira] [Commented] (HADOOP-12030) test-patch should only report on newly introduced findbugs warnings.

2015-05-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14564762#comment-14564762
 ] 

Hudson commented on HADOOP-12030:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #2140 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2140/])
HADOOP-12030. test-patch should only report on newly introduced findbugs 
warnings. (Sean Busbey via aw) (aw: rev 
b01d33cf862a34f9988584d3d1f3995118110b90)
* dev-support/test-patch.sh
* hadoop-common-project/hadoop-common/CHANGES.txt


> test-patch should only report on newly introduced findbugs warnings.
> 
>
> Key: HADOOP-12030
> URL: https://issues.apache.org/jira/browse/HADOOP-12030
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>  Labels: test-patch
> Fix For: 2.8.0
>
> Attachments: HADOOP-12030.1.patch, HADOOP-12030.2.patch
>
>
> findbugs is currently reporting the total number of findbugs warnings for 
> touched modules rather than just newly introduced bugs.





[jira] [Commented] (HADOOP-11930) test-patch in offline mode should tell maven to be in offline mode

2015-05-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14564760#comment-14564760
 ] 

Hudson commented on HADOOP-11930:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #2140 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2140/])
HADOOP-11930. test-patch in offline mode should tell maven to be in offline 
mode (Sean Busbey via aw) (aw: rev 7ebe80ec12a91602b4dcdafb4e3a75def6035ad6)
* dev-support/test-patch.sh
* hadoop-common-project/hadoop-common/CHANGES.txt


> test-patch in offline mode should tell maven to be in offline mode
> --
>
> Key: HADOOP-11930
> URL: https://issues.apache.org/jira/browse/HADOOP-11930
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Fix For: 2.8.0
>
> Attachments: HADOOP-11930.1.patch, HADOOP-11930.2.patch
>
>
> when we use --offline for test-patch, we should also flag maven to be offline 
> so that it doesn't attempt to talk to the internet.





[jira] [Commented] (HADOOP-11406) xargs -P is not portable

2015-05-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14564759#comment-14564759
 ] 

Hudson commented on HADOOP-11406:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #2140 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2140/])
HADOOP-11406. xargs -P is not portable (Kengo Seki via aw) (aw: rev 
5504a261f829cf2e7b70970246bf5a55c172be84)
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh
* 
hadoop-common-project/hadoop-common/src/main/conf/hadoop-user-functions.sh.example


> xargs -P is not portable
> 
>
> Key: HADOOP-11406
> URL: https://issues.apache.org/jira/browse/HADOOP-11406
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0
> Environment: Solaris
> Illumos
> AIX
> ... likely others
>Reporter: Allen Wittenauer
>Assignee: Kengo Seki
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: HADOOP-11406.001.patch, HADOOP-11406.002.patch, 
> HADOOP-11406.002.patch
>
>
> hadoop-functions.sh uses xargs -P in the ssh handler.  -P is a GNU extension 
> and is not available on all operating systems.  We should add some detection 
> for support and perform an appropriate action.
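A sketch of one possible detection approach: probe whether the local xargs accepts `-P` before using it, and fall back to serial execution otherwise:

```shell
# -P (parallel) is a GNU extension, absent on Solaris/Illumos/AIX xargs.
# Probe once with a harmless command, then reuse the result.
if echo probe | xargs -P 2 true >/dev/null 2>&1; then
  XARGS_PARALLEL=(-P 2)   # parallel mode supported
else
  XARGS_PARALLEL=()       # POSIX xargs: run commands serially
fi
# e.g. fan an ssh command out over a host list (echo stands in for ssh here)
printf '%s\n' host1 host2 | xargs "${XARGS_PARALLEL[@]}" -n 1 echo ssh
```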





[jira] [Commented] (HADOOP-12035) shellcheck plugin displays a wrong version potentially

2015-05-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14564758#comment-14564758
 ] 

Hudson commented on HADOOP-12035:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #2140 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2140/])
HADOOP-12035. shellcheck plugin displays a wrong version potentially (Kengo 
Seki via aw) (aw: rev f1cea9c6bf68f03f7136bebc245efbc0a95738e4)
* dev-support/test-patch.d/shellcheck.sh
* hadoop-common-project/hadoop-common/CHANGES.txt


> shellcheck plugin displays a wrong version potentially
> --
>
> Key: HADOOP-12035
> URL: https://issues.apache.org/jira/browse/HADOOP-12035
> Project: Hadoop Common
>  Issue Type: Test
>  Components: build
>Reporter: Kengo Seki
>Assignee: Kengo Seki
>Priority: Trivial
>  Labels: newbie, test-patch
> Fix For: 2.8.0
>
> Attachments: HADOOP-12035.001.patch
>
>
> In dev-support/test-patch.d/shellcheck.sh:
> {code}
> SHELLCHECK_VERSION=$(shellcheck --version | ${GREP} version: | ${AWK} '{print 
> $NF}')
> {code}
> it should be 
> {code}
> SHELLCHECK_VERSION=$(${SHELLCHECK} --version | …)
> {code}





[jira] [Commented] (HADOOP-11142) Remove hdfs dfs reference from file system shell documentation

2015-05-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14564747#comment-14564747
 ] 

Hudson commented on HADOOP-11142:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #2140 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2140/])
HADOOP-11142. Remove hdfs dfs reference from file system shell documentation 
(Kengo Seki via aw) (aw: rev cbba7d68f0c7faf2b3bab41fd1694dd626db6492)
* hadoop-common-project/hadoop-common/src/site/markdown/FileSystemShell.md
* hadoop-common-project/hadoop-common/CHANGES.txt


> Remove hdfs dfs reference from file system shell documentation
> --
>
> Key: HADOOP-11142
> URL: https://issues.apache.org/jira/browse/HADOOP-11142
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Jonathan Allen
>Assignee: Kengo Seki
>Priority: Minor
>  Labels: newbie
> Fix For: 3.0.0
>
> Attachments: HADOOP-11142.001.patch
>
>
> The File System Shell documentation references {{hdfs dfs}} in all of the 
> examples. The FS shell is not specific about the underlying file system and 
> so shouldn't reference hdfs. The correct usage should be {{hadoop fs}}





[jira] [Commented] (HADOOP-11959) WASB should configure client side socket timeout in storage client blob request options

2015-05-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14564746#comment-14564746
 ] 

Hudson commented on HADOOP-11959:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #2140 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2140/])
HADOOP-11959. WASB should configure client side socket timeout in storage 
client blob request options. Contributed by Ivan Mitic. (cnauroth: rev 
94e7d49a6dab7e7f4e873dcca67e7fcc98e7e1f8)
* 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/AzureNativeFileSystemStore.java
* 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/TestBlobDataValidation.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/TestAzureFileSystemErrorConditions.java
* 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/MockStorageInterface.java
* 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/StorageInterfaceImpl.java
* 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/StorageInterface.java
* hadoop-project/pom.xml


> WASB should configure client side socket timeout in storage client blob 
> request options
> ---
>
> Key: HADOOP-11959
> URL: https://issues.apache.org/jira/browse/HADOOP-11959
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Reporter: Ivan Mitic
>Assignee: Ivan Mitic
> Fix For: 2.8.0
>
> Attachments: HADOOP-11959.2.patch, HADOOP-11959.patch
>
>
> On clusters/jobs where {{mapred.task.timeout}} is set to a larger value, we 
> noticed that tasks can sometimes get stuck on the below stack.
> {code}
> Thread 1: (state = IN_NATIVE)
> - java.net.SocketInputStream.socketRead0(java.io.FileDescriptor, byte[], int, 
> int, int) @bci=0 (Interpreted frame)
> - java.net.SocketInputStream.read(byte[], int, int, int) @bci=87, line=152 
> (Interpreted frame)
> - java.net.SocketInputStream.read(byte[], int, int) @bci=11, line=122 
> (Interpreted frame)
> - java.io.BufferedInputStream.fill() @bci=175, line=235 (Interpreted frame)
> - java.io.BufferedInputStream.read1(byte[], int, int) @bci=44, line=275 
> (Interpreted frame)
> - java.io.BufferedInputStream.read(byte[], int, int) @bci=49, line=334 
> (Interpreted frame)
> - sun.net.www.MeteredStream.read(byte[], int, int) @bci=16, line=134 
> (Interpreted frame)
> - java.io.FilterInputStream.read(byte[], int, int) @bci=7, line=133 
> (Interpreted frame)
> - sun.net.www.protocol.http.HttpURLConnection$HttpInputStream.read(byte[], 
> int, int) @bci=4, line=3053 (Interpreted frame)
> - com.microsoft.azure.storage.core.NetworkInputStream.read(byte[], int, int) 
> @bci=7, line=49 (Interpreted frame)
> - 
> com.microsoft.azure.storage.blob.CloudBlob$10.postProcessResponse(java.net.HttpURLConnection,
>  com.microsoft.azure.storage.blob.CloudBlob, com.microsoft.azure
> .storage.blob.CloudBlobClient, com.microsoft.azure.storage.OperationContext, 
> java.lang.Integer) @bci=204, line=1691 (Interpreted frame)
> - 
> com.microsoft.azure.storage.blob.CloudBlob$10.postProcessResponse(java.net.HttpURLConnection,
>  java.lang.Object, java.lang.Object, com.microsoft.azure.storage
> .OperationContext, java.lang.Object) @bci=17, line=1613 (Interpreted frame)
> - 
> com.microsoft.azure.storage.core.ExecutionEngine.executeWithRetry(java.lang.Object,
>  java.lang.Object, com.microsoft.azure.storage.core.StorageRequest, com.mi
> crosoft.azure.storage.RetryPolicyFactory, 
> com.microsoft.azure.storage.OperationContext) @bci=352, line=148 (Interpreted 
> frame)
> - com.microsoft.azure.storage.blob.CloudBlob.downloadRangeInternal(long, 
> java.lang.Long, byte[], int, com.microsoft.azure.storage.AccessCondition, 
> com.microsof
> t.azure.storage.blob.BlobRequestOptions, 
> com.microsoft.azure.storage.OperationContext) @bci=131, line=1468 
> (Interpreted frame)
> - com.microsoft.azure.storage.blob.BlobInputStream.dispatchRead(int) @bci=31, 
> line=255 (Interpreted frame)
> - com.microsoft.azure.storage.blob.BlobInputStream.readInternal(byte[], int, 
> int) @bci=52, line=448 (Interpreted frame)
> - com.microsoft.azure.storage.blob.BlobInputStream.read(byte[], int, int) 
> @bci=28, line=420 (Interpreted frame)
> - java.io.BufferedInputStream.read1(byte[], int, int) @bci=39, line=273 
> (Interpreted frame)
> - java.io.BufferedInputStream.read(byte[], int, int) @bci=49, line=334 
> (Interpreted frame)
> - java.io.DataInputStream.read(byte[], int, int) @bci=7, line=149 
> (Interpreted frame)
> - 
> org.apache.hadoop.fs.azure.NativeAzureFileSystem$NativeAzureFsInputStream.read(byte[],
>  int, int) @bci=10, line=734 (Interpreted frame)
> - java.io.BufferedInputStream.read1(byte[], int, int) @bci=39, line=273 
> (In

[jira] [Commented] (HADOOP-12004) test-patch breaks with reexec in certain situations

2015-05-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14564748#comment-14564748
 ] 

Hudson commented on HADOOP-12004:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #2140 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2140/])
HADOOP-12004. test-patch breaks with reexec in certain situations (Sean Busbey 
via aw) (aw: rev 87c26f8b7d949f85fd3dc511f592112e1504e0a2)
* dev-support/test-patch.sh
* hadoop-common-project/hadoop-common/CHANGES.txt


> test-patch breaks with reexec in certain situations
> ---
>
> Key: HADOOP-12004
> URL: https://issues.apache.org/jira/browse/HADOOP-12004
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Allen Wittenauer
>Assignee: Sean Busbey
>Priority: Critical
> Fix For: 2.8.0
>
> Attachments: HADOOP-12004.1.patch
>
>
> Looks like HADOOP-11911 forgot the equal sign:
> {code}
>   exec "${PATCH_DIR}/dev-support-test/test-patch.sh" \
> --reexec \
> --branch "${PATCH_BRANCH}" \
> --patch-dir="${PATCH_DIR}" \
>   "${USER_PARAMS[@]}"
> {code}
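For context, test-patch.sh recognizes its long options with `--opt=value` case patterns, so dropping the `=` makes the value fall through unmatched. The sketch below is a simplified stand-in for that parser (not the real test-patch.sh code) showing the failure mode:

```shell
# Hypothetical sketch of getopt-style long-option parsing: only the
# --patch-dir=value form matches; "--patch-dir value" is two separate
# arguments and neither matches the pattern, so the value is lost.
parse_args() {
  PATCH_DIR=""
  for arg in "$@"; do
    case "${arg}" in
      --patch-dir=*)
        PATCH_DIR="${arg#--patch-dir=}"
        ;;
    esac
  done
}

parse_args --patch-dir=/tmp/patchdir
echo "with equals: '${PATCH_DIR}'"     # with equals: '/tmp/patchdir'

parse_args --patch-dir /tmp/patchdir
echo "without equals: '${PATCH_DIR}'"  # without equals: ''
```

This is why the reexec invocation silently lost its patch directory rather than failing loudly.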



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11894) Bump the version of Apache HTrace to 3.2.0-incubating

2015-05-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14564745#comment-14564745
 ] 

Hudson commented on HADOOP-11894:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #2140 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2140/])
HADOOP-11894. Bump the version of Apache HTrace to 3.2.0-incubating (Masatake 
Iwasaki via Colin P. McCabe) (cmccabe: rev 
a1140959dab3f35accbd6c66abfa14f94ff7dcec)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* hadoop-project/pom.xml
* hadoop-common-project/hadoop-common/src/site/markdown/Tracing.md
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/tracing/SpanReceiverHost.java


> Bump the version of Apache HTrace to 3.2.0-incubating
> -
>
> Key: HADOOP-11894
> URL: https://issues.apache.org/jira/browse/HADOOP-11894
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.8.0
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
> Fix For: 2.8.0
>
> Attachments: HADOOP-11894.001.patch, HADOOP-11894.002.patch, 
> HADOOP-11894.003.patch
>
>
> * update pom.xml
> * update documentation
> * replace {{addKVAnnotation(byte[] key, byte[] value)}} with 
> {{addKVAnnotation(String key, String value)}}
> * replace {{SpanReceiverHost#getUniqueLocalTraceFileName}} with 
> {{LocalFileSpanReceiver#getUniqueLocalTraceFileName}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11934) Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop

2015-05-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14564744#comment-14564744
 ] 

Hudson commented on HADOOP-11934:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #2140 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2140/])
HADOOP-11934. Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite 
loop. Contributed by Larry McCay. (cnauroth: rev 
860b8373c3a851386b8cd2d4265dd35e5aabc941)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/AbstractJavaKeyStoreProvider.java
* 
hadoop-common-project/hadoop-common/src/main/resources/META-INF/services/org.apache.hadoop.security.alias.CredentialProviderFactory
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/LocalJavaKeyStoreProvider.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/JavaKeyStoreProvider.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/alias/TestCredentialProviderFactory.java


> Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop
> -
>
> Key: HADOOP-11934
> URL: https://issues.apache.org/jira/browse/HADOOP-11934
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Mike Yoder
>Assignee: Larry McCay
>Priority: Blocker
> Fix For: 2.7.1
>
> Attachments: HADOOP-11934-11.patch, HADOOP-11934.001.patch, 
> HADOOP-11934.002.patch, HADOOP-11934.003.patch, HADOOP-11934.004.patch, 
> HADOOP-11934.005.patch, HADOOP-11934.006.patch, HADOOP-11934.007.patch, 
> HADOOP-11934.008.patch, HADOOP-11934.009.patch, HADOOP-11934.010.patch, 
> HADOOP-11934.012.patch, HADOOP-11934.013.patch
>
>
> I was attempting to use the LdapGroupsMapping code and the 
> JavaKeyStoreProvider at the same time, and hit a really interesting, yet 
> fatal, issue.  The code goes into what ought to have been an infinite loop, 
> were it not for it overflowing the stack and Java ending the loop.  Here is a 
> snippet of the stack; my annotations are at the bottom.
> {noformat}
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
>   at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider.(JavaKeyStoreProvider.java:88)
>   at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider.(JavaKeyStoreProvider.java:65)
>   at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
>   at 
> org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
>   at 
> org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
>   at 
> org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
>   at 
> org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
>   at 
> org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
>   at 
> org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
>   at 
> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
>   at org.apache.hadoop.security.Groups.(Groups.java:70)
>   at org.apache.hadoop.security.Groups.(Groups.java:66)
>   at 
> org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
>   at 
> org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:283)
>   at 
> org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:260)
>   at 
> org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:804)
>   at 
> org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:774)
>   at 
> org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:647)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache$Key.(FileSystem.java:2753)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache$Key.(FileSystem.java:2745)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2611)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
>   at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider.(JavaKeyStoreProvider.java:88)
>   at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider.(JavaKeyStoreProvider.java:65)
>   at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.

[jira] [Commented] (HADOOP-11983) HADOOP_USER_CLASSPATH_FIRST works the opposite of what it is supposed to do

2015-05-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14564743#comment-14564743
 ] 

Hudson commented on HADOOP-11983:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #2140 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2140/])
HADOOP-11983. HADOOP_USER_CLASSPATH_FIRST works the opposite of what it is 
supposed to do (Sangjin Lee via aw) (aw: rev 
08ae87f6ba334bd3442234e65b93495fe312cbb7)
* hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh
* hadoop-common-project/hadoop-common/CHANGES.txt


> HADOOP_USER_CLASSPATH_FIRST works the opposite of what it is supposed to do
> ---
>
> Key: HADOOP-11983
> URL: https://issues.apache.org/jira/browse/HADOOP-11983
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
> Fix For: 3.0.0
>
> Attachments: HADOOP-11983.001.patch
>
>
> The behavior of HADOOP_USER_CLASSPATH_FIRST works the opposite of what it 
> should do. If it is not set, HADOOP_CLASSPATH is prepended. If set, it is 
> appended.
> You can easily try out by doing something like
> {noformat}
> HADOOP_CLASSPATH=/Users/alice/tmp hadoop classpath
> {noformat}
> (HADOOP_CLASSPATH should point to an existing directory)
> I think the if clause in hadoop_add_to_classpath_userpath is reversed.
> This issue seems specific to the trunk.
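The intended semantics can be sketched as follows (hypothetical function and variable names, not the actual hadoop-functions.sh code): when HADOOP_USER_CLASSPATH_FIRST is set, the user classpath must come first so its entries win class lookup.

```shell
# Simplified sketch of the intended prepend/append behavior.
# HADOOP_USER_CLASSPATH_FIRST set   => user entries prepended
# HADOOP_USER_CLASSPATH_FIRST unset => user entries appended
build_classpath() {
  local base="$1" user="$2"
  if [ -n "${HADOOP_USER_CLASSPATH_FIRST}" ]; then
    echo "${user}:${base}"   # user entries searched first
  else
    echo "${base}:${user}"
  fi
}

unset HADOOP_USER_CLASSPATH_FIRST
build_classpath "/opt/hadoop/lib/*" "/home/alice/tmp"
# /opt/hadoop/lib/*:/home/alice/tmp

HADOOP_USER_CLASSPATH_FIRST=true
build_classpath "/opt/hadoop/lib/*" "/home/alice/tmp"
# /home/alice/tmp:/opt/hadoop/lib/*
```

The reported bug is the if-branch taken in the opposite case, producing the appended order when the variable is set.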



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-9382) Add dfs mv overwrite option

2015-05-29 Thread Keegan Witt (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Keegan Witt updated HADOOP-9382:

Affects Version/s: (was: 2.0.0-alpha)

> Add dfs mv overwrite option
> ---
>
> Key: HADOOP-9382
> URL: https://issues.apache.org/jira/browse/HADOOP-9382
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Keegan Witt
>Assignee: Suresh Srinivas
>Priority: Minor
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-9382.1.patch, HADOOP-9382.2.patch, 
> HADOOP-9382.patch
>
>
> Add a -f option to allow overwriting existing destinations in dfs mv command.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12042) Users may see TrashPolicy if hdfs dfs -rm is run

2015-05-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14564698#comment-14564698
 ] 

Hadoop QA commented on HADOOP-12042:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  16m 19s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 29s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 39s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m  6s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 33s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 32s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   1m 49s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | common tests |  22m  7s | Tests passed in 
hadoop-common. |
| | |  61m  1s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12731912/HDFS-6775.2.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / b75df69 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6869/artifact/patchprocess/testrun_hadoop-common.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6869/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf907.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6869/console |


This message was automatically generated.

> Users may see TrashPolicy if hdfs dfs -rm is run
> 
>
> Key: HADOOP-12042
> URL: https://issues.apache.org/jira/browse/HADOOP-12042
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: J.Andreina
> Fix For: 2.8.0
>
> Attachments: HDFS-6775.1.patch, HDFS-6775.2.patch
>
>
> Doing 'hdfs dfs -rm file' generates an extra log message on the console:
> {code}
> 14/07/29 15:18:56 INFO fs.TrashPolicyDefault: Namenode trash configuration: 
> Deletion interval = 0 minutes, Emptier interval = 0 minutes.
> {code}
> This shouldn't be seen by users.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11684) S3a to use thread pool that blocks clients

2015-05-29 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14564687#comment-14564687
 ] 

Steve Loughran commented on HADOOP-11684:
-

Thomas, could you split the POM patch from the rest of the change. That way it 
is more visible in CHANGES.TXT and something we can link to from HADOOP-9991, 
the general JIRA to track updates. 

> S3a to use thread pool that blocks clients
> --
>
> Key: HADOOP-11684
> URL: https://issues.apache.org/jira/browse/HADOOP-11684
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.0
>Reporter: Thomas Demoor
>Assignee: Thomas Demoor
> Attachments: HADOOP-11684-001.patch
>
>
> Currently, if fs.s3a.max.total.tasks are queued and another (part)upload 
> wants to start, a RejectedExecutionException is thrown. 
> We should use a threadpool that blocks clients, nicely throttling them, 
> rather than throwing an exception. E.g. something similar to 
> https://github.com/apache/incubator-s4/blob/master/subprojects/s4-comm/src/main/java/org/apache/s4/comm/staging/BlockingThreadPoolExecutorService.java



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12038) SwiftNativeOutputStream should check whether a file exists or not before deleting

2015-05-29 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14564672#comment-14564672
 ] 

Steve Loughran commented on HADOOP-12038:
-


bq. "The change is too simple "

Having spent 36 hours with a colleague tracking down a problem which turned out 
to be due to a patch "too trivial for tests", I now view any patch with that 
assertion as a warning sign:

http://steveloughran.blogspot.co.uk/2015/05/its-ok-to-submit-patches-without-tests.html

In the absence of a test, please give a state model describing the possible 
input states to the operation, listing what concurrent changes to the 
> filesystem may take place between the exists() and delete() operation, then 
show how the patch completes successfully across all states. Yes, this is 
harsh, but it'll be good practice for the bigger tests.
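The check-then-act hazard described here is not Swift-specific; this shell sketch (hypothetical paths, not SwiftNativeOutputStream itself) shows the same TOCTOU window and the usual fix of deleting unconditionally and tolerating a missing file:

```shell
# Illustration of the exists()/delete() race. Between the existence
# check and the delete, another process may remove the file, so the
# "safe" branch can still fail.
racy_delete() {
  if [ -e "$1" ]; then   # <-- another process may delete "$1" here
    rm "$1"              #     and this rm then fails
  fi
}

# Robust pattern: attempt the delete unconditionally and treat a
# missing file as success, rather than pre-checking.
tolerant_delete() {
  rm -f "$1"             # rm -f exits 0 even if "$1" is already gone
}

tolerant_delete "/tmp/does-not-exist-$$" && echo "ok"
```

This is exactly the state-model exercise requested: enumerating what can change between the check and the act shows the pre-check buys nothing.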

> SwiftNativeOutputStream should check whether a file exists or not before 
> deleting
> -
>
> Key: HADOOP-12038
> URL: https://issues.apache.org/jira/browse/HADOOP-12038
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Chen He
>Assignee: Chen He
>Priority: Minor
> Attachments: HADOOP-12038.000.patch
>
>
> 15/05/27 15:27:03 WARN snative.SwiftNativeOutputStream: Could not delete 
> /tmp/hadoop-root/output-3695386887711395289.tmp
> It should check whether the file exists or not before deleting. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12042) Users may see TrashPolicy if hdfs dfs -rm is run

2015-05-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14564669#comment-14564669
 ] 

Hudson commented on HADOOP-12042:
-

FAILURE: Integrated in Hadoop-trunk-Commit #7925 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7925/])
HADOOP-12042. Users may see TrashPolicy if hdfs dfs -rm is run (Contributed by 
Andreina J) (vinayakumarb: rev 7366e4256395ed7550702275d0d9f2674bd43d6c)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/TrashPolicyDefault.java


> Users may see TrashPolicy if hdfs dfs -rm is run
> 
>
> Key: HADOOP-12042
> URL: https://issues.apache.org/jira/browse/HADOOP-12042
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: J.Andreina
> Fix For: 2.8.0
>
> Attachments: HDFS-6775.1.patch, HDFS-6775.2.patch
>
>
> Doing 'hdfs dfs -rm file' generates an extra log message on the console:
> {code}
> 14/07/29 15:18:56 INFO fs.TrashPolicyDefault: Namenode trash configuration: 
> Deletion interval = 0 minutes, Emptier interval = 0 minutes.
> {code}
> This shouldn't be seen by users.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-7947) Validate XMLs if a relevant tool is available, when using scripts

2015-05-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14564662#comment-14564662
 ] 

Hudson commented on HADOOP-7947:


SUCCESS: Integrated in Hadoop-Yarn-trunk #942 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/942/])
HADOOP-7947. Validate XMLs if a relevant tool is available, when using scripts 
(Kengo Seki via aw) (aw: rev 5df1fadf874f3f0176f6b36b8ff7317edd63770f)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestConfTest.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ConfTest.java
* hadoop-common-project/hadoop-common/src/main/bin/hadoop


> Validate XMLs if a relevant tool is available, when using scripts
> -
>
> Key: HADOOP-7947
> URL: https://issues.apache.org/jira/browse/HADOOP-7947
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: scripts
>Affects Versions: 2.7.0
>Reporter: Harsh J
>Assignee: Kengo Seki
>  Labels: newbie
> Fix For: 3.0.0
>
> Attachments: HADOOP-7947.001.patch, HADOOP-7947.002.patch, 
> HADOOP-7947.003.patch, HADOOP-7947.004.patch, HADOOP-7947.005.patch
>
>
> Given that we are locked down to using only XML for configuration and most of 
> the administrators need to manage it by themselves (unless a tool that 
> manages for you is used), it would be good to also validate the provided 
> config XML (*-site.xml) files with a tool like {{xmllint}} or maybe Xerces 
> somehow, when running a command or (at least) when starting up daemons.
> We should use this only if a relevant tool is available, and optionally be 
> silent if the env. requests.
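A rough sketch of the proposed behavior (the function name and directory layout are assumptions for illustration): validation runs only when the tool is present, and is a no-op otherwise.

```shell
# Validate Hadoop *-site.xml files with xmllint if it is installed;
# silently skip when the tool is absent, as the issue suggests.
validate_site_xml() {
  local conf_dir="${1:-.}"
  command -v xmllint >/dev/null 2>&1 || return 0  # no tool: no-op
  local f rc=0
  for f in "${conf_dir}"/*-site.xml; do
    [ -e "$f" ] || continue
    xmllint --noout "$f" || rc=1   # --noout: parse only, report errors
  done
  return $rc
}
```

Hooking such a check into the daemon start path would catch malformed `*-site.xml` files before they surface as confusing runtime failures.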



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12030) test-patch should only report on newly introduced findbugs warnings.

2015-05-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14564661#comment-14564661
 ] 

Hudson commented on HADOOP-12030:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk #942 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/942/])
HADOOP-12030. test-patch should only report on newly introduced findbugs 
warnings. (Sean Busbey via aw) (aw: rev 
b01d33cf862a34f9988584d3d1f3995118110b90)
* hadoop-common-project/hadoop-common/CHANGES.txt
* dev-support/test-patch.sh


> test-patch should only report on newly introduced findbugs warnings.
> 
>
> Key: HADOOP-12030
> URL: https://issues.apache.org/jira/browse/HADOOP-12030
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>  Labels: test-patch
> Fix For: 2.8.0
>
> Attachments: HADOOP-12030.1.patch, HADOOP-12030.2.patch
>
>
> findbugs is currently reporting the total number of findbugs warnings for 
> touched modules rather than just newly introduced bugs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11406) xargs -P is not portable

2015-05-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14564658#comment-14564658
 ] 

Hudson commented on HADOOP-11406:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk #942 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/942/])
HADOOP-11406. xargs -P is not portable (Kengo Seki via aw) (aw: rev 
5504a261f829cf2e7b70970246bf5a55c172be84)
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh
* 
hadoop-common-project/hadoop-common/src/main/conf/hadoop-user-functions.sh.example


> xargs -P is not portable
> 
>
> Key: HADOOP-11406
> URL: https://issues.apache.org/jira/browse/HADOOP-11406
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0
> Environment: Solaris
> Illumos
> AIX
> ... likely others
>Reporter: Allen Wittenauer
>Assignee: Kengo Seki
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: HADOOP-11406.001.patch, HADOOP-11406.002.patch, 
> HADOOP-11406.002.patch
>
>
> hadoop-functions.sh uses xargs -P in the ssh handler.  -P is a GNU extension 
> and is not available on all operating systems.  We should add some detection 
> for support and perform an appropriate action.
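Detection can be done with a probe invocation that exercises only option parsing; the helper below is a hypothetical sketch of that approach, not the committed hadoop-functions.sh fix:

```shell
# Feature-detect "xargs -P" (parallel) support; non-GNU xargs on
# platforms such as Solaris or AIX rejects -P during option parsing
# and exits non-zero, so a no-input probe run is enough.
hadoop_detect_xargs_p() {
  if xargs -P 1 true </dev/null >/dev/null 2>&1; then
    echo "parallel"
  else
    echo "serial"   # caller falls back to a sequential ssh loop
  fi
}
```

The ssh handler can then branch on this result instead of assuming the GNU extension everywhere.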



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11934) Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop

2015-05-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14564648#comment-14564648
 ] 

Hudson commented on HADOOP-11934:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk #942 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/942/])
HADOOP-11934. Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite 
loop. Contributed by Larry McCay. (cnauroth: rev 
860b8373c3a851386b8cd2d4265dd35e5aabc941)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/resources/META-INF/services/org.apache.hadoop.security.alias.CredentialProviderFactory
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/AbstractJavaKeyStoreProvider.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/LocalJavaKeyStoreProvider.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/alias/TestCredentialProviderFactory.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/JavaKeyStoreProvider.java


> Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop
> -
>
> Key: HADOOP-11934
> URL: https://issues.apache.org/jira/browse/HADOOP-11934
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Mike Yoder
>Assignee: Larry McCay
>Priority: Blocker
> Fix For: 2.7.1
>
> Attachments: HADOOP-11934-11.patch, HADOOP-11934.001.patch, 
> HADOOP-11934.002.patch, HADOOP-11934.003.patch, HADOOP-11934.004.patch, 
> HADOOP-11934.005.patch, HADOOP-11934.006.patch, HADOOP-11934.007.patch, 
> HADOOP-11934.008.patch, HADOOP-11934.009.patch, HADOOP-11934.010.patch, 
> HADOOP-11934.012.patch, HADOOP-11934.013.patch
>
>
> I was attempting to use the LdapGroupsMapping code and the 
> JavaKeyStoreProvider at the same time, and hit a really interesting, yet 
> fatal, issue.  The code goes into what ought to have been an infinite loop, 
> were it not for it overflowing the stack and Java ending the loop.  Here is a 
> snippet of the stack; my annotations are at the bottom.
> {noformat}
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
>   at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider.(JavaKeyStoreProvider.java:88)
>   at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider.(JavaKeyStoreProvider.java:65)
>   at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
>   at 
> org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
>   at 
> org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
>   at 
> org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
>   at 
> org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
>   at 
> org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
>   at 
> org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
>   at 
> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
>   at org.apache.hadoop.security.Groups.(Groups.java:70)
>   at org.apache.hadoop.security.Groups.(Groups.java:66)
>   at 
> org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
>   at 
> org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:283)
>   at 
> org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:260)
>   at 
> org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:804)
>   at 
> org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:774)
>   at 
> org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:647)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache$Key.(FileSystem.java:2753)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache$Key.(FileSystem.java:2745)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2611)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
>   at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider.(JavaKeyStoreProvider.java:88)
>   at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider.(JavaKeyStoreProvider.java:65)
>   at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.ja

[jira] [Commented] (HADOOP-11894) Bump the version of Apache HTrace to 3.2.0-incubating

2015-05-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14564649#comment-14564649
 ] 

Hudson commented on HADOOP-11894:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk #942 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/942/])
HADOOP-11894. Bump the version of Apache HTrace to 3.2.0-incubating (Masatake 
Iwasaki via Colin P. McCabe) (cmccabe: rev 
a1140959dab3f35accbd6c66abfa14f94ff7dcec)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-project/pom.xml
* hadoop-common-project/hadoop-common/src/site/markdown/Tracing.md
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/tracing/SpanReceiverHost.java


> Bump the version of Apache HTrace to 3.2.0-incubating
> -
>
> Key: HADOOP-11894
> URL: https://issues.apache.org/jira/browse/HADOOP-11894
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.8.0
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
> Fix For: 2.8.0
>
> Attachments: HADOOP-11894.001.patch, HADOOP-11894.002.patch, 
> HADOOP-11894.003.patch
>
>
> * update pom.xml
> * update documentation
> * replace {{addKVAnnotation(byte[] key, byte[] value)}} with 
> {{addKVAnnotation(String key, String value)}}
> * replace {{SpanReceiverHost#getUniqueLocalTraceFileName}} with 
> {{LocalFileSpanReceiver#getUniqueLocalTraceFileName}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12004) test-patch breaks with reexec in certain situations

2015-05-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14564652#comment-14564652
 ] 

Hudson commented on HADOOP-12004:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk #942 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/942/])
HADOOP-12004. test-patch breaks with reexec in certain situations (Sean Busbey 
via aw) (aw: rev 87c26f8b7d949f85fd3dc511f592112e1504e0a2)
* dev-support/test-patch.sh
* hadoop-common-project/hadoop-common/CHANGES.txt


> test-patch breaks with reexec in certain situations
> ---
>
> Key: HADOOP-12004
> URL: https://issues.apache.org/jira/browse/HADOOP-12004
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Allen Wittenauer
>Assignee: Sean Busbey
>Priority: Critical
> Fix For: 2.8.0
>
> Attachments: HADOOP-12004.1.patch
>
>
> Looks like HADOOP-11911 forgot the equal sign:
> {code}
>   exec "${PATCH_DIR}/dev-support-test/test-patch.sh" \
> --reexec \
> --branch "${PATCH_BRANCH}" \
> --patch-dir="${PATCH_DIR}" \
>   "${USER_PARAMS[@]}"
> {code}
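
A minimal sketch of why the missing equal sign breaks things: test-patch consumes its long options in --name=value form, so a space-separated value is never attached to its flag. The tiny parser below is illustrative only, not the actual test-patch.sh option handling; the --patch-dir flag name is taken from the quoted snippet.

```shell
# Hypothetical --name=value parser (illustrative, not test-patch's real code).
parse() {
  for arg in "$@"; do
    case "${arg}" in
      --patch-dir=*) echo "patch-dir=${arg#*=}" ;;   # value captured
      *)             echo "unrecognized: ${arg}" ;;  # value lost
    esac
  done
}

parse --patch-dir=/tmp/patchdir   # with '=': prints patch-dir=/tmp/patchdir
parse --patch-dir /tmp/patchdir   # '=' forgotten: both tokens unrecognized
```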





[jira] [Commented] (HADOOP-11930) test-patch in offline mode should tell maven to be in offline mode

2015-05-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14564659#comment-14564659
 ] 

Hudson commented on HADOOP-11930:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk #942 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/942/])
HADOOP-11930. test-patch in offline mode should tell maven to be in offline 
mode (Sean Busbey via aw) (aw: rev 7ebe80ec12a91602b4dcdafb4e3a75def6035ad6)
* hadoop-common-project/hadoop-common/CHANGES.txt
* dev-support/test-patch.sh


> test-patch in offline mode should tell maven to be in offline mode
> --
>
> Key: HADOOP-11930
> URL: https://issues.apache.org/jira/browse/HADOOP-11930
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Fix For: 2.8.0
>
> Attachments: HADOOP-11930.1.patch, HADOOP-11930.2.patch
>
>
> When we use --offline for test-patch, we should also flag maven to be offline 
> so that it doesn't attempt to talk to the internet.
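
The idea can be sketched in a few lines: when test-patch itself is running offline, forward maven's own offline switch. The variable names below are illustrative, not the actual test-patch.sh internals; `mvn --offline` (short form `-o`) is maven's standard flag for never contacting remote repositories.

```shell
# Sketch: propagate test-patch's offline mode to maven (names illustrative).
OFFLINE=true
MVN_ARGS=""
if [ "${OFFLINE}" = "true" ]; then
  MVN_ARGS="--offline"   # tell maven to resolve everything from the local repo
fi
echo "mvn ${MVN_ARGS} clean install"   # prints: mvn --offline clean install
```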





[jira] [Commented] (HADOOP-12022) fix site -Pdocs -Pdist in hadoop-project-dist; cleanout remaining forrest bits

2015-05-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14564660#comment-14564660
 ] 

Hudson commented on HADOOP-12022:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk #942 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/942/])
 HADOOP-12022. fix site -Pdocs -Pdist in hadoop-project-dist; cleanout 
remaining forrest bits (aw) (aw: rev ae1454342064c71f414d20ad0885e60a335c7420)
* 
hadoop-common-project/hadoop-common/src/main/docs/src/documentation/resources/images/hdfsdatanodes.gif
* 
hadoop-common-project/hadoop-common/src/main/docs/src/documentation/conf/cli.xconf
* 
hadoop-common-project/hadoop-common/src/main/docs/src/documentation/resources/images/hadoop-logo-big.jpg
* 
hadoop-common-project/hadoop-common/src/main/docs/src/documentation/resources/images/hadoop-logo.jpg
* 
hadoop-common-project/hadoop-common/src/main/docs/src/documentation/resources/images/hdfsarchitecture.gif
* 
hadoop-common-project/hadoop-common/src/main/docs/src/documentation/resources/images/hdfsdatanodes.odg
* hadoop-common-project/hadoop-common/pom.xml
* 
hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/tabs.xml
* hadoop-common-project/hadoop-common/src/main/docs/status.xml
* 
hadoop-common-project/hadoop-common/src/main/docs/src/documentation/resources/images/hdfsdatanodes.png
* 
hadoop-common-project/hadoop-common/src/main/docs/src/documentation/resources/images/architecture.gif
* 
hadoop-common-project/hadoop-common/src/main/docs/src/documentation/classes/CatalogManager.properties
* 
hadoop-common-project/hadoop-common/src/main/docs/src/documentation/resources/images/core-logo.gif
* 
hadoop-common-project/hadoop-common/src/main/docs/src/documentation/resources/images/hdfsarchitecture.odg
* hadoop-common-project/hadoop-common/src/main/docs/changes/changes2html.pl
* 
hadoop-common-project/hadoop-common/src/main/docs/src/documentation/skinconf.xml
* hadoop-common-project/hadoop-common/src/main/docs/src/documentation/README.txt
* 
hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/site.xml
* 
hadoop-common-project/hadoop-common/src/main/docs/changes/ChangesFancyStyle.css
* hadoop-project-dist/pom.xml
* 
hadoop-common-project/hadoop-common/src/main/docs/src/documentation/resources/images/hdfsarchitecture.png
* 
hadoop-common-project/hadoop-common/src/main/docs/src/documentation/resources/images/favicon.ico
* hadoop-common-project/hadoop-common/src/main/docs/releasenotes.html
* 
hadoop-common-project/hadoop-common/src/main/docs/changes/ChangesSimpleStyle.css
* 
hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/index.xml
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/docs/src/documentation/resources/images/common-logo.jpg
* hadoop-hdfs-project/hadoop-hdfs/pom.xml


> fix site -Pdocs -Pdist in hadoop-project-dist; cleanout remaining forrest bits
> --
>
> Key: HADOOP-12022
> URL: https://issues.apache.org/jira/browse/HADOOP-12022
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Fix For: 3.0.0
>
> Attachments: HADOOP-12011.00.patch, HADOOP-12011.01.patch, 
> HADOOP-12011.02.patch
>
>
> Between HDFS-8350 and probably other changes, it would appear site -Pdocs 
> -Pdist no longer works and is breaking the nightly build.





[jira] [Commented] (HADOOP-11959) WASB should configure client side socket timeout in storage client blob request options

2015-05-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14564650#comment-14564650
 ] 

Hudson commented on HADOOP-11959:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk #942 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/942/])
HADOOP-11959. WASB should configure client side socket timeout in storage 
client blob request options. Contributed by Ivan Mitic. (cnauroth: rev 
94e7d49a6dab7e7f4e873dcca67e7fcc98e7e1f8)
* 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/StorageInterfaceImpl.java
* 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/TestAzureFileSystemErrorConditions.java
* 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/MockStorageInterface.java
* 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/TestBlobDataValidation.java
* 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/AzureNativeFileSystemStore.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/StorageInterface.java
* hadoop-project/pom.xml


> WASB should configure client side socket timeout in storage client blob 
> request options
> ---
>
> Key: HADOOP-11959
> URL: https://issues.apache.org/jira/browse/HADOOP-11959
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Reporter: Ivan Mitic
>Assignee: Ivan Mitic
> Fix For: 2.8.0
>
> Attachments: HADOOP-11959.2.patch, HADOOP-11959.patch
>
>
> On clusters/jobs where {{mapred.task.timeout}} is set to a larger value, we 
> noticed that tasks can sometimes get stuck on the below stack.
> {code}
> Thread 1: (state = IN_NATIVE)
> - java.net.SocketInputStream.socketRead0(java.io.FileDescriptor, byte[], int, 
> int, int) @bci=0 (Interpreted frame)
> - java.net.SocketInputStream.read(byte[], int, int, int) @bci=87, line=152 
> (Interpreted frame)
> - java.net.SocketInputStream.read(byte[], int, int) @bci=11, line=122 
> (Interpreted frame)
> - java.io.BufferedInputStream.fill() @bci=175, line=235 (Interpreted frame)
> - java.io.BufferedInputStream.read1(byte[], int, int) @bci=44, line=275 
> (Interpreted frame)
> - java.io.BufferedInputStream.read(byte[], int, int) @bci=49, line=334 
> (Interpreted frame)
> - sun.net.www.MeteredStream.read(byte[], int, int) @bci=16, line=134 
> (Interpreted frame)
> - java.io.FilterInputStream.read(byte[], int, int) @bci=7, line=133 
> (Interpreted frame)
> - sun.net.www.protocol.http.HttpURLConnection$HttpInputStream.read(byte[], 
> int, int) @bci=4, line=3053 (Interpreted frame)
> - com.microsoft.azure.storage.core.NetworkInputStream.read(byte[], int, int) 
> @bci=7, line=49 (Interpreted frame)
> - 
> com.microsoft.azure.storage.blob.CloudBlob$10.postProcessResponse(java.net.HttpURLConnection,
>  com.microsoft.azure.storage.blob.CloudBlob, com.microsoft.azure
> .storage.blob.CloudBlobClient, com.microsoft.azure.storage.OperationContext, 
> java.lang.Integer) @bci=204, line=1691 (Interpreted frame)
> - 
> com.microsoft.azure.storage.blob.CloudBlob$10.postProcessResponse(java.net.HttpURLConnection,
>  java.lang.Object, java.lang.Object, com.microsoft.azure.storage
> .OperationContext, java.lang.Object) @bci=17, line=1613 (Interpreted frame)
> - 
> com.microsoft.azure.storage.core.ExecutionEngine.executeWithRetry(java.lang.Object,
>  java.lang.Object, com.microsoft.azure.storage.core.StorageRequest, com.mi
> crosoft.azure.storage.RetryPolicyFactory, 
> com.microsoft.azure.storage.OperationContext) @bci=352, line=148 (Interpreted 
> frame)
> - com.microsoft.azure.storage.blob.CloudBlob.downloadRangeInternal(long, 
> java.lang.Long, byte[], int, com.microsoft.azure.storage.AccessCondition, 
> com.microsof
> t.azure.storage.blob.BlobRequestOptions, 
> com.microsoft.azure.storage.OperationContext) @bci=131, line=1468 
> (Interpreted frame)
> - com.microsoft.azure.storage.blob.BlobInputStream.dispatchRead(int) @bci=31, 
> line=255 (Interpreted frame)
> - com.microsoft.azure.storage.blob.BlobInputStream.readInternal(byte[], int, 
> int) @bci=52, line=448 (Interpreted frame)
> - com.microsoft.azure.storage.blob.BlobInputStream.read(byte[], int, int) 
> @bci=28, line=420 (Interpreted frame)
> - java.io.BufferedInputStream.read1(byte[], int, int) @bci=39, line=273 
> (Interpreted frame)
> - java.io.BufferedInputStream.read(byte[], int, int) @bci=49, line=334 
> (Interpreted frame)
> - java.io.DataInputStream.read(byte[], int, int) @bci=7, line=149 
> (Interpreted frame)
> - 
> org.apache.hadoop.fs.azure.NativeAzureFileSystem$NativeAzureFsInputStream.read(byte[],
>  int, int) @bci=10, line=734 (Interpreted frame)
> - java.io.BufferedInputStream.read1(byte[], int, int) @bci=39, line=273 
> (Inte

[jira] [Commented] (HADOOP-12035) shellcheck plugin displays a wrong version potentially

2015-05-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14564657#comment-14564657
 ] 

Hudson commented on HADOOP-12035:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk #942 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/942/])
HADOOP-12035. shellcheck plugin displays a wrong version potentially (Kengo 
Seki via aw) (aw: rev f1cea9c6bf68f03f7136bebc245efbc0a95738e4)
* dev-support/test-patch.d/shellcheck.sh
* hadoop-common-project/hadoop-common/CHANGES.txt


> shellcheck plugin displays a wrong version potentially
> --
>
> Key: HADOOP-12035
> URL: https://issues.apache.org/jira/browse/HADOOP-12035
> Project: Hadoop Common
>  Issue Type: Test
>  Components: build
>Reporter: Kengo Seki
>Assignee: Kengo Seki
>Priority: Trivial
>  Labels: newbie, test-patch
> Fix For: 2.8.0
>
> Attachments: HADOOP-12035.001.patch
>
>
> In dev-support/test-patch.d/shellcheck.sh:
> {code}
> SHELLCHECK_VERSION=$(shellcheck --version | ${GREP} version: | ${AWK} '{print 
> $NF}')
> {code}
> it should be 
> {code}
> SHELLCHECK_VERSION=$(${SHELLCHECK} --version | …)
> {code}
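
The point of the fix: always invoke the tool through the resolved ${SHELLCHECK} variable, so the version reported matches the binary actually used, rather than whichever bare `shellcheck` happens to be first on PATH. The sketch below stubs shellcheck with a function so it runs anywhere; the stubbed version string is made up for illustration.

```shell
# Stub shellcheck so the example is self-contained (version string invented).
shellcheck_stub() {
  printf 'ShellCheck - shell script analysis tool\nversion: 0.3.7\nlicense: GNU GPLv3\n'
}
SHELLCHECK=shellcheck_stub
GREP=grep
AWK=awk

# The corrected form: use the resolved variable, not the bare command name.
SHELLCHECK_VERSION=$("${SHELLCHECK}" --version | ${GREP} version: | ${AWK} '{print $NF}')
echo "${SHELLCHECK_VERSION}"   # prints: 0.3.7
```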





[jira] [Commented] (HADOOP-11142) Remove hdfs dfs reference from file system shell documentation

2015-05-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14564651#comment-14564651
 ] 

Hudson commented on HADOOP-11142:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk #942 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/942/])
HADOOP-11142. Remove hdfs dfs reference from file system shell documentation 
(Kengo Seki via aw) (aw: rev cbba7d68f0c7faf2b3bab41fd1694dd626db6492)
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-common/src/site/markdown/FileSystemShell.md


> Remove hdfs dfs reference from file system shell documentation
> --
>
> Key: HADOOP-11142
> URL: https://issues.apache.org/jira/browse/HADOOP-11142
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Jonathan Allen
>Assignee: Kengo Seki
>Priority: Minor
>  Labels: newbie
> Fix For: 3.0.0
>
> Attachments: HADOOP-11142.001.patch
>
>
> The File System Shell documentation references {{hdfs dfs}} in all of the 
> examples. The FS shell is not specific about the underlying file system and 
> so shouldn't reference hdfs. The correct usage should be {{hadoop fs}}





[jira] [Commented] (HADOOP-11983) HADOOP_USER_CLASSPATH_FIRST works the opposite of what it is supposed to do

2015-05-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14564647#comment-14564647
 ] 

Hudson commented on HADOOP-11983:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk #942 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/942/])
HADOOP-11983. HADOOP_USER_CLASSPATH_FIRST works the opposite of what it is 
supposed to do (Sangjin Lee via aw) (aw: rev 
08ae87f6ba334bd3442234e65b93495fe312cbb7)
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh


> HADOOP_USER_CLASSPATH_FIRST works the opposite of what it is supposed to do
> ---
>
> Key: HADOOP-11983
> URL: https://issues.apache.org/jira/browse/HADOOP-11983
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
> Fix For: 3.0.0
>
> Attachments: HADOOP-11983.001.patch
>
>
> The behavior of HADOOP_USER_CLASSPATH_FIRST works the opposite of what it 
> should do. If it is not set, HADOOP_CLASSPATH is prepended. If set, it is 
> appended.
> You can easily try it out by doing something like
> {noformat}
> HADOOP_CLASSPATH=/Users/alice/tmp hadoop classpath
> {noformat}
> (HADOOP_CLASSPATH should point to an existing directory)
> I think the if clause in hadoop_add_to_classpath_userpath is reversed.
> This issue seems specific to the trunk.
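
The reversed conditional can be illustrated with a stripped-down sketch (this is an assumed simplification, not the real hadoop_add_to_classpath_userpath): when HADOOP_USER_CLASSPATH_FIRST is set, the user's entries must be prepended, not appended.

```shell
# Illustrative paths, not real Hadoop defaults.
CLASSPATH="/opt/hadoop/lib/a.jar"
HADOOP_CLASSPATH="/home/alice/tmp"
HADOOP_USER_CLASSPATH_FIRST=true

# Correct orientation of the if clause:
if [ -n "${HADOOP_USER_CLASSPATH_FIRST}" ]; then
  CLASSPATH="${HADOOP_CLASSPATH}:${CLASSPATH}"   # user entries first
else
  CLASSPATH="${CLASSPATH}:${HADOOP_CLASSPATH}"   # user entries last
fi
echo "${CLASSPATH}"   # prints: /home/alice/tmp:/opt/hadoop/lib/a.jar
```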





[jira] [Commented] (HADOOP-11406) xargs -P is not portable

2015-05-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14564635#comment-14564635
 ] 

Hudson commented on HADOOP-11406:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #212 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/212/])
HADOOP-11406. xargs -P is not portable (Kengo Seki via aw) (aw: rev 
5504a261f829cf2e7b70970246bf5a55c172be84)
* 
hadoop-common-project/hadoop-common/src/main/conf/hadoop-user-functions.sh.example
* hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh
* hadoop-common-project/hadoop-common/CHANGES.txt


> xargs -P is not portable
> 
>
> Key: HADOOP-11406
> URL: https://issues.apache.org/jira/browse/HADOOP-11406
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0
> Environment: Solaris
> Illumos
> AIX
> ... likely others
>Reporter: Allen Wittenauer
>Assignee: Kengo Seki
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: HADOOP-11406.001.patch, HADOOP-11406.002.patch, 
> HADOOP-11406.002.patch
>
>
> hadoop-functions.sh uses xargs -P in the ssh handler.  -P is a GNU extension 
> and is not available on all operating systems.  We should add some detection 
> for support and perform an appropriate action.
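
One plausible shape for the requested detection (a hedged sketch, not Hadoop's actual handler): probe whether the local xargs accepts the GNU -P flag before relying on it, and record the result for a fallback path.

```shell
# Probe for GNU xargs -P (parallel) support; non-GNU xargs will error out,
# which the redirect swallows, leaving us on the fallback branch.
if echo probe | xargs -P 1 echo >/dev/null 2>&1; then
  XARGS_PARALLEL=true
else
  XARGS_PARALLEL=false
fi
echo "parallel xargs available: ${XARGS_PARALLEL}"
```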





[jira] [Commented] (HADOOP-11894) Bump the version of Apache HTrace to 3.2.0-incubating

2015-05-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14564626#comment-14564626
 ] 

Hudson commented on HADOOP-11894:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #212 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/212/])
HADOOP-11894. Bump the version of Apache HTrace to 3.2.0-incubating (Masatake 
Iwasaki via Colin P. McCabe) (cmccabe: rev 
a1140959dab3f35accbd6c66abfa14f94ff7dcec)
* hadoop-project/pom.xml
* hadoop-common-project/hadoop-common/src/site/markdown/Tracing.md
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/tracing/SpanReceiverHost.java


> Bump the version of Apache HTrace to 3.2.0-incubating
> -
>
> Key: HADOOP-11894
> URL: https://issues.apache.org/jira/browse/HADOOP-11894
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.8.0
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
> Fix For: 2.8.0
>
> Attachments: HADOOP-11894.001.patch, HADOOP-11894.002.patch, 
> HADOOP-11894.003.patch
>
>
> * update pom.xml
> * update documentation
> * replace {{addKVAnnotation(byte[] key, byte[] value)}} with 
> {{addKVAnnotation(String key, String value)}}
> * replace {{SpanReceiverHost#getUniqueLocalTraceFileName}} with 
> {{LocalFileSpanReceiver#getUniqueLocalTraceFileName}}





[jira] [Commented] (HADOOP-11142) Remove hdfs dfs reference from file system shell documentation

2015-05-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14564628#comment-14564628
 ] 

Hudson commented on HADOOP-11142:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #212 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/212/])
HADOOP-11142. Remove hdfs dfs reference from file system shell documentation 
(Kengo Seki via aw) (aw: rev cbba7d68f0c7faf2b3bab41fd1694dd626db6492)
* hadoop-common-project/hadoop-common/src/site/markdown/FileSystemShell.md
* hadoop-common-project/hadoop-common/CHANGES.txt


> Remove hdfs dfs reference from file system shell documentation
> --
>
> Key: HADOOP-11142
> URL: https://issues.apache.org/jira/browse/HADOOP-11142
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Jonathan Allen
>Assignee: Kengo Seki
>Priority: Minor
>  Labels: newbie
> Fix For: 3.0.0
>
> Attachments: HADOOP-11142.001.patch
>
>
> The File System Shell documentation references {{hdfs dfs}} in all of the 
> examples. The FS shell is not specific about the underlying file system and 
> so shouldn't reference hdfs. The correct usage should be {{hadoop fs}}





[jira] [Commented] (HADOOP-11930) test-patch in offline mode should tell maven to be in offline mode

2015-05-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14564637#comment-14564637
 ] 

Hudson commented on HADOOP-11930:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #212 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/212/])
HADOOP-11930. test-patch in offline mode should tell maven to be in offline 
mode (Sean Busbey via aw) (aw: rev 7ebe80ec12a91602b4dcdafb4e3a75def6035ad6)
* hadoop-common-project/hadoop-common/CHANGES.txt
* dev-support/test-patch.sh


> test-patch in offline mode should tell maven to be in offline mode
> --
>
> Key: HADOOP-11930
> URL: https://issues.apache.org/jira/browse/HADOOP-11930
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Fix For: 2.8.0
>
> Attachments: HADOOP-11930.1.patch, HADOOP-11930.2.patch
>
>
> When we use --offline for test-patch, we should also flag maven to be offline 
> so that it doesn't attempt to talk to the internet.





[jira] [Commented] (HADOOP-12030) test-patch should only report on newly introduced findbugs warnings.

2015-05-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14564639#comment-14564639
 ] 

Hudson commented on HADOOP-12030:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #212 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/212/])
HADOOP-12030. test-patch should only report on newly introduced findbugs 
warnings. (Sean Busbey via aw) (aw: rev 
b01d33cf862a34f9988584d3d1f3995118110b90)
* hadoop-common-project/hadoop-common/CHANGES.txt
* dev-support/test-patch.sh


> test-patch should only report on newly introduced findbugs warnings.
> 
>
> Key: HADOOP-12030
> URL: https://issues.apache.org/jira/browse/HADOOP-12030
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>  Labels: test-patch
> Fix For: 2.8.0
>
> Attachments: HADOOP-12030.1.patch, HADOOP-12030.2.patch
>
>
> findbugs is currently reporting the total number of findbugs warnings for 
> touched modules rather than just newly introduced bugs.
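
"Only newly introduced warnings" amounts to a set difference between the branch's warning list and the patched build's. A minimal sketch under assumed file names (the real test-patch works on findbugs XML, not plain lists):

```shell
# Two sorted warning lists: before and after applying the patch (paths and
# warning names are illustrative).
printf 'WarnA\nWarnB\n'        > /tmp/branch_findbugs.txt
printf 'WarnA\nWarnB\nWarnC\n' > /tmp/patch_findbugs.txt

# comm -13 keeps only lines unique to the second (patched) file.
NEW_WARNINGS=$(comm -13 /tmp/branch_findbugs.txt /tmp/patch_findbugs.txt)
echo "newly introduced: ${NEW_WARNINGS}"   # prints: newly introduced: WarnC
```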





[jira] [Commented] (HADOOP-7947) Validate XMLs if a relevant tool is available, when using scripts

2015-05-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14564640#comment-14564640
 ] 

Hudson commented on HADOOP-7947:


SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #212 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/212/])
HADOOP-7947. Validate XMLs if a relevant tool is available, when using scripts 
(Kengo Seki via aw) (aw: rev 5df1fadf874f3f0176f6b36b8ff7317edd63770f)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ConfTest.java
* hadoop-common-project/hadoop-common/src/main/bin/hadoop
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestConfTest.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> Validate XMLs if a relevant tool is available, when using scripts
> -
>
> Key: HADOOP-7947
> URL: https://issues.apache.org/jira/browse/HADOOP-7947
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: scripts
>Affects Versions: 2.7.0
>Reporter: Harsh J
>Assignee: Kengo Seki
>  Labels: newbie
> Fix For: 3.0.0
>
> Attachments: HADOOP-7947.001.patch, HADOOP-7947.002.patch, 
> HADOOP-7947.003.patch, HADOOP-7947.004.patch, HADOOP-7947.005.patch
>
>
> Given that we are locked down to using only XML for configuration and most of 
> the administrators need to manage it by themselves (unless a tool that 
> manages for you is used), it would be good to also validate the provided 
> config XML (*-site.xml) files with a tool like {{xmllint}} or maybe Xerces 
> somehow, when running a command or (at least) when starting up daemons.
> We should use this only if a relevant tool is available, and optionally be 
> silent if the env. requests.
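
The requested behavior can be sketched as follows (an assumed simplification, not the shipped ConfTest/script logic): validate a *-site.xml only when a validator such as xmllint is on PATH, and quietly skip otherwise.

```shell
# Write a small, well-formed config file to validate (contents illustrative).
cat > /tmp/example-site.xml <<'EOF'
<?xml version="1.0"?>
<configuration>
  <property><name>fs.defaultFS</name><value>hdfs://nn:8020</value></property>
</configuration>
EOF

# Validate only if the tool exists; stay silent about a missing tool.
if command -v xmllint >/dev/null 2>&1; then
  xmllint --noout /tmp/example-site.xml && RESULT=valid
else
  RESULT=skipped
fi
echo "validation: ${RESULT}"
```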





[jira] [Commented] (HADOOP-12022) fix site -Pdocs -Pdist in hadoop-project-dist; cleanout remaining forrest bits

2015-05-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14564638#comment-14564638
 ] 

Hudson commented on HADOOP-12022:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #212 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/212/])
 HADOOP-12022. fix site -Pdocs -Pdist in hadoop-project-dist; cleanout 
remaining forrest bits (aw) (aw: rev ae1454342064c71f414d20ad0885e60a335c7420)
* 
hadoop-common-project/hadoop-common/src/main/docs/changes/ChangesSimpleStyle.css
* 
hadoop-common-project/hadoop-common/src/main/docs/changes/ChangesFancyStyle.css
* hadoop-common-project/hadoop-common/src/main/docs/changes/changes2html.pl
* hadoop-common-project/hadoop-common/pom.xml
* hadoop-hdfs-project/hadoop-hdfs/pom.xml
* 
hadoop-common-project/hadoop-common/src/main/docs/src/documentation/resources/images/hdfsarchitecture.gif
* 
hadoop-common-project/hadoop-common/src/main/docs/src/documentation/resources/images/favicon.ico
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/docs/src/documentation/conf/cli.xconf
* 
hadoop-common-project/hadoop-common/src/main/docs/src/documentation/resources/images/hdfsarchitecture.odg
* 
hadoop-common-project/hadoop-common/src/main/docs/src/documentation/resources/images/hdfsarchitecture.png
* 
hadoop-common-project/hadoop-common/src/main/docs/src/documentation/resources/images/hadoop-logo-big.jpg
* hadoop-project-dist/pom.xml
* 
hadoop-common-project/hadoop-common/src/main/docs/src/documentation/resources/images/architecture.gif
* 
hadoop-common-project/hadoop-common/src/main/docs/src/documentation/resources/images/hdfsdatanodes.gif
* 
hadoop-common-project/hadoop-common/src/main/docs/src/documentation/classes/CatalogManager.properties
* hadoop-common-project/hadoop-common/src/main/docs/releasenotes.html
* 
hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/tabs.xml
* 
hadoop-common-project/hadoop-common/src/main/docs/src/documentation/resources/images/hdfsdatanodes.odg
* 
hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/index.xml
* 
hadoop-common-project/hadoop-common/src/main/docs/src/documentation/resources/images/core-logo.gif
* 
hadoop-common-project/hadoop-common/src/main/docs/src/documentation/skinconf.xml
* 
hadoop-common-project/hadoop-common/src/main/docs/src/documentation/resources/images/common-logo.jpg
* 
hadoop-common-project/hadoop-common/src/main/docs/src/documentation/resources/images/hdfsdatanodes.png
* 
hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/site.xml
* 
hadoop-common-project/hadoop-common/src/main/docs/src/documentation/resources/images/hadoop-logo.jpg
* hadoop-common-project/hadoop-common/src/main/docs/status.xml
* hadoop-common-project/hadoop-common/src/main/docs/src/documentation/README.txt


> fix site -Pdocs -Pdist in hadoop-project-dist; cleanout remaining forrest bits
> --
>
> Key: HADOOP-12022
> URL: https://issues.apache.org/jira/browse/HADOOP-12022
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Fix For: 3.0.0
>
> Attachments: HADOOP-12011.00.patch, HADOOP-12011.01.patch, 
> HADOOP-12011.02.patch
>
>
> Between HDFS-8350 and probably other changes, it would appear site -Pdocs 
> -Pdist no longer works and is breaking the nightly build.





[jira] [Commented] (HADOOP-12035) shellcheck plugin displays a wrong version potentially

2015-05-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14564634#comment-14564634
 ] 

Hudson commented on HADOOP-12035:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #212 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/212/])
HADOOP-12035. shellcheck plugin displays a wrong version potentially (Kengo 
Seki via aw) (aw: rev f1cea9c6bf68f03f7136bebc245efbc0a95738e4)
* hadoop-common-project/hadoop-common/CHANGES.txt
* dev-support/test-patch.d/shellcheck.sh


> shellcheck plugin displays a wrong version potentially
> --
>
> Key: HADOOP-12035
> URL: https://issues.apache.org/jira/browse/HADOOP-12035
> Project: Hadoop Common
>  Issue Type: Test
>  Components: build
>Reporter: Kengo Seki
>Assignee: Kengo Seki
>Priority: Trivial
>  Labels: newbie, test-patch
> Fix For: 2.8.0
>
> Attachments: HADOOP-12035.001.patch
>
>
> In dev-support/test-patch.d/shellcheck.sh:
> {code}
> SHELLCHECK_VERSION=$(shellcheck --version | ${GREP} version: | ${AWK} '{print 
> $NF}')
> {code}
> it should be 
> {code}
> SHELLCHECK_VERSION=$(${SHELLCHECK} --version | …)
> {code}





[jira] [Commented] (HADOOP-11983) HADOOP_USER_CLASSPATH_FIRST works the opposite of what it is supposed to do

2015-05-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14564623#comment-14564623
 ] 

Hudson commented on HADOOP-11983:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #212 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/212/])
HADOOP-11983. HADOOP_USER_CLASSPATH_FIRST works the opposite of what it is 
supposed to do (Sangjin Lee via aw) (aw: rev 
08ae87f6ba334bd3442234e65b93495fe312cbb7)
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh


> HADOOP_USER_CLASSPATH_FIRST works the opposite of what it is supposed to do
> ---
>
> Key: HADOOP-11983
> URL: https://issues.apache.org/jira/browse/HADOOP-11983
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
> Fix For: 3.0.0
>
> Attachments: HADOOP-11983.001.patch
>
>
> The behavior of HADOOP_USER_CLASSPATH_FIRST works the opposite of what it 
> should do. If it is not set, HADOOP_CLASSPATH is prepended. If set, it is 
> appended.
> You can easily try it out by doing something like
> {noformat}
> HADOOP_CLASSPATH=/Users/alice/tmp hadoop classpath
> {noformat}
> (HADOOP_CLASSPATH should point to an existing directory)
> I think the if clause in hadoop_add_to_classpath_userpath is reversed.
> This issue seems specific to the trunk.





[jira] [Commented] (HADOOP-11959) WASB should configure client side socket timeout in storage client blob request options

2015-05-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14564627#comment-14564627
 ] 

Hudson commented on HADOOP-11959:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #212 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/212/])
HADOOP-11959. WASB should configure client side socket timeout in storage 
client blob request options. Contributed by Ivan Mitic. (cnauroth: rev 
94e7d49a6dab7e7f4e873dcca67e7fcc98e7e1f8)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/TestAzureFileSystemErrorConditions.java
* 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/StorageInterfaceImpl.java
* 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/TestBlobDataValidation.java
* 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/AzureNativeFileSystemStore.java
* 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/MockStorageInterface.java
* 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/StorageInterface.java
* hadoop-project/pom.xml


> WASB should configure client side socket timeout in storage client blob 
> request options
> ---
>
> Key: HADOOP-11959
> URL: https://issues.apache.org/jira/browse/HADOOP-11959
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Reporter: Ivan Mitic
>Assignee: Ivan Mitic
> Fix For: 2.8.0
>
> Attachments: HADOOP-11959.2.patch, HADOOP-11959.patch
>
>
> On clusters/jobs where {{mapred.task.timeout}} is set to a larger value, we 
> noticed that tasks can sometimes get stuck on the below stack.
> {code}
> Thread 1: (state = IN_NATIVE)
> - java.net.SocketInputStream.socketRead0(java.io.FileDescriptor, byte[], int, 
> int, int) @bci=0 (Interpreted frame)
> - java.net.SocketInputStream.read(byte[], int, int, int) @bci=87, line=152 
> (Interpreted frame)
> - java.net.SocketInputStream.read(byte[], int, int) @bci=11, line=122 
> (Interpreted frame)
> - java.io.BufferedInputStream.fill() @bci=175, line=235 (Interpreted frame)
> - java.io.BufferedInputStream.read1(byte[], int, int) @bci=44, line=275 
> (Interpreted frame)
> - java.io.BufferedInputStream.read(byte[], int, int) @bci=49, line=334 
> (Interpreted frame)
> - sun.net.www.MeteredStream.read(byte[], int, int) @bci=16, line=134 
> (Interpreted frame)
> - java.io.FilterInputStream.read(byte[], int, int) @bci=7, line=133 
> (Interpreted frame)
> - sun.net.www.protocol.http.HttpURLConnection$HttpInputStream.read(byte[], 
> int, int) @bci=4, line=3053 (Interpreted frame)
> - com.microsoft.azure.storage.core.NetworkInputStream.read(byte[], int, int) 
> @bci=7, line=49 (Interpreted frame)
> - 
> com.microsoft.azure.storage.blob.CloudBlob$10.postProcessResponse(java.net.HttpURLConnection,
>  com.microsoft.azure.storage.blob.CloudBlob, com.microsoft.azure
> .storage.blob.CloudBlobClient, com.microsoft.azure.storage.OperationContext, 
> java.lang.Integer) @bci=204, line=1691 (Interpreted frame)
> - 
> com.microsoft.azure.storage.blob.CloudBlob$10.postProcessResponse(java.net.HttpURLConnection,
>  java.lang.Object, java.lang.Object, com.microsoft.azure.storage
> .OperationContext, java.lang.Object) @bci=17, line=1613 (Interpreted frame)
> - 
> com.microsoft.azure.storage.core.ExecutionEngine.executeWithRetry(java.lang.Object,
>  java.lang.Object, com.microsoft.azure.storage.core.StorageRequest, com.mi
> crosoft.azure.storage.RetryPolicyFactory, 
> com.microsoft.azure.storage.OperationContext) @bci=352, line=148 (Interpreted 
> frame)
> - com.microsoft.azure.storage.blob.CloudBlob.downloadRangeInternal(long, 
> java.lang.Long, byte[], int, com.microsoft.azure.storage.AccessCondition, 
> com.microsof
> t.azure.storage.blob.BlobRequestOptions, 
> com.microsoft.azure.storage.OperationContext) @bci=131, line=1468 
> (Interpreted frame)
> - com.microsoft.azure.storage.blob.BlobInputStream.dispatchRead(int) @bci=31, 
> line=255 (Interpreted frame)
> - com.microsoft.azure.storage.blob.BlobInputStream.readInternal(byte[], int, 
> int) @bci=52, line=448 (Interpreted frame)
> - com.microsoft.azure.storage.blob.BlobInputStream.read(byte[], int, int) 
> @bci=28, line=420 (Interpreted frame)
> - java.io.BufferedInputStream.read1(byte[], int, int) @bci=39, line=273 
> (Interpreted frame)
> - java.io.BufferedInputStream.read(byte[], int, int) @bci=49, line=334 
> (Interpreted frame)
> - java.io.DataInputStream.read(byte[], int, int) @bci=7, line=149 
> (Interpreted frame)
> - 
> org.apache.hadoop.fs.azure.NativeAzureFileSystem$NativeAzureFsInputStream.read(byte[],
>  int, int) @bci=10, line=734 (Interpreted frame)
> - java.io.BufferedInputStream.read1(byte[], int, int) @bci=39, line=

[jira] [Commented] (HADOOP-12004) test-patch breaks with reexec in certain situations

2015-05-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14564629#comment-14564629
 ] 

Hudson commented on HADOOP-12004:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #212 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/212/])
HADOOP-12004. test-patch breaks with reexec in certain situations (Sean Busbey 
via aw) (aw: rev 87c26f8b7d949f85fd3dc511f592112e1504e0a2)
* dev-support/test-patch.sh
* hadoop-common-project/hadoop-common/CHANGES.txt


> test-patch breaks with reexec in certain situations
> ---
>
> Key: HADOOP-12004
> URL: https://issues.apache.org/jira/browse/HADOOP-12004
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Allen Wittenauer
>Assignee: Sean Busbey
>Priority: Critical
> Fix For: 2.8.0
>
> Attachments: HADOOP-12004.1.patch
>
>
> Looks like HADOOP-11911 forgot the equal sign:
> {code}
>   exec "${PATCH_DIR}/dev-support-test/test-patch.sh" \
> --reexec \
> --branch "${PATCH_BRANCH}" \
> --patch-dir="${PATCH_DIR}" \
>   "${USER_PARAMS[@]}"
> {code}
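The missing `=` matters because test-patch parses its options with a `case` statement that matches the `--patch-dir=VALUE` form. A toy reproduction follows; the parser below is a simplification of the real one, and the sample directory is made up:

```shell
# Sketch: a case-pattern parser like test-patch's recognizes only
# --patch-dir=VALUE; passing "--patch-dir" and its value as two
# separate words matches neither word, so the value is silently lost.
parse_args() {
  PATCH_DIR=""
  for arg in "$@"; do
    case ${arg} in
      --patch-dir=*)
        PATCH_DIR=${arg#*=}
        ;;
    esac
  done
}

parse_args --patch-dir=/tmp/test-patch-1234
good="${PATCH_DIR}"   # /tmp/test-patch-1234

parse_args --patch-dir /tmp/test-patch-1234
bad="${PATCH_DIR}"    # empty: neither word matched the pattern
echo "good=${good} bad=${bad}"
```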





[jira] [Commented] (HADOOP-11934) Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop

2015-05-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14564624#comment-14564624
 ] 

Hudson commented on HADOOP-11934:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #212 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/212/])
HADOOP-11934. Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite 
loop. Contributed by Larry McCay. (cnauroth: rev 
860b8373c3a851386b8cd2d4265dd35e5aabc941)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/AbstractJavaKeyStoreProvider.java
* 
hadoop-common-project/hadoop-common/src/main/resources/META-INF/services/org.apache.hadoop.security.alias.CredentialProviderFactory
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/LocalJavaKeyStoreProvider.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/JavaKeyStoreProvider.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/alias/TestCredentialProviderFactory.java


> Use of JavaKeyStoreProvider in LdapGroupsMapping causes infinite loop
> -
>
> Key: HADOOP-11934
> URL: https://issues.apache.org/jira/browse/HADOOP-11934
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Mike Yoder
>Assignee: Larry McCay
>Priority: Blocker
> Fix For: 2.7.1
>
> Attachments: HADOOP-11934-11.patch, HADOOP-11934.001.patch, 
> HADOOP-11934.002.patch, HADOOP-11934.003.patch, HADOOP-11934.004.patch, 
> HADOOP-11934.005.patch, HADOOP-11934.006.patch, HADOOP-11934.007.patch, 
> HADOOP-11934.008.patch, HADOOP-11934.009.patch, HADOOP-11934.010.patch, 
> HADOOP-11934.012.patch, HADOOP-11934.013.patch
>
>
> I was attempting to use the LdapGroupsMapping code and the 
> JavaKeyStoreProvider at the same time, and hit a really interesting, yet 
> fatal, issue.  The code goes into what ought to have been an infinite loop, 
> were it not for it overflowing the stack and Java ending the loop.  Here is a 
> snippet of the stack; my annotations are at the bottom.
> {noformat}
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
>   at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider.(JavaKeyStoreProvider.java:88)
>   at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider.(JavaKeyStoreProvider.java:65)
>   at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:291)
>   at 
> org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:58)
>   at 
> org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:1863)
>   at 
> org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1843)
>   at 
> org.apache.hadoop.security.LdapGroupsMapping.getPassword(LdapGroupsMapping.java:386)
>   at 
> org.apache.hadoop.security.LdapGroupsMapping.setConf(LdapGroupsMapping.java:349)
>   at 
> org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
>   at 
> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
>   at org.apache.hadoop.security.Groups.(Groups.java:70)
>   at org.apache.hadoop.security.Groups.(Groups.java:66)
>   at 
> org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
>   at 
> org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:283)
>   at 
> org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:260)
>   at 
> org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:804)
>   at 
> org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:774)
>   at 
> org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:647)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache$Key.(FileSystem.java:2753)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache$Key.(FileSystem.java:2745)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2611)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
>   at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider.(JavaKeyStoreProvider.java:88)
>   at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider.(JavaKeyStoreProvider.java:65)
>   at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStor

[jira] [Commented] (HADOOP-11694) Über-jira: S3a stabilisation phase II

2015-05-29 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14564617#comment-14564617
 ] 

Steve Loughran commented on HADOOP-11694:
-

Link to HADOOP-12020; support for reduced-replication

> Über-jira: S3a stabilisation phase II
> -
>
> Key: HADOOP-11694
> URL: https://issues.apache.org/jira/browse/HADOOP-11694
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.7.0
>Reporter: Steve Loughran
>
> HADOOP-11571 covered the core s3a bugs surfacing in Hadoop-2.6 & other 
> enhancements to improve S3 (performance, proxy, custom endpoints)
> This JIRA covers post-2.7 issues and enhancements.





[jira] [Commented] (HADOOP-12020) Support AWS S3 reduced redundancy storage class

2015-05-29 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14564616#comment-14564616
 ] 

Steve Loughran commented on HADOOP-12020:
-

we're not going to add any new features to S3n; we're at too much risk of 
breaking something else.

For s3a, we'd welcome a patch, *with tests*.

> Support AWS S3 reduced redundancy storage class
> ---
>
> Key: HADOOP-12020
> URL: https://issues.apache.org/jira/browse/HADOOP-12020
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.7.0
> Environment: Hadoop on AWS
>Reporter: Yann Landrin-Schweitzer
>
> Amazon S3 uses, by default, the NORMAL_STORAGE class for s3 objects.
> This offers, according to Amazon's material, 99.999999999% reliability.
> For many applications, however, the 99.99% reliability offered by the 
> REDUCED_REDUNDANCY storage class is amply sufficient, and comes with a 
> significant cost saving.
> HDFS, when using the legacy s3n protocol, or the new s3a scheme, should 
> support overriding the default storage class of created s3 objects so that 
> users can take advantage of this cost benefit.
> This would require minor changes of the s3n and s3a drivers, using 
> a configuration property fs.s3n.storage.class to override the default storage 
> when desirable. 
> This override could be implemented in Jets3tNativeFileSystemStore with:
>   S3Object object = new S3Object(key);
>   ...
>   if(storageClass!=null)  object.setStorageClass(storageClass);
> It would take a more complex form in s3a, e.g. setting:
> InitiateMultipartUploadRequest initiateMPURequest =
> new InitiateMultipartUploadRequest(bucket, key, om);
> if(storageClass !=null ) {
> initiateMPURequest = 
> initiateMPURequest.withStorageClass(storageClass);
> }
> and similar statements in various places.





[jira] [Updated] (HADOOP-11119) TrashPolicyDefault init pushes messages to command line

2015-05-29 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HADOOP-11119:
--
Labels:   (was: BB2015-05-TBR)

> TrashPolicyDefault init pushes messages to command line
> ---
>
> Key: HADOOP-11119
> URL: https://issues.apache.org/jira/browse/HADOOP-11119
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Allen Wittenauer
>Assignee: Brahma Reddy Battula
>Priority: Minor
> Attachments: HADOOP-11119-002.patch, HADOOP-11119.patch
>
>
> During a fresh install of trunk:
> {code}
> aw-mbp-work:hadoop-3.0.0-SNAPSHOT aw$ bin/hadoop fs -put /etc/hosts /tmp
> aw-mbp-work:hadoop-3.0.0-SNAPSHOT aw$ bin/hadoop fs -rm /tmp/hosts
> 14/09/23 13:05:46 INFO fs.TrashPolicyDefault: Namenode trash configuration: 
> Deletion interval = 0 minutes, Emptier interval = 0 minutes.
> Deleted /tmp/hosts
> {code}
> The info message for the Namenode trash configuration isn't very useful.





[jira] [Commented] (HADOOP-11119) TrashPolicyDefault init pushes messages to command line

2015-05-29 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14564601#comment-14564601
 ] 

Brahma Reddy Battula commented on HADOOP-11119:
---

Thanks [~vinayrpet]

> TrashPolicyDefault init pushes messages to command line
> ---
>
> Key: HADOOP-11119
> URL: https://issues.apache.org/jira/browse/HADOOP-11119
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Allen Wittenauer
>Assignee: Brahma Reddy Battula
>Priority: Minor
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-11119-002.patch, HADOOP-11119.patch
>
>
> During a fresh install of trunk:
> {code}
> aw-mbp-work:hadoop-3.0.0-SNAPSHOT aw$ bin/hadoop fs -put /etc/hosts /tmp
> aw-mbp-work:hadoop-3.0.0-SNAPSHOT aw$ bin/hadoop fs -rm /tmp/hosts
> 14/09/23 13:05:46 INFO fs.TrashPolicyDefault: Namenode trash configuration: 
> Deletion interval = 0 minutes, Emptier interval = 0 minutes.
> Deleted /tmp/hosts
> {code}
> The info message for the Namenode trash configuration isn't very useful.





[jira] [Commented] (HADOOP-11979) optionally have test-patch make a partial -1 before long run checks

2015-05-29 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14564600#comment-14564600
 ] 

Steve Loughran commented on HADOOP-11979:
-

this is a nice idea. we could go one further by actually splitting the fast 
from the slow tests: anything with a minicluster setup tagged as slow (JUnit 
4 categories), and have them skipped on the first round.

> optionally have test-patch make a partial -1 before long run checks
> ---
>
> Key: HADOOP-11979
> URL: https://issues.apache.org/jira/browse/HADOOP-11979
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Sean Busbey
>Priority: Minor
>
> If test-patch has -1s for things that are relatively fast (e.g. javac, 
> checkstyle, whitespace, etc), then it would be nice if it would post a "-1s 
> so far" message to jira before running the long-run phase (i.e. test) similar 
> to what it does now for reexec of dev-support patches.





[jira] [Updated] (HADOOP-12042) Users may see TrashPolicy if hdfs dfs -rm is run

2015-05-29 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HADOOP-12042:
---
   Resolution: Fixed
Fix Version/s: 2.8.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Committed to trunk and branch-2.
Thanks [~andreina]

> Users may see TrashPolicy if hdfs dfs -rm is run
> 
>
> Key: HADOOP-12042
> URL: https://issues.apache.org/jira/browse/HADOOP-12042
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: J.Andreina
> Fix For: 2.8.0
>
> Attachments: HDFS-6775.1.patch, HDFS-6775.2.patch
>
>
> Doing 'hdfs dfs -rm file' generates an extra log message on the console:
> {code}
> 14/07/29 15:18:56 INFO fs.TrashPolicyDefault: Namenode trash configuration: 
> Deletion interval = 0 minutes, Emptier interval = 0 minutes.
> {code}
> This shouldn't be seen by users.




