[jira] [Created] (HADOOP-12872) Fix formatting in ServiceLevelAuth.md

2016-03-02 Thread Akira AJISAKA (JIRA)
Akira AJISAKA created HADOOP-12872:
--

 Summary: Fix formatting in ServiceLevelAuth.md
 Key: HADOOP-12872
 URL: https://issues.apache.org/jira/browse/HADOOP-12872
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.7.2
Reporter: Akira AJISAKA
Priority: Trivial


{noformat} `security.client.protocol.hosts>> will be 
<<

[jira] [Updated] (HADOOP-12855) Add option to disable JVMPauseMonitor across services

2016-03-02 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-12855:

Attachment: HADOOP-12855-002.patch

Patch 002:
* Fix checkstyle warnings

> Add option to disable JVMPauseMonitor across services
> -
>
> Key: HADOOP-12855
> URL: https://issues.apache.org/jira/browse/HADOOP-12855
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: performance
>Affects Versions: 2.8.0
> Environment: JVMs with miniHDFS and miniYarn clusters
>Reporter: Steve Loughran
>Assignee: John Zhuge
> Attachments: HADOOP-12855-001.patch, HADOOP-12855-002.patch
>
>
> Now that the YARN and HDFS services automatically start a JVM pause monitor, 
> starting up the mini HDFS and YARN clusters with a history server spins off 
> 5+ threads, all looking for JVM pauses, and all printing things out when one 
> happens.
> We do not need these monitors in minicluster testing; they merely add load 
> and noise to tests.
> Rather than retrofit new options everywhere, how about a 
> "jvm.pause.monitor.enabled" flag (default true) which, when true, starts the 
> monitor thread.
> That way the existing code is unchanged: there is always a JVM pause monitor 
> for the various services; it just isn't spinning up threads when disabled.
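A minimal, self-contained sketch of the proposed flag. The key name comes from the issue text; the stub monitor class stands in for Hadoop's {{JvmPauseMonitor}} and the surrounding service class is hypothetical:

```java
import java.util.HashMap;
import java.util.Map;

// Stand-in for org.apache.hadoop.util.JvmPauseMonitor; only records whether
// its detection thread was started.
class PauseMonitorStub {
    private Thread monitorThread;
    boolean running() { return monitorThread != null; }
    void start() {
        monitorThread = new Thread(() -> { /* GC-pause detection loop elided */ });
    }
}

class PauseMonitorService {
    // flag name proposed in the issue; default true preserves current behavior
    static final String KEY = "jvm.pause.monitor.enabled";
    final PauseMonitorStub monitor = new PauseMonitorStub();

    void serviceStart(Map<String, String> conf) {
        // The monitor object always exists, so callers are unchanged;
        // only the thread start is gated on the flag.
        if (Boolean.parseBoolean(conf.getOrDefault(KEY, "true"))) {
            monitor.start();
        }
    }
}
```

A minicluster test would simply set the key to "false" in its configuration before service start.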



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12860) Expand section "Data Encryption on HTTP" in SecureMode documentation

2016-03-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15177415#comment-15177415
 ] 

Hadoop QA commented on HADOOP-12860:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
25s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 9m 5s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12791119/HADOOP-12860.002.patch
 |
| JIRA Issue | HADOOP-12860 |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux f67511569783 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 27941a1 |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/8772/console |
| Powered by | Apache Yetus 0.3.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Expand section "Data Encryption on HTTP" in SecureMode documentation
> 
>
> Key: HADOOP-12860
> URL: https://issues.apache.org/jira/browse/HADOOP-12860
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.2
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
>  Labels: documentation
> Attachments: HADOOP-12860.001.patch, HADOOP-12860.002.patch
>
>
> Section {{Data Encryption on HTTP}} in _Hadoop in Secure Mode_ should be 
> expanded to cover the configuration needed to enable SSL for the web UIs of 
> HDFS/YARN/MapReduce daemons.





[jira] [Updated] (HADOOP-12860) Expand section "Data Encryption on HTTP" in SecureMode documentation

2016-03-02 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12860:
-
Attachment: HADOOP-12860.002.patch

Rev02: more updates. At some point, {{dfs.https.port}} was removed. Also added 
the https port for journal nodes.

> Expand section "Data Encryption on HTTP" in SecureMode documentation
> 
>
> Key: HADOOP-12860
> URL: https://issues.apache.org/jira/browse/HADOOP-12860
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.2
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
>  Labels: documentation
> Attachments: HADOOP-12860.001.patch, HADOOP-12860.002.patch
>
>
> Section {{Data Encryption on HTTP}} in _Hadoop in Secure Mode_ should be 
> expanded to cover the configuration needed to enable SSL for the web UIs of 
> HDFS/YARN/MapReduce daemons.





[jira] [Created] (HADOOP-12871) Fix dead link to NativeLibraries.html in CommandsManual.md

2016-03-02 Thread Akira AJISAKA (JIRA)
Akira AJISAKA created HADOOP-12871:
--

 Summary: Fix dead link to NativeLibraries.html in CommandsManual.md
 Key: HADOOP-12871
 URL: https://issues.apache.org/jira/browse/HADOOP-12871
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.7.2
Reporter: Akira AJISAKA
Priority: Minor


{noformat:title=CommandsManual.md}
This command checks the availability of the Hadoop native code. See 
[\#NativeLibraries.html](#NativeLibraries.html) for more information. By 
default, this command only checks the availability of libhadoop.
{noformat}
The link should be fixed to {{\[Native Libraries\](./NativeLibraries.html)}}.





[jira] [Commented] (HADOOP-12869) CryptoInputStream#read() may return incorrect result

2016-03-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15177396#comment-15177396
 ] 

Hadoop QA commented on HADOOP-12869:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
37s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 52s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 43s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 3s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
32s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
40s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 49s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 49s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 42s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 42s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 21s 
{color} | {color:red} hadoop-common-project/hadoop-common: patch generated 2 
new + 32 unchanged - 0 fixed = 34 total (was 32) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
50s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 6m 48s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_72. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 11s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 58m 26s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12791107/HADOOP-12869.002.patch
 |
| JIRA Issue | HADOOP-12869 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 84991624b8ac 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | 

[jira] [Created] (HADOOP-12870) Fix typo admininistration in CommandsManual.md

2016-03-02 Thread Akira AJISAKA (JIRA)
Akira AJISAKA created HADOOP-12870:
--

 Summary: Fix typo admininistration in CommandsManual.md
 Key: HADOOP-12870
 URL: https://issues.apache.org/jira/browse/HADOOP-12870
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.7.2
Reporter: Akira AJISAKA
Priority: Minor


{noformat:title=CommandsManual.md}
All of these commands are executed from the `hadoop` shell command. They have 
been broken up into [User Commands](#User_Commands) and [Admininistration 
Commands](#Admininistration_Commands).
{noformat}
"Admininistration" should be "Administration".





[jira] [Commented] (HADOOP-12857) Rework hadoop-tools-dist

2016-03-02 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15177353#comment-15177353
 ] 

Allen Wittenauer commented on HADOOP-12857:
---

OK, so I'm not imagining it.  Thanks!

FYI, -00 has a dumb bug that you won't see if you run the optional bits from 
the HADOOP_PREFIX dir. Grr.  (The profiles aren't getting built with the 
HADOOP_TOOLS_HOME in the path.)

I'll wait to see what yetus has to say before posting a new patch though.  I'm 
sure there are whitespace and other issues lol.

> Rework hadoop-tools-dist
> 
>
> Key: HADOOP-12857
> URL: https://issues.apache.org/jira/browse/HADOOP-12857
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-12857.00.patch
>
>
> As hadoop-tools grows bigger and bigger, it's becoming evident that having a 
> single directory that gets sucked in is starting to become a big burden as 
> the number of tools grows.  Let's rework this to be smarter.





[jira] [Commented] (HADOOP-12857) Rework hadoop-tools-dist

2016-03-02 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15177339#comment-15177339
 ] 

Chris Nauroth commented on HADOOP-12857:


bq. Why does the hdfs haadmin command require hadoop-tools in the classpath? Is 
this actually a long standing bug/misunderstanding of where toolrunner comes 
from?

I looked through revision history, and it appears that it was always this way, 
right from the old HDFS-1623 feature branch.  I can't think of any good reason 
for it to do this, so I think it's a bug.

> Rework hadoop-tools-dist
> 
>
> Key: HADOOP-12857
> URL: https://issues.apache.org/jira/browse/HADOOP-12857
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-12857.00.patch
>
>
> As hadoop-tools grows bigger and bigger, it's becoming evident that having a 
> single directory that gets sucked in is starting to become a big burden as 
> the number of tools grows.  Let's rework this to be smarter.





[jira] [Updated] (HADOOP-12857) Rework hadoop-tools-dist

2016-03-02 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12857:
--
Hadoop Flags: Incompatible change
Release Note: 
* Turning on optional components from the tools directory can now be done via 
hadoop-env.sh without impacting the various user-facing CLASSPATH variables.
* The tools directory is no longer blindly pulled into the classpath of the 
utilities that previously depended on it.

> Rework hadoop-tools-dist
> 
>
> Key: HADOOP-12857
> URL: https://issues.apache.org/jira/browse/HADOOP-12857
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-12857.00.patch
>
>
> As hadoop-tools grows bigger and bigger, it's becoming evident that having a 
> single directory that gets sucked in is starting to become a big burden as 
> the number of tools grows.  Let's rework this to be smarter.





[jira] [Updated] (HADOOP-12869) CryptoInputStream#read() may return incorrect result

2016-03-02 Thread Dapeng Sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dapeng Sun updated HADOOP-12869:

Attachment: HADOOP-12869.002.patch

Updated the patch to simplify the logic.

> CryptoInputStream#read() may return incorrect result
> 
>
> Key: HADOOP-12869
> URL: https://issues.apache.org/jira/browse/HADOOP-12869
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.2, 3.0.0
>Reporter: Dapeng Sun
>Assignee: Dapeng Sun
>Priority: Critical
> Attachments: HADOOP-12869.001.patch, HADOOP-12869.002.patch
>
>
> Here is the comment of {{FilterInputStream#read()}}:
> {noformat}
> /**
>  * Reads the next byte of data from this input stream. The value
>  * byte is returned as an int in the range
>  * 0 to 255. If no byte is available
>  * because the end of the stream has been reached, the value
>  * -1 is returned. This method blocks until input data
>  * is available, the end of the stream is detected, or an exception
>  * is thrown.
>  * 
>  * This method
>  * simply performs in.read() and returns the result.
>  *
>  * @return the next byte of data, or -1 if the end of the
>  * stream is reached.
>  * @exception  IOException  if an I/O error occurs.
>  * @seejava.io.FilterInputStream#in
>  */
> public int read() throws IOException {
> return in.read();
> }
> {noformat}
> Here is the implementation of {{CryptoInputStream#read()}} in Hadoop Common:
> {noformat}
> @Override
> public int read() throws IOException {
>   return (read(oneByteBuf, 0, 1) == -1) ? -1 : (oneByteBuf[0] & 0xff);
> 
> }
> {noformat}
> The return value of {{read(oneByteBuf, 0, 1)}} may be 1, -1, or 0:
> For {{1}}: we should return the content of {{oneByteBuf}}.
> For {{-1}}: we should return {{-1}} to signal the end of the stream.
> For {{0}}: no decrypted data came back and the end of the stream has not been 
> reached, so we should continue decrypting the stream. But the current 
> implementation returns {{0}} from {{read()}}, which incorrectly claims the 
> next decrypted byte is {{0}}.
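A self-contained sketch of the kind of fix described above: loop while the vectored read returns 0, and only map -1 to end of stream. The names here are illustrative, not the actual patch:

```java
import java.io.IOException;

// Illustrative single-byte read that treats 0 from the vectored read as
// "no decrypted data yet" rather than as a data byte; only -1 means EOF.
class SingleByteRead {
    interface ByteSource {
        int read(byte[] buf, int off, int len) throws IOException;
    }

    static int readOne(ByteSource src) throws IOException {
        byte[] oneByteBuf = new byte[1];
        int n = 0;
        while (n == 0) {               // retry until we get data or EOF
            n = src.read(oneByteBuf, 0, 1);
        }
        // mask to 0..255 so negative byte values are not confused with EOF
        return (n == -1) ? -1 : (oneByteBuf[0] & 0xff);
    }
}
```

The {{& 0xff}} mask matters for the same reason the loop does: a byte value like 200 is negative as a Java {{byte}} and must not be returned raw.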





[jira] [Updated] (HADOOP-12857) Rework hadoop-tools-dist

2016-03-02 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12857:
--
Status: Patch Available  (was: Open)

> Rework hadoop-tools-dist
> 
>
> Key: HADOOP-12857
> URL: https://issues.apache.org/jira/browse/HADOOP-12857
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-12857.00.patch
>
>
> As hadoop-tools grows bigger and bigger, it's becoming evident that having a 
> single directory that gets sucked in is starting to become a big burden as 
> the number of tools grows.  Let's rework this to be smarter.





[jira] [Commented] (HADOOP-12857) Rework hadoop-tools-dist

2016-03-02 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15177293#comment-15177293
 ] 

Allen Wittenauer commented on HADOOP-12857:
---

-00:

 tl;dr:  HADOOP_TOOLS_PATH is no longer used in the codebase

* removed toolspath from haadmin because I can't see what it needs from there 
and mvn dependencies don't list anything either
* added various HADOOP_TOOLS_* vars to locate content, similar to what is 
present for the other parts of Hadoop
* added those entries to the various envvars subcommands
* added the necessary hooks to build profiles and built-ins
* changed all of the built-ins to use the specific hooks for them at runtime
* added generic *_entry handlers to deal with comma delimited options
* added ability to turn on built-in optional components from hadoop-env.sh 
without doing anything crazy
* added and modified quite a few shell unit tests to test all this code
* added commons-httpclient back to openstack so I could move forward (see 
HADOOP-12868)

Todo:
* need to update the docs for S3, etc, to tell how to turn them on now

> Rework hadoop-tools-dist
> 
>
> Key: HADOOP-12857
> URL: https://issues.apache.org/jira/browse/HADOOP-12857
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-12857.00.patch
>
>
> As hadoop-tools grows bigger and bigger, it's becoming evident that having a 
> single directory that gets sucked in is starting to become a big burden as 
> the number of tools grows.  Let's rework this to be smarter.





[jira] [Updated] (HADOOP-12857) Rework hadoop-tools-dist

2016-03-02 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12857:
--
Attachment: HADOOP-12857.00.patch

> Rework hadoop-tools-dist
> 
>
> Key: HADOOP-12857
> URL: https://issues.apache.org/jira/browse/HADOOP-12857
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-12857.00.patch
>
>
> As hadoop-tools grows bigger and bigger, it's becoming evident that having a 
> single directory that gets sucked in is starting to become a big burden as 
> the number of tools grows.  Let's rework this to be smarter.





[jira] [Commented] (HADOOP-11792) Remove all of the CHANGES.txt files

2016-03-02 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15177257#comment-15177257
 ] 

Akira AJISAKA commented on HADOOP-11792:


+1 for removing CHANGES.txt from trunk, branch-2, and branch-2.8.

> Remove all of the CHANGES.txt files
> ---
>
> Key: HADOOP-11792
> URL: https://issues.apache.org/jira/browse/HADOOP-11792
> Project: Hadoop Common
>  Issue Type: Task
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Jayaradha
>
> With the commit of HADOOP-11731, the CHANGES.txt files are now EOLed.  We 
> should remove them.





[jira] [Updated] (HADOOP-12793) Write a new group mapping service guide

2016-03-02 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12793:
-
Attachment: HADOOP-12793.006.patch

Great catch!
I updated the docs again to correct that.

> Write a new group mapping service guide
> ---
>
> Key: HADOOP-12793
> URL: https://issues.apache.org/jira/browse/HADOOP-12793
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.7.2
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>  Labels: ldap, supportability
> Attachments: HADOOP-12791.001.patch, HADOOP-12793.002.patch, 
> HADOOP-12793.003.patch, HADOOP-12793.004.patch, HADOOP-12793.005.patch, 
> HADOOP-12793.006.patch
>
>
> LdapGroupsMapping has lots of configurable properties and is thus fairly 
> complex in nature. _HDFS Permissions Guide_ has a minimal introduction to 
> LdapGroupsMapping, with reference to "More information on configuring the 
> group mapping service is available in the Javadocs."
> However, its Javadoc provides no information about how to configure it. 
> Core-default.xml has descriptions for each property, but still lacks a 
> comprehensive tutorial. Without a tutorial/guide, these configurable 
> properties would be buried under the sea of properties.
> Both Cloudera and HortonWorks have some information regarding LDAP group 
> mapping:
> http://www.cloudera.com/documentation/enterprise/latest/topics/cm_sg_ldap_grp_mappings.html
> http://hortonworks.com/blog/hadoop-groupmapping-ldap-integration/
> But neither covers all configurable features, such as using SSL with LDAP or 
> POSIX group semantics.





[jira] [Updated] (HADOOP-12862) LDAP Group Mapping over SSL can not specify trust store

2016-03-02 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12862:
-
Attachment: HADOOP-12862.002.patch

Rev02: updated core-default.xml according to Mike's comments.

> LDAP Group Mapping over SSL can not specify trust store
> ---
>
> Key: HADOOP-12862
> URL: https://issues.apache.org/jira/browse/HADOOP-12862
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-12862.001.patch, HADOOP-12862.002.patch
>
>
> In a secure environment, SSL is used to encrypt LDAP requests for group 
> mapping resolution.
> We (+[~yoderme], +[~tgrayson]) have found that its implementation is strange.
> For background: the Hadoop name node, as an LDAP client, talks to an LDAP 
> server to resolve the group mapping of a user. In the case of LDAP over SSL, 
> a typical scenario is one-way authentication (the client verifies the 
> server's certificate is real) by storing the server's certificate in the 
> client's truststore.
> A rarer scenario is two-way authentication: in addition to the client storing 
> a truststore to verify the server, the server also verifies the client's 
> certificate is real, and the client stores its own certificate in its 
> keystore.
> However, the current implementation for LDAP over SSL does not seem to be 
> correct in that it only configures a keystore but no truststore (so the LDAP 
> server can verify Hadoop's certificate, but Hadoop may not be able to verify 
> the LDAP server's certificate).
> I think there should be an extra pair of properties to specify the 
> truststore/password for the LDAP server, and use those to configure the 
> system properties 
> {{javax.net.ssl.trustStore}}/{{javax.net.ssl.trustStorePassword}}.
> I am a security layman so my words can be imprecise. But I hope this makes 
> sense.
> Oracle's SSL LDAP documentation: 
> http://docs.oracle.com/javase/jndi/tutorial/ldap/security/ssl.html
> JSSE reference guide: 
> http://docs.oracle.com/javase/7/docs/technotes/guides/security/jsse/JSSERefGuide.html
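A hedged sketch of the suggested extra property pair. The key names below are assumptions patterned after the existing keystore keys, not the names from the attached patch, and the {{javax.net.ssl.*}} system properties are the standard JSSE ones:

```java
import java.util.Map;

// Hypothetical wiring for the proposed truststore properties. In the real
// code this would read from a Hadoop Configuration; a plain Map stands in.
class LdapSslTrustStoreSetup {
    // assumed key names, modeled on the existing *.ldap.ssl.keystore keys
    static final String KEY = "hadoop.security.group.mapping.ldap.ssl.truststore";
    static final String PASSWORD_KEY = KEY + ".password";

    static void apply(Map<String, String> conf) {
        String store = conf.get(KEY);
        if (store != null && !store.isEmpty()) {
            // one-way auth: the client verifies the LDAP server's certificate
            System.setProperty("javax.net.ssl.trustStore", store);
            String pw = conf.get(PASSWORD_KEY);
            if (pw != null) {
                System.setProperty("javax.net.ssl.trustStorePassword", pw);
            }
        }
    }
}
```

Setting these before creating the LDAP context lets JSSE pick up the truststore without touching the keystore configuration.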





[jira] [Commented] (HADOOP-12855) Add option to disable JVMPauseMonitor across services

2016-03-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15177198#comment-15177198
 ] 

Hadoop QA commented on HADOOP-12855:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
27s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 39s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 40s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 11s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
44s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 9s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
44s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 46s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 46s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 0s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 0s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 23s 
{color} | {color:red} hadoop-common-project/hadoop-common: patch generated 2 
new + 7 unchanged - 1 fixed = 9 total (was 8) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 5s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 10s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m 16s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_72. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 47s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
26s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 78m 53s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12791091/HADOOP-12855-001.patch
 |
| JIRA Issue | HADOOP-12855 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 3d76b64f2b84 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |

[jira] [Commented] (HADOOP-12847) hadoop daemonlog should support https and SPNEGO for Kerberized cluster

2016-03-02 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15177176#comment-15177176
 ] 

Wei-Chiu Chuang commented on HADOOP-12847:
--

I'm thinking I can use {{URLConnectionFactory}} to make the implementation more 
compact.

> hadoop daemonlog should support https and SPNEGO for Kerberized cluster
> ---
>
> Key: HADOOP-12847
> URL: https://issues.apache.org/jira/browse/HADOOP-12847
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-12847.001.patch, HADOOP-12847.002.patch
>
>
> {{hadoop daemonlog}} is a simple, yet useful tool for debugging.
> However, it does not support https, nor does it support a Kerberized Hadoop 
> cluster.
> Using {{AuthenticatedURL}}, it will be able to support SPNEGO negotiation 
> with a Kerberized name node web ui. It will also fall back to simple 
> authentication if the cluster is not Kerberized.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12869) CryptoInputStream#read() may return incorrect result

2016-03-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15177156#comment-15177156
 ] 

Hadoop QA commented on HADOOP-12869:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
57s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 1s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 53s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 6s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
35s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 58s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 58s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 46s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 46s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
52s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 42s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_72. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 53s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
24s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 61m 4s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12791088/HADOOP-12869.001.patch
 |
| JIRA Issue | HADOOP-12869 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 5b47f0573f72 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (HADOOP-12862) LDAP Group Mapping over SSL can not specify trust store

2016-03-02 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15177122#comment-15177122
 ] 

Wei-Chiu Chuang commented on HADOOP-12862:
--

My bad. WebImageTool does not even enable SSL.

> LDAP Group Mapping over SSL can not specify trust store
> ---
>
> Key: HADOOP-12862
> URL: https://issues.apache.org/jira/browse/HADOOP-12862
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-12862.001.patch
>
>
> In a secure environment, SSL is used to encrypt LDAP requests for group 
> mapping resolution.
> We (+[~yoderme], +[~tgrayson]) have found that its implementation is strange.
> For context: the Hadoop name node, as an LDAP client, talks to an LDAP server 
> to resolve the group mapping of a user. In the case of LDAP over SSL, a 
> typical scenario is to establish one-way authentication (the client verifies 
> that the server's certificate is genuine) by storing the server's certificate 
> in the client's truststore.
> A rarer scenario is two-way authentication: in addition to the truststore the 
> client uses to verify the server, the server also verifies the client's 
> certificate, which the client stores in its own keystore.
> However, the current implementation of LDAP over SSL does not seem to be 
> correct in that it only configures a keystore but no truststore (so the LDAP 
> server can verify Hadoop's certificate, but Hadoop may not be able to verify 
> the LDAP server's certificate).
> I think there should be an extra pair of properties to specify the 
> truststore/password for the LDAP server, and use them to configure the 
> system properties {{javax.net.ssl.trustStore}} and 
> {{javax.net.ssl.trustStorePassword}}.
> I am a security layman so my words can be imprecise. But I hope this makes 
> sense.
> Oracle's SSL LDAP documentation: 
> http://docs.oracle.com/javase/jndi/tutorial/ldap/security/ssl.html
> JSSE reference guide: 
> http://docs.oracle.com/javase/7/docs/technotes/guides/security/jsse/JSSERefGuide.html



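The fix suggested in the description can be sketched as follows. This is only a sketch of the idea: the {{hadoop.security.group.mapping.ldap.ssl.truststore}} key names are illustrative assumptions, not real Hadoop configuration keys; only the {{javax.net.ssl.*}} system property names are standard JSSE.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: map hypothetical truststore config keys onto the standard JSSE
// system properties named in the issue description. Not the actual patch.
public class LdapTruststoreSketch {
    static void applyTruststore(Map<String, String> conf) {
        // These two key names are made up for illustration.
        String store = conf.get("hadoop.security.group.mapping.ldap.ssl.truststore");
        String pass  = conf.get("hadoop.security.group.mapping.ldap.ssl.truststore.password");
        if (store != null) {
            // JSSE consults this property when the SSL LDAP context is created.
            System.setProperty("javax.net.ssl.trustStore", store);
        }
        if (pass != null) {
            System.setProperty("javax.net.ssl.trustStorePassword", pass);
        }
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        conf.put("hadoop.security.group.mapping.ldap.ssl.truststore",
                 "/etc/hadoop/ldap-ts.jks");
        applyTruststore(conf);
        System.out.println(System.getProperty("javax.net.ssl.trustStore"));
    }
}
```

With a truststore configured this way, the LDAP client can verify the server's certificate (one-way auth), independently of any keystore used for two-way auth.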


[jira] [Assigned] (HADOOP-12774) s3a should use UGI.getCurrentUser.getShortname() for username

2016-03-02 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge reassigned HADOOP-12774:
---

Assignee: John Zhuge

> s3a should use UGI.getCurrentUser.getShortname() for username
> -
>
> Key: HADOOP-12774
> URL: https://issues.apache.org/jira/browse/HADOOP-12774
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: John Zhuge
>
> S3a uses {{System.getProperty("user.name")}} to get the username for the 
> homedir. This is wrong, as it doesn't work on a YARN app where the identity 
> is set by HADOOP_USER_NAME, or in a doAs clause.
> Obviously, {{UGI.getCurrentUser.getShortname()}} provides that name, 
> everywhere. 
> This is a simple change in the source, though testing is harder ... probably 
> best to try in a doAs



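The precedence the issue asks for can be modeled as below. This only illustrates why {{user.name}} alone is wrong; it is not the real {{UserGroupInformation}} implementation, and {{shortUserName}} is a made-up helper name.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of UGI-style username resolution: an explicit HADOOP_USER_NAME
// override (as set for YARN containers) wins over the JVM's user.name.
public class S3aUserSketch {
    static String shortUserName(Map<String, String> env) {
        String overridden = env.get("HADOOP_USER_NAME");
        if (overridden != null && !overridden.isEmpty()) {
            return overridden;                       // YARN / doAs identity
        }
        return System.getProperty("user.name");      // local-process fallback
    }

    public static void main(String[] args) {
        Map<String, String> env = new HashMap<>();
        env.put("HADOOP_USER_NAME", "alice");
        System.out.println(shortUserName(env));      // alice
        env.remove("HADOOP_USER_NAME");
        // Without the override we fall back to the JVM-level user.name.
        System.out.println(shortUserName(env).equals(System.getProperty("user.name")));
    }
}
```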


[jira] [Commented] (HADOOP-12862) LDAP Group Mapping over SSL can not specify trust store

2016-03-02 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15177105#comment-15177105
 ] 

Wei-Chiu Chuang commented on HADOOP-12862:
--

bq. it would be excellent if we had one place in hadoop to specify which TLS 
versions were permissible and which ciphers to use (or not use...) and then 
have all TLS connections default to that.

It would be tricky -- as far as I know, there are a few sources of SSL 
connections: namenode uses Jetty, datanodes use Netty+Jetty, KMS/HttpFs use 
Tomcat, and JNDI (for LDAP over SSL).

And I just found WebImageTool uses Jetty without disabling SSLv3, so this is 
another source of inconsistency.

> LDAP Group Mapping over SSL can not specify trust store
> ---
>
> Key: HADOOP-12862
> URL: https://issues.apache.org/jira/browse/HADOOP-12862
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-12862.001.patch
>
>
> In a secure environment, SSL is used to encrypt LDAP requests for group 
> mapping resolution.
> We (+[~yoderme], +[~tgrayson]) have found that its implementation is strange.
> For context: the Hadoop name node, as an LDAP client, talks to an LDAP server 
> to resolve the group mapping of a user. In the case of LDAP over SSL, a 
> typical scenario is to establish one-way authentication (the client verifies 
> that the server's certificate is genuine) by storing the server's certificate 
> in the client's truststore.
> A rarer scenario is two-way authentication: in addition to the truststore the 
> client uses to verify the server, the server also verifies the client's 
> certificate, which the client stores in its own keystore.
> However, the current implementation of LDAP over SSL does not seem to be 
> correct in that it only configures a keystore but no truststore (so the LDAP 
> server can verify Hadoop's certificate, but Hadoop may not be able to verify 
> the LDAP server's certificate).
> I think there should be an extra pair of properties to specify the 
> truststore/password for the LDAP server, and use them to configure the 
> system properties {{javax.net.ssl.trustStore}} and 
> {{javax.net.ssl.trustStorePassword}}.
> I am a security layman so my words can be imprecise. But I hope this makes 
> sense.
> Oracle's SSL LDAP documentation: 
> http://docs.oracle.com/javase/jndi/tutorial/ldap/security/ssl.html
> JSSE reference guide: 
> http://docs.oracle.com/javase/7/docs/technotes/guides/security/jsse/JSSERefGuide.html





[jira] [Commented] (HADOOP-12869) CryptoInputStream#read() may return incorrect result

2016-03-02 Thread Dapeng Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15177100#comment-15177100
 ] 

Dapeng Sun commented on HADOOP-12869:
-

Added [~dian.fu] who investigated this issue together with me.

> CryptoInputStream#read() may return incorrect result
> 
>
> Key: HADOOP-12869
> URL: https://issues.apache.org/jira/browse/HADOOP-12869
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.2, 3.0.0
>Reporter: Dapeng Sun
>Assignee: Dapeng Sun
>Priority: Critical
> Attachments: HADOOP-12869.001.patch
>
>
> Here is the comment of {{FilterInputStream#read()}}:
> {noformat}
> /**
>  * Reads the next byte of data from this input stream. The value
>  * byte is returned as an int in the range
>  * 0 to 255. If no byte is available
>  * because the end of the stream has been reached, the value
>  * -1 is returned. This method blocks until input data
>  * is available, the end of the stream is detected, or an exception
>  * is thrown.
>  * 
>  * This method
>  * simply performs in.read() and returns the result.
>  *
>  * @return the next byte of data, or -1 if the end of the
>  * stream is reached.
>  * @exception  IOException  if an I/O error occurs.
>  * @seejava.io.FilterInputStream#in
>  */
> public int read() throws IOException {
> return in.read();
> }
> {noformat}
> Here is the implementation of {{CryptoInputStream#read()}} in Hadoop Common:
> {noformat}
> @Override
> public int read() throws IOException {
>   return (read(oneByteBuf, 0, 1) == -1) ? -1 : (oneByteBuf[0] & 0xff);
> 
> }
> {noformat}
> The return value of {{read(oneByteBuf, 0, 1)}} may be 1, -1, or 0:
> For {{1}}: we should return the content of {{oneByteBuf}}.
> For {{-1}}: we should return {{-1}} to signal the end of the stream.
> For {{0}}: no decrypted data came back and it is not the end of the stream, 
> so we should continue reading and decrypting. But the current implementation 
> returns {{0}} from {{read()}}, which wrongly tells the caller that the 
> decrypted byte is {{0}}.



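The natural fix for the {{0}} case is to loop until the one-byte read yields either data or EOF. Below is a self-contained sketch of that idea, not the actual Hadoop patch; {{FlakyStream}} is an invented stand-in that simulates a decryptor which briefly returns no data.

```java
import java.io.IOException;

// Demonstrates the single-byte read() fix: a 0-length result means "no
// decrypted data yet", so we retry instead of returning a bogus 0 byte.
public class CryptoReadSketch {
    /** Simulated decryptor: returns 0 once, then real bytes, then -1 at EOF. */
    static class FlakyStream {
        private final byte[] data;
        private int pos = 0;
        private boolean first = true;
        FlakyStream(byte[] data) { this.data = data; }
        int read(byte[] b, int off, int len) {
            if (first) { first = false; return 0; }  // no data yet, not EOF
            if (pos >= data.length) return -1;       // end of stream
            b[off] = data[pos++];
            return 1;
        }
    }

    private final FlakyStream in;
    private final byte[] oneByteBuf = new byte[1];
    CryptoReadSketch(FlakyStream in) { this.in = in; }

    /** Fixed read(): skip 0-length results, then map 1/-1 correctly. */
    public int read() throws IOException {
        int n;
        do {
            n = in.read(oneByteBuf, 0, 1);
        } while (n == 0);                            // retry, it is not EOF
        return (n == -1) ? -1 : (oneByteBuf[0] & 0xff);
    }

    public static void main(String[] args) throws IOException {
        CryptoReadSketch s = new CryptoReadSketch(
            new FlakyStream(new byte[] { 0x41, 0x42 }));
        System.out.println(s.read()); // 65, not the bogus 0
        System.out.println(s.read()); // 66
        System.out.println(s.read()); // -1 at end of stream
    }
}
```

The buggy version would have returned 0 on the first call, which a caller cannot distinguish from a legitimately decrypted zero byte.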


[jira] [Updated] (HADOOP-12855) Add option to disable JVMPauseMonitor across services

2016-03-02 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-12855:

Status: Patch Available  (was: In Progress)

This is a quick fix to avoid running multiple JVM pause monitor threads in mini 
clusters. The change is limited only to class {{JvmPauseMonitor}}.

One JVM probably needs just one {{JvmPauseMonitor}} instance. A singleton 
design should work, just like {{JvmMetrics}}: calls to {{new 
JvmPauseMonitor()}} would become {{JvmPauseMonitor.getSingleton()}}, for 
example. Currently there are 8 such calls across 3 Hadoop projects: hdfs, 
mapreduce, and yarn.

> Add option to disable JVMPauseMonitor across services
> -
>
> Key: HADOOP-12855
> URL: https://issues.apache.org/jira/browse/HADOOP-12855
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: performance
>Affects Versions: 2.8.0
> Environment: JVMs with miniHDFS and miniYarn clusters
>Reporter: Steve Loughran
>Assignee: John Zhuge
> Attachments: HADOOP-12855-001.patch
>
>
> Now that the YARN and HDFS services automatically start a JVM pause monitor, 
> if you start up the mini HDFS and YARN clusters, with history server, you are 
> spinning off 5 + threads, all looking for JVM pauses, all printing things out 
> when it happens.
> We do not need these monitors in minicluster testing; they merely add load 
> and noise to tests.
> Rather than retrofit new options everywhere, how about having a 
> "jvm.pause.monitor.enabled" flag (default true), which, when set, starts off 
> the monitor thread.
> That way, the existing code is unchanged, there is always a JVM pause monitor 
> for the various services —it just isn't spinning up threads.



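The singleton idea floated in the comment above could look roughly like this. The {{getSingleton()}} name comes from the comment itself; the reference counting (first {{serviceStart}} starts the monitor, last {{serviceStop}} stops it) is an assumption about how shared start/stop might work, not the actual Hadoop API.

```java
// Sketch of a process-wide JvmPauseMonitor singleton with ref-counted
// start/stop, so mini clusters share one monitor thread. Illustrative only.
public class JvmPauseMonitorSketch {
    private static JvmPauseMonitorSketch instance;
    private int refCount;           // how many services requested the monitor
    private boolean threadRunning;  // stands in for the real monitor thread

    private JvmPauseMonitorSketch() {}

    public static synchronized JvmPauseMonitorSketch getSingleton() {
        if (instance == null) {
            instance = new JvmPauseMonitorSketch();
        }
        return instance;
    }

    public synchronized void serviceStart() {
        if (refCount++ == 0) {
            threadRunning = true;   // first caller starts the monitor thread
        }
    }

    public synchronized void serviceStop() {
        if (refCount > 0 && --refCount == 0) {
            threadRunning = false;  // last caller stops it
        }
    }

    public synchronized boolean isRunning() {
        return threadRunning;
    }

    public static void main(String[] args) {
        JvmPauseMonitorSketch m = getSingleton();
        m.serviceStart();           // e.g. mini NameNode
        m.serviceStart();           // e.g. mini ResourceManager, same JVM
        System.out.println("running=" + m.isRunning());  // running=true
        m.serviceStop();
        System.out.println("running=" + m.isRunning());  // running=true
        m.serviceStop();
        System.out.println("running=" + m.isRunning());  // running=false
    }
}
```

Only one "Starting JVM pause monitor" event would occur per JVM, regardless of how many mini-cluster services start and stop.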


[jira] [Updated] (HADOOP-12869) CryptoInputStream#read() may return incorrect result

2016-03-02 Thread Dapeng Sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dapeng Sun updated HADOOP-12869:

Description: 
Here is the comment of {{FilterInputStream#read()}}:
{noformat}
/**
 * Reads the next byte of data from this input stream. The value
 * byte is returned as an int in the range
 * 0 to 255. If no byte is available
 * because the end of the stream has been reached, the value
 * -1 is returned. This method blocks until input data
 * is available, the end of the stream is detected, or an exception
 * is thrown.
 * 
 * This method
 * simply performs in.read() and returns the result.
 *
 * @return the next byte of data, or -1 if the end of the
 * stream is reached.
 * @exception  IOException  if an I/O error occurs.
 * @seejava.io.FilterInputStream#in
 */
public int read() throws IOException {
return in.read();
}
{noformat}
Here is the implementation of {{CryptoInputStream#read()}} in Hadoop Common:
{noformat}
@Override
public int read() throws IOException {
  return (read(oneByteBuf, 0, 1) == -1) ? -1 : (oneByteBuf[0] & 0xff);  
}
{noformat}
The return value of {{read(oneByteBuf, 0, 1)}} may be 1, -1, or 0:
For {{1}}: we should return the content of {{oneByteBuf}}.
For {{-1}}: we should return {{-1}} to signal the end of the stream.
For {{0}}: no decrypted data came back and it is not the end of the stream, so 
we should continue reading and decrypting. But the current implementation 
returns {{0}} from {{read()}}, which wrongly tells the caller that the 
decrypted byte is {{0}}.


  was:
Here is the comment of {{FilterInputStream#read()}}:
{noformat}
/**
 * Reads the next byte of data from this input stream. The value
 * byte is returned as an int in the range
 * 0 to 255. If no byte is available
 * because the end of the stream has been reached, the value
 * -1 is returned. This method blocks until input data
 * is available, the end of the stream is detected, or an exception
 * is thrown.
 * 
 * This method
 * simply performs in.read() and returns the result.
 *
 * @return the next byte of data, or -1 if the end of the
 * stream is reached.
 * @exception  IOException  if an I/O error occurs.
 * @seejava.io.FilterInputStream#in
 */
public int read() throws IOException {
return in.read();
}
{noformat}
Here is the implementation of {{CryptoInputStream#read()}} in Hadoop Common:
{noformat}
@Override
public int read() throws IOException {
  return (read(oneByteBuf, 0, 1) == -1) ? -1 : (oneByteBuf[0] & 0xff);  
}
{noformat}
The return value of {{read(oneByteBuf, 0, 1)}} may be 1, -1, or 0:
For {{1}}: we should return the content of {{oneByteBuf}}.
For {{-1}}: we should return {{-1}} to signal the end of the stream.
For {{0}}: no decrypted data came back and it is not the end of the stream, so 
we should continue reading and decrypting.



> CryptoInputStream#read() may return incorrect result
> 
>
> Key: HADOOP-12869
> URL: https://issues.apache.org/jira/browse/HADOOP-12869
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.2, 3.0.0
>Reporter: Dapeng Sun
>Assignee: Dapeng Sun
>Priority: Critical
> Attachments: HADOOP-12869.001.patch
>
>
> Here is the comment of {{FilterInputStream#read()}}:
> {noformat}
> /**
>  * Reads the next byte of data from this input stream. The value
>  * byte is returned as an int in the range
>  * 0 to 255. If no byte is available
>  * because the end of the stream has been reached, the value
>  * -1 is returned. This method blocks until input data
>  * is available, the end of the stream is detected, or an exception
>  * is thrown.
>  * 
>  * This method
>  * simply performs in.read() and returns the result.
>  *
>  * @return the next byte of data, or -1 if the end of the
>  * stream is reached.
>  * @exception  IOException  if an I/O error occurs.
>  * @seejava.io.FilterInputStream#in
>  */
> public int read() throws IOException {
> return in.read();
> }
> {noformat}
> Here is the implementation of {{CryptoInputStream#read()}} in Hadoop Common:
> {noformat}
> @Override
> public int read() throws IOException {
>   return (read(oneByteBuf, 0, 1) == -1) ? -1 : (oneByteBuf[0] & 0xff);
> 
> }
> {noformat}
> The return value of {{read(oneByteBuf, 0, 1)}} may be 1, -1, or 0:
> For {{1}}: we should return the content of {{oneByteBuf}}.
> For {{-1}}: we should return {{-1}} to signal the end of the stream.
> For {{0}}: no decrypted data came back and it is not the end of the stream, 
> so we should continue reading and decrypting. But the current implementation 
> returns {{0}} from {{read()}}, which wrongly tells the caller that the 
> decrypted byte is {{0}}.

[jira] [Updated] (HADOOP-12869) CryptoInputStream#read() may return incorrect result

2016-03-02 Thread Dapeng Sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dapeng Sun updated HADOOP-12869:

Description: 
Here is the comment of {{FilterInputStream#read()}}:
{noformat}
/**
 * Reads the next byte of data from this input stream. The value
 * byte is returned as an int in the range
 * 0 to 255. If no byte is available
 * because the end of the stream has been reached, the value
 * -1 is returned. This method blocks until input data
 * is available, the end of the stream is detected, or an exception
 * is thrown.
 * 
 * This method
 * simply performs in.read() and returns the result.
 *
 * @return the next byte of data, or -1 if the end of the
 * stream is reached.
 * @exception  IOException  if an I/O error occurs.
 * @seejava.io.FilterInputStream#in
 */
public int read() throws IOException {
return in.read();
}
{noformat}
Here is the implementation of {{CryptoInputStream#read()}} in Hadoop Common:
{noformat}
@Override
public int read() throws IOException {
  return (read(oneByteBuf, 0, 1) == -1) ? -1 : (oneByteBuf[0] & 0xff);  
}
{noformat}
The return value of {{read(oneByteBuf, 0, 1)}} may be 1, -1, or 0:
For {{1}}: we should return the content of {{oneByteBuf}}.
For {{-1}}: we should return {{-1}} to signal the end of the stream.
For {{0}}: no decrypted data came back and it is not the end of the stream, so 
we should continue reading and decrypting.


  was:
Here is the comment of InputStream#read():
{noformat} 
   /**
 * Reads the next byte of data from the input stream. The value byte is
 * returned as an int in the range 0 to
 * 255. If no byte is available because the end of the stream
 * has been reached, the value -1 is returned. This method
 * blocks until input data is available, the end of the stream is detected,
 * or an exception is thrown.
 *
 *  A subclass must provide an implementation of this method.
 *
 * @return the next byte of data, or -1 if the end of the
 * stream is reached.
 * @exception  IOException  if an I/O error occurs.
 */
public abstract int read() throws IOException;
{noformat} 
Here is the implementation of {{CryptoInputStream#read()}} in Hadoop Common:
{noformat}
@Override
public int read() throws IOException {
  return (read(oneByteBuf, 0, 1) == -1) ? -1 : (oneByteBuf[0] & 0xff);  
}
{noformat}
The return value of {{read(oneByteBuf, 0, 1)}} may be 1, -1, or 0:
For {{1}}: we should return the content of {{oneByteBuf}}.
For {{-1}}: we should return {{-1}} to signal the end of the stream.
For {{0}}: no decrypted data came back and it is not the end of the stream, so 
we should continue reading and decrypting.



> CryptoInputStream#read() may return incorrect result
> 
>
> Key: HADOOP-12869
> URL: https://issues.apache.org/jira/browse/HADOOP-12869
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.2, 3.0.0
>Reporter: Dapeng Sun
>Assignee: Dapeng Sun
>Priority: Critical
> Attachments: HADOOP-12869.001.patch
>
>
> Here is the comment of {{FilterInputStream#read()}}:
> {noformat}
> /**
>  * Reads the next byte of data from this input stream. The value
>  * byte is returned as an int in the range
>  * 0 to 255. If no byte is available
>  * because the end of the stream has been reached, the value
>  * -1 is returned. This method blocks until input data
>  * is available, the end of the stream is detected, or an exception
>  * is thrown.
>  * 
>  * This method
>  * simply performs in.read() and returns the result.
>  *
>  * @return the next byte of data, or -1 if the end of the
>  * stream is reached.
>  * @exception  IOException  if an I/O error occurs.
>  * @seejava.io.FilterInputStream#in
>  */
> public int read() throws IOException {
> return in.read();
> }
> {noformat}
> Here is the implementation of {{CryptoInputStream#read()}} in Hadoop Common:
> {noformat}
> @Override
> public int read() throws IOException {
>   return (read(oneByteBuf, 0, 1) == -1) ? -1 : (oneByteBuf[0] & 0xff);
> 
> }
> {noformat}
> The return value of {{read(oneByteBuf, 0, 1)}} may be 1, -1, or 0:
> For {{1}}: we should return the content of {{oneByteBuf}}.
> For {{-1}}: we should return {{-1}} to signal the end of the stream.
> For {{0}}: no decrypted data came back and it is not the end of the stream, 
> so we should continue reading and decrypting.





[jira] [Updated] (HADOOP-12869) CryptoInputStream#read() may return incorrect result

2016-03-02 Thread Dapeng Sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dapeng Sun updated HADOOP-12869:

Priority: Blocker  (was: Major)

> CryptoInputStream#read() may return incorrect result
> 
>
> Key: HADOOP-12869
> URL: https://issues.apache.org/jira/browse/HADOOP-12869
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.2, 3.0.0
>Reporter: Dapeng Sun
>Assignee: Dapeng Sun
>Priority: Blocker
> Attachments: HADOOP-12869.001.patch
>
>
> Here is the comment of InputStream#read():
> {noformat} 
>/**
>  * Reads the next byte of data from the input stream. The value byte is
>  * returned as an int in the range 0 to
>  * 255. If no byte is available because the end of the stream
>  * has been reached, the value -1 is returned. This method
>  * blocks until input data is available, the end of the stream is 
> detected,
>  * or an exception is thrown.
>  *
>  *  A subclass must provide an implementation of this method.
>  *
>  * @return the next byte of data, or -1 if the end of the
>  * stream is reached.
>  * @exception  IOException  if an I/O error occurs.
>  */
> public abstract int read() throws IOException;
> {noformat} 
> Here is the implementation of {{CryptoInputStream#read()}} in Hadoop Common:
> {noformat}
> @Override
> public int read() throws IOException {
>   return (read(oneByteBuf, 0, 1) == -1) ? -1 : (oneByteBuf[0] & 0xff);
> 
> }
> {noformat}
> The return value of {{read(oneByteBuf, 0, 1)}} may be 1, -1, or 0:
> For {{1}}: we should return the content of {{oneByteBuf}}.
> For {{-1}}: we should return {{-1}} to signal the end of the stream.
> For {{0}}: no decrypted data came back and it is not the end of the stream, 
> so we should continue reading and decrypting.





[jira] [Updated] (HADOOP-12869) CryptoInputStream#read() may return incorrect result

2016-03-02 Thread Dapeng Sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dapeng Sun updated HADOOP-12869:

Priority: Critical  (was: Blocker)

> CryptoInputStream#read() may return incorrect result
> 
>
> Key: HADOOP-12869
> URL: https://issues.apache.org/jira/browse/HADOOP-12869
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.2, 3.0.0
>Reporter: Dapeng Sun
>Assignee: Dapeng Sun
>Priority: Critical
> Attachments: HADOOP-12869.001.patch
>
>
> Here is the comment of InputStream#read():
> {noformat} 
>/**
>  * Reads the next byte of data from the input stream. The value byte is
>  * returned as an int in the range 0 to
>  * 255. If no byte is available because the end of the stream
>  * has been reached, the value -1 is returned. This method
>  * blocks until input data is available, the end of the stream is 
> detected,
>  * or an exception is thrown.
>  *
>  *  A subclass must provide an implementation of this method.
>  *
>  * @return the next byte of data, or -1 if the end of the
>  * stream is reached.
>  * @exception  IOException  if an I/O error occurs.
>  */
> public abstract int read() throws IOException;
> {noformat} 
> Here is the implementation of {{CryptoInputStream#read()}} in Hadoop Common:
> {noformat}
> @Override
> public int read() throws IOException {
>   return (read(oneByteBuf, 0, 1) == -1) ? -1 : (oneByteBuf[0] & 0xff);
> 
> }
> {noformat}
> The return value of {{read(oneByteBuf, 0, 1)}} may be 1, -1, or 0:
> For {{1}}: we should return the content of {{oneByteBuf}}.
> For {{-1}}: we should return {{-1}} to signal the end of the stream.
> For {{0}}: no decrypted data came back and it is not the end of the stream, 
> so we should continue reading and decrypting.





[jira] [Updated] (HADOOP-12869) CryptoInputStream#read() may return incorrect result

2016-03-02 Thread Dapeng Sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dapeng Sun updated HADOOP-12869:

Description: 
Here is the comment of InputStream#read():
{noformat} 
   /**
 * Reads the next byte of data from the input stream. The value byte is
 * returned as an int in the range 0 to
 * 255. If no byte is available because the end of the stream
 * has been reached, the value -1 is returned. This method
 * blocks until input data is available, the end of the stream is detected,
 * or an exception is thrown.
 *
 *  A subclass must provide an implementation of this method.
 *
 * @return the next byte of data, or -1 if the end of the
 * stream is reached.
 * @exception  IOException  if an I/O error occurs.
 */
public abstract int read() throws IOException;
{noformat} 
Here is the implementation of {{CryptoInputStream#read()}} in Hadoop Common:
{noformat}
@Override
public int read() throws IOException {
  return (read(oneByteBuf, 0, 1) == -1) ? -1 : (oneByteBuf[0] & 0xff);  
}
{noformat}
The return value of {{read(oneByteBuf, 0, 1)}} may be 1, -1, or 0:
For {{1}}: we should return the content of {{oneByteBuf}}.
For {{-1}}: we should return {{-1}} to signal the end of the stream.
For {{0}}: no decrypted data came back and it is not the end of the stream, so 
we should continue reading and decrypting.


> CryptoInputStream#read() may return incorrect result
> 
>
> Key: HADOOP-12869
> URL: https://issues.apache.org/jira/browse/HADOOP-12869
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.2, 3.0.0
>Reporter: Dapeng Sun
>Assignee: Dapeng Sun
> Attachments: HADOOP-12869.001.patch
>
>
> Here is the comment of InputStream#read():
> {noformat} 
>/**
>  * Reads the next byte of data from the input stream. The value byte is
>  * returned as an int in the range 0 to
>  * 255. If no byte is available because the end of the stream
>  * has been reached, the value -1 is returned. This method
>  * blocks until input data is available, the end of the stream is 
> detected,
>  * or an exception is thrown.
>  *
>  *  A subclass must provide an implementation of this method.
>  *
>  * @return the next byte of data, or -1 if the end of the
>  * stream is reached.
>  * @exception  IOException  if an I/O error occurs.
>  */
> public abstract int read() throws IOException;
> {noformat} 
> Here is the implementation of {{CryptoInputStream#read()}} in Hadoop Common:
> {noformat}
> @Override
> public int read() throws IOException {
>   return (read(oneByteBuf, 0, 1) == -1) ? -1 : (oneByteBuf[0] & 0xff);
> }
> {noformat}
> The return value of {{read(oneByteBuf, 0, 1)}} may be 1, -1, or 0:
> For {{1}}: we should return the content of {{oneByteBuf}}
> For {{-1}}: we should return {{-1}} to signal the end of the stream
> For {{0}}: we didn't get decrypted data back and it is not the end of the 
> stream, so we should continue to decrypt the stream.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12855) Add option to disable JVMPauseMonitor across services

2016-03-02 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-12855:

Attachment: HADOOP-12855-001.patch

Patch 001:
* Start a pause monitor thread only when no such thread is running in JVM
* Stop the thread when stopping the last instance of JvmPauseMonitor
* Pass TestMiniDFSCluster and TestMiniYarnCluster unit tests. Only one 
"Starting JVM pause monitor" log line appears per minicluster session.
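The ref-counting idea in patch 001 can be sketched as below; class and method names are illustrative, not the actual {{JvmPauseMonitor}} API:

```java
// Hedged sketch of the ref-counting approach described above: one shared
// monitor thread per JVM, started by the first instance and stopped by the
// last. Names are illustrative, not the actual Hadoop API.
public class SharedPauseMonitor {
    private static int refCount = 0;
    private static Thread monitorThread;

    public static synchronized void serviceStart() {
        if (refCount++ == 0) {
            monitorThread = new Thread(() -> {
                try {
                    while (!Thread.currentThread().isInterrupted()) {
                        // a real monitor would time this sleep and log when
                        // the observed gap is much longer than requested
                        Thread.sleep(500);
                    }
                } catch (InterruptedException ignored) {
                    // stop requested
                }
            }, "JvmPauseMonitor");
            monitorThread.setDaemon(true);
            monitorThread.start();
        }
    }

    public static synchronized void serviceStop() {
        if (refCount > 0 && --refCount == 0) {
            monitorThread.interrupt();
            monitorThread = null;
        }
    }

    public static synchronized boolean isRunning() {
        return monitorThread != null;
    }

    public static void main(String[] args) {
        serviceStart();
        serviceStart();                    // second service reuses the thread
        System.out.println(isRunning());   // true
        serviceStop();
        serviceStop();                     // last stop tears the thread down
        System.out.println(isRunning());   // false
    }
}
```

Each service calls serviceStart()/serviceStop() as before; only the first start and the last stop actually touch the thread.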

> Add option to disable JVMPauseMonitor across services
> -
>
> Key: HADOOP-12855
> URL: https://issues.apache.org/jira/browse/HADOOP-12855
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: performance
>Affects Versions: 2.8.0
> Environment: JVMs with miniHDFS and miniYarn clusters
>Reporter: Steve Loughran
>Assignee: John Zhuge
> Attachments: HADOOP-12855-001.patch
>
>
> Now that the YARN and HDFS services automatically start a JVM pause monitor, 
> if you start up the mini HDFS and YARN clusters, with history server, you are 
> spinning off 5+ threads, all looking for JVM pauses, all printing things out 
> when it happens.
> We do not need these monitors in minicluster testing; they merely add load 
> and noise to tests.
> Rather than retrofit new options everywhere, how about having a 
> "jvm.pause.monitor.enabled" flag (default true), which, when set, starts off 
> the monitor thread.
> That way, the existing code is unchanged, there is always a JVM pause monitor 
> for the various services —it just isn't spinning up threads.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HADOOP-12857) Rework hadoop-tools-dist

2016-03-02 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15177090#comment-15177090
 ] 

Allen Wittenauer edited comment on HADOOP-12857 at 3/3/16 3:25 AM:
---

Why does the hdfs haadmin command require hadoop-tools in the classpath?  Is 
this actually a long standing bug/misunderstanding of where toolrunner comes 
from?


was (Author: aw):
Why does the hdfs haadmin command require hadoop-tools in the classpath?  Is 
this actually a long standing bug?

> Rework hadoop-tools-dist
> 
>
> Key: HADOOP-12857
> URL: https://issues.apache.org/jira/browse/HADOOP-12857
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>
> As hadoop-tools grows bigger and bigger, it's becoming evident that having a 
> single directory that gets sucked in is starting to become a big burden as 
> the number of tools grows.  Let's rework this to be smarter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12857) Rework hadoop-tools-dist

2016-03-02 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15177090#comment-15177090
 ] 

Allen Wittenauer commented on HADOOP-12857:
---

Why does the hdfs haadmin command require hadoop-tools in the classpath?  Is 
this actually a long standing bug?

> Rework hadoop-tools-dist
> 
>
> Key: HADOOP-12857
> URL: https://issues.apache.org/jira/browse/HADOOP-12857
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>
> As hadoop-tools grows bigger and bigger, it's becoming evident that having a 
> single directory that gets sucked in is starting to become a big burden as 
> the number of tools grows.  Let's rework this to be smarter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12869) CryptoInputStream#read() may return incorrect result

2016-03-02 Thread Dapeng Sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dapeng Sun updated HADOOP-12869:

Status: Patch Available  (was: Open)

> CryptoInputStream#read() may return incorrect result
> 
>
> Key: HADOOP-12869
> URL: https://issues.apache.org/jira/browse/HADOOP-12869
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.2, 3.0.0
>Reporter: Dapeng Sun
>Assignee: Dapeng Sun
> Attachments: HADOOP-12869.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12869) CryptoInputStream#read() may return incorrect result

2016-03-02 Thread Dapeng Sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dapeng Sun updated HADOOP-12869:

Attachment: HADOOP-12869.001.patch

> CryptoInputStream#read() may return incorrect result
> 
>
> Key: HADOOP-12869
> URL: https://issues.apache.org/jira/browse/HADOOP-12869
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.2, 3.0.0
>Reporter: Dapeng Sun
>Assignee: Dapeng Sun
> Attachments: HADOOP-12869.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12869) CryptoInputStream#read() may return incorrect result

2016-03-02 Thread Dapeng Sun (JIRA)
Dapeng Sun created HADOOP-12869:
---

 Summary: CryptoInputStream#read() may return incorrect result
 Key: HADOOP-12869
 URL: https://issues.apache.org/jira/browse/HADOOP-12869
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.7.2, 3.0.0
Reporter: Dapeng Sun
Assignee: Dapeng Sun






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12859) Disable hiding field style checks in class setters

2016-03-02 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15176913#comment-15176913
 ] 

Kai Zheng commented on HADOOP-12859:


Thanks Andrew for having this!

> Disable hiding field style checks in class setters
> --
>
> Key: HADOOP-12859
> URL: https://issues.apache.org/jira/browse/HADOOP-12859
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: 2.9.0
>
> Attachments: HADOOP-12859-v1.patch
>
>
> As discussed in mailing list, this will disable style checks in class setters 
> like the following:
> {noformat}
> void setBlockLocations(LocatedBlocks blockLocations) {:42: 'blockLocations' 
> hides a field.
> void setTimeout(int timeout) {:25: 'timeout' hides a field.
> void setLocatedBlocks(List<LocatedBlock> locatedBlocks) {:46: 'locatedBlocks' 
> hides a field.
> void setRemaining(long remaining) {:28: 'remaining' hides a field.
> void setBytesPerCRC(int bytesPerCRC) {:29: 'bytesPerCRC' hides a field.
> void setCrcType(DataChecksum.Type crcType) {:39: 'crcType' hides a field.
> void setCrcPerBlock(long crcPerBlock) {:30: 'crcPerBlock' hides a field.
> void setRefetchBlocks(boolean refetchBlocks) {:35: 'refetchBlocks' hides a 
> field.
> void setLastRetriedIndex(int lastRetriedIndex) {:34: 'lastRetriedIndex' hides 
> a field.
> {noformat}
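For reference, this kind of suppression can be expressed in checkstyle's own configuration. A sketch assuming checkstyle's stock HiddenField module (the property names are checkstyle's; how Hadoop's checkstyle.xml actually wires it is not shown here):

```xml
<!-- checkstyle.xml fragment: keep the HiddenField check, but stop it from
     flagging setters (including fluent setters returning the class itself)
     and constructor parameters -->
<module name="HiddenField">
  <property name="ignoreSetter" value="true"/>
  <property name="setterCanReturnItsClass" value="true"/>
  <property name="ignoreConstructorParameter" value="true"/>
</module>
```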



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HADOOP-12832) Implement unix-like 'FsShell -touch'

2016-03-02 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge reassigned HADOOP-12832:
---

Assignee: John Zhuge

> Implement unix-like 'FsShell -touch' 
> -
>
> Key: HADOOP-12832
> URL: https://issues.apache.org/jira/browse/HADOOP-12832
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 2.6.4
>Reporter: Gera Shegalov
>Assignee: John Zhuge
>
> We needed to touch a bunch of files as in 
> https://en.wikipedia.org/wiki/Touch_(Unix) . 
> Because FsShell does not expose FileSystem#setTimes, we had to do it 
> programmatically in Scalding REPL. Seems like it should not be this 
> complicated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11356) Removed deprecated o.a.h.fs.permission.AccessControlException

2016-03-02 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-11356:
-
Release Note: org.apache.hadoop.fs.permission.AccessControlException was 
deprecated in the last major release, and has been removed in favor of 
org.apache.hadoop.security.AccessControlException

> Removed deprecated o.a.h.fs.permission.AccessControlException
> -
>
> Key: HADOOP-11356
> URL: https://issues.apache.org/jira/browse/HADOOP-11356
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Assignee: Li Lu
> Fix For: 3.0.0
>
> Attachments: HDFS-7479-120414.patch
>
>
> The {{o.a.h.fs.permission.AccessControlException}} has been deprecated since 
> the last major release and should be removed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12868) hadoop-openstack's pom has missing and unused dependencies

2016-03-02 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15176730#comment-15176730
 ] 

Allen Wittenauer commented on HADOOP-12868:
---

Here are the compile errors without commons-httpclient:

{code}
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) on 
project hadoop-openstack: Compilation failure: Compilation failure:
[ERROR] 
/Users/aw/Src/aw/hadoop/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/exceptions/SwiftBadRequestException.java:[21,37]
 package org.apache.commons.httpclient does not exist
[ERROR] 
/Users/aw/Src/aw/hadoop/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/exceptions/SwiftInvalidResponseException.java:[21,37]
 package org.apache.commons.httpclient does not exist
[ERROR] 
/Users/aw/Src/aw/hadoop/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/exceptions/SwiftBadRequestException.java:[34,35]
 cannot find symbol
[ERROR] symbol:   class HttpMethod
[ERROR] location: class 
org.apache.hadoop.fs.swift.exceptions.SwiftBadRequestException
[ERROR] 
/Users/aw/Src/aw/hadoop/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/exceptions/SwiftInvalidResponseException.java:[51,40]
 cannot find symbol
[ERROR] symbol:   class HttpMethod
[ERROR] location: class 
org.apache.hadoop.fs.swift.exceptions.SwiftInvalidResponseException
[ERROR] 
/Users/aw/Src/aw/hadoop/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/snative/SwiftNativeFileSystemStore.java:[20,37]
 package org.apache.commons.httpclient does not exist
[ERROR] 
/Users/aw/Src/aw/hadoop/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/snative/SwiftNativeFileSystemStore.java:[21,37]
 package org.apache.commons.httpclient does not exist
[ERROR] 
/Users/aw/Src/aw/hadoop/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/http/SwiftRestClient.java:[21,37]
 package org.apache.commons.httpclient does not exist
[ERROR] 
/Users/aw/Src/aw/hadoop/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/http/SwiftRestClient.java:[22,37]
 package org.apache.commons.httpclient does not exist
[ERROR] 
/Users/aw/Src/aw/hadoop/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/http/SwiftRestClient.java:[23,37]
 package org.apache.commons.httpclient does not exist
[ERROR] 
/Users/aw/Src/aw/hadoop/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/http/SwiftRestClient.java:[24,37]
 package org.apache.commons.httpclient does not exist
[ERROR] 
/Users/aw/Src/aw/hadoop/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/http/SwiftRestClient.java:[25,37]
 package org.apache.commons.httpclient does not exist
[ERROR] 
/Users/aw/Src/aw/hadoop/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/http/SwiftRestClient.java:[26,37]
 package org.apache.commons.httpclient does not exist
[ERROR] 
/Users/aw/Src/aw/hadoop/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/http/SwiftRestClient.java:[27,37]
 package org.apache.commons.httpclient does not exist
[ERROR] 
/Users/aw/Src/aw/hadoop/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/http/SwiftRestClient.java:[28,45]
 package org.apache.commons.httpclient.methods does not exist
[ERROR] 
/Users/aw/Src/aw/hadoop/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/http/SwiftRestClient.java:[29,45]
 package org.apache.commons.httpclient.methods does not exist
[ERROR] 
/Users/aw/Src/aw/hadoop/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/http/SwiftRestClient.java:[30,45]
 package org.apache.commons.httpclient.methods does not exist
[ERROR] 
/Users/aw/Src/aw/hadoop/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/http/SwiftRestClient.java:[31,45]
 package org.apache.commons.httpclient.methods does not exist
[ERROR] 
/Users/aw/Src/aw/hadoop/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/http/SwiftRestClient.java:[32,45]
 package org.apache.commons.httpclient.methods does not exist
[ERROR] 
/Users/aw/Src/aw/hadoop/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/http/SwiftRestClient.java:[33,45]
 package org.apache.commons.httpclient.methods does not exist
[ERROR] 
/Users/aw/Src/aw/hadoop/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/http/SwiftRestClient.java:[34,45]
 package org.apache.commons.httpclient.methods does not exist
[ERROR] 
/Users/aw/Src/aw/hadoop/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/http/SwiftRestClient.java:[35,44]
 package org.apache.commons.httpclient.params does not exist
[ERROR] 

[jira] [Commented] (HADOOP-12793) Write a new group mapping service guide

2016-03-02 Thread Masatake Iwasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15176677#comment-15176677
 ] 

Masatake Iwasaki commented on HADOOP-12793:
---

{noformat}
SSL is enable by setting `hadoop.security.group.mapping.ldap` to `true`.
{noformat}

The configuration key should be {{hadoop.security.group.mapping.ldap.ssl}}?


> Write a new group mapping service guide
> ---
>
> Key: HADOOP-12793
> URL: https://issues.apache.org/jira/browse/HADOOP-12793
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.7.2
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>  Labels: ldap, supportability
> Attachments: HADOOP-12791.001.patch, HADOOP-12793.002.patch, 
> HADOOP-12793.003.patch, HADOOP-12793.004.patch, HADOOP-12793.005.patch
>
>
> LdapGroupsMapping has lots of configurable properties and is thus fairly 
> complex in nature. _HDFS Permissions Guide_ has a minimal introduction to 
> LdapGroupsMapping, with reference to "More information on configuring the 
> group mapping service is available in the Javadocs."
> However, its Javadoc provides no information about how to configure it. 
> Core-default.xml has descriptions for each property, but still lacks a 
> comprehensive tutorial. Without a tutorial/guide, these configurable 
> properties would be buried under the sea of properties.
> Both Cloudera and HortonWorks has some information regarding LDAP group 
> mapping:
> http://www.cloudera.com/documentation/enterprise/latest/topics/cm_sg_ldap_grp_mappings.html
> http://hortonworks.com/blog/hadoop-groupmapping-ldap-integration/
> But neither cover all configurable features, such as using SSL with LDAP, and 
> POSIX group semantics.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12868) hadoop-openstack's pom has missing and unused dependencies

2016-03-02 Thread Masatake Iwasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15176649#comment-15176649
 ] 

Masatake Iwasaki commented on HADOOP-12868:
---

HADOOP-12552 fixed the dependency on httpclient in hadoop-openstack/pom.xml, but 
there seem to be other problems.

> hadoop-openstack's pom has missing and unused dependencies
> --
>
> Key: HADOOP-12868
> URL: https://issues.apache.org/jira/browse/HADOOP-12868
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Priority: Blocker
>
> Attempting to compile openstack on a fairly fresh maven repo fails due to 
> commons-httpclient not being a declared dependency.  After that is fixed, 
> doing a maven dependency:analyze shows other problems.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12862) LDAP Group Mapping over SSL can not specify trust store

2016-03-02 Thread Mike Yoder (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15176640#comment-15176640
 ] 

Mike Yoder commented on HADOOP-12862:
-

Patch looks reasonable based on a quick skim.  One nit is that

{quote}
+File path to the SSL truststore that contains the SSL certificate of the
+LDAP server.
{quote}
Should be along the lines of: "File path to the SSL truststore that contains 
the root certificate used to sign the LDAP server's certificate. Specify this 
if the LDAP server's certificate is not signed by a well-known certificate 
authority."


> LDAP Group Mapping over SSL can not specify trust store
> ---
>
> Key: HADOOP-12862
> URL: https://issues.apache.org/jira/browse/HADOOP-12862
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-12862.001.patch
>
>
> In a secure environment, SSL is used to encrypt LDAP request for group 
> mapping resolution.
> We (+[~yoderme], +[~tgrayson]) have found that its implementation is strange.
> For information, the Hadoop name node, as an LDAP client, talks to an LDAP 
> server to resolve the group mapping of a user. In the case of LDAP over SSL, 
> a typical scenario is to establish one-way authentication (the client 
> verifies that the server's certificate is real) by storing the server's 
> certificate in the client's truststore.
> A rarer scenario is to establish two-way authentication: in addition to 
> storing a truststore for the client to verify the server, the server also 
> verifies that the client's certificate is real, and the client stores its own 
> certificate in its keystore.
> However, the current implementation of LDAP over SSL does not seem to be 
> correct in that it only configures a keystore but no truststore (so the LDAP 
> server can verify Hadoop's certificate, but Hadoop may not be able to verify 
> the LDAP server's certificate).
> I think there should be an extra pair of properties to specify the 
> truststore/password for the LDAP server, and use those to configure the 
> system properties 
> {{javax.net.ssl.trustStore}}/{{javax.net.ssl.trustStorePassword}}
> I am a security layman so my words can be imprecise. But I hope this makes 
> sense.
> Oracle's SSL LDAP documentation: 
> http://docs.oracle.com/javase/jndi/tutorial/ldap/security/ssl.html
> JSSE reference guide: 
> http://docs.oracle.com/javase/7/docs/technotes/guides/security/jsse/JSSERefGuide.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12862) LDAP Group Mapping over SSL can not specify trust store

2016-03-02 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12862:
-
Attachment: HADOOP-12862.001.patch

Attaching an initial patch. This patch straightforwardly specifies the 
truststore file path and password, and adds the config to core-default.xml.

Looking forward, I should also update docs (GroupsMapping.md which will be 
added in HADOOP-12793) and might also refactor the code a bit.
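The mechanism behind such a pair of properties can be sketched with plain JNDI. The wiring below is illustrative only (the actual Hadoop config keys and plumbing are in the patch, not here); JNDI's default SSL socket factory honors the standard JSSE system properties:

```java
import java.util.Hashtable;
import javax.naming.Context;

// Hedged sketch of the idea in the patch: expose a truststore so the LDAP
// client can verify the server's certificate over ldaps://. Illustrative
// only; not the actual LdapGroupsMapping code.
public class LdapSslTrustStoreSketch {
    public static Hashtable<String, String> ldapEnv(String url,
                                                    String trustStore,
                                                    String trustStorePassword) {
        // one-way auth: the client only needs a truststore, not a keystore
        System.setProperty("javax.net.ssl.trustStore", trustStore);
        System.setProperty("javax.net.ssl.trustStorePassword", trustStorePassword);

        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, url);            // e.g. ldaps://host:636
        env.put(Context.SECURITY_PROTOCOL, "ssl");
        // new InitialDirContext(env) would then negotiate TLS and verify the
        // server's certificate chain against the configured truststore
        return env;
    }
}
```

Note that system properties are process-wide, which is exactly why dedicated configuration keys (rather than asking operators to set JSSE properties by hand) are worth having.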

> LDAP Group Mapping over SSL can not specify trust store
> ---
>
> Key: HADOOP-12862
> URL: https://issues.apache.org/jira/browse/HADOOP-12862
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-12862.001.patch
>
>
> In a secure environment, SSL is used to encrypt LDAP request for group 
> mapping resolution.
> We (+[~yoderme], +[~tgrayson]) have found that its implementation is strange.
> For information, the Hadoop name node, as an LDAP client, talks to an LDAP 
> server to resolve the group mapping of a user. In the case of LDAP over SSL, 
> a typical scenario is to establish one-way authentication (the client 
> verifies that the server's certificate is real) by storing the server's 
> certificate in the client's truststore.
> A rarer scenario is to establish two-way authentication: in addition to 
> storing a truststore for the client to verify the server, the server also 
> verifies that the client's certificate is real, and the client stores its own 
> certificate in its keystore.
> However, the current implementation of LDAP over SSL does not seem to be 
> correct in that it only configures a keystore but no truststore (so the LDAP 
> server can verify Hadoop's certificate, but Hadoop may not be able to verify 
> the LDAP server's certificate).
> I think there should be an extra pair of properties to specify the 
> truststore/password for the LDAP server, and use those to configure the 
> system properties 
> {{javax.net.ssl.trustStore}}/{{javax.net.ssl.trustStorePassword}}
> I am a security layman so my words can be imprecise. But I hope this makes 
> sense.
> Oracle's SSL LDAP documentation: 
> http://docs.oracle.com/javase/jndi/tutorial/ldap/security/ssl.html
> JSSE reference guide: 
> http://docs.oracle.com/javase/7/docs/technotes/guides/security/jsse/JSSERefGuide.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work started] (HADOOP-12855) Add option to disable JVMPauseMonitor across services

2016-03-02 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-12855 started by John Zhuge.
---
> Add option to disable JVMPauseMonitor across services
> -
>
> Key: HADOOP-12855
> URL: https://issues.apache.org/jira/browse/HADOOP-12855
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: performance
>Affects Versions: 2.8.0
> Environment: JVMs with miniHDFS and miniYarn clusters
>Reporter: Steve Loughran
>Assignee: John Zhuge
>
> Now that the YARN and HDFS services automatically start a JVM pause monitor, 
> if you start up the mini HDFS and YARN clusters, with history server, you are 
> spinning off 5+ threads, all looking for JVM pauses, all printing things out 
> when it happens.
> We do not need these monitors in minicluster testing; they merely add load 
> and noise to tests.
> Rather than retrofit new options everywhere, how about having a 
> "jvm.pause.monitor.enabled" flag (default true), which, when set, starts off 
> the monitor thread.
> That way, the existing code is unchanged, there is always a JVM pause monitor 
> for the various services —it just isn't spinning up threads.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12862) LDAP Group Mapping over SSL can not specify trust store

2016-03-02 Thread Mike Yoder (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15176476#comment-15176476
 ] 

Mike Yoder commented on HADOOP-12862:
-

Traditionally we have focused only on the server side of the TLS connection 
with regards to the POODLE attack. Strictly speaking, the AD server is not our 
code, so primary responsibility is on the AD server side. But you do raise a 
good point about SSLv3 and weak ciphers - it would be excellent if we had one 
place in hadoop to specify which TLS versions were permissible and which 
ciphers to use (or not use...) and then have all TLS connections default to 
that.

> LDAP Group Mapping over SSL can not specify trust store
> ---
>
> Key: HADOOP-12862
> URL: https://issues.apache.org/jira/browse/HADOOP-12862
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>
> In a secure environment, SSL is used to encrypt LDAP request for group 
> mapping resolution.
> We (+[~yoderme], +[~tgrayson]) have found that its implementation is strange.
> For information, the Hadoop name node, as an LDAP client, talks to an LDAP 
> server to resolve the group mapping of a user. In the case of LDAP over SSL, 
> a typical scenario is to establish one-way authentication (the client 
> verifies that the server's certificate is real) by storing the server's 
> certificate in the client's truststore.
> A rarer scenario is to establish two-way authentication: in addition to 
> storing a truststore for the client to verify the server, the server also 
> verifies that the client's certificate is real, and the client stores its own 
> certificate in its keystore.
> However, the current implementation of LDAP over SSL does not seem to be 
> correct in that it only configures a keystore but no truststore (so the LDAP 
> server can verify Hadoop's certificate, but Hadoop may not be able to verify 
> the LDAP server's certificate).
> I think there should be an extra pair of properties to specify the 
> truststore/password for the LDAP server, and use those to configure the 
> system properties 
> {{javax.net.ssl.trustStore}}/{{javax.net.ssl.trustStorePassword}}
> I am a security layman so my words can be imprecise. But I hope this makes 
> sense.
> Oracle's SSL LDAP documentation: 
> http://docs.oracle.com/javase/jndi/tutorial/ldap/security/ssl.html
> JSSE reference guide: 
> http://docs.oracle.com/javase/7/docs/technotes/guides/security/jsse/JSSERefGuide.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12857) Rework hadoop-tools-dist

2016-03-02 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15176413#comment-15176413
 ] 

Allen Wittenauer commented on HADOOP-12857:
---

bq. Would it make sense to leave these alone as special cases for now and defer 
improving them to a separate patch? I think the primary benefit of this 
proposal is improved manageability of the truly optional components.

Two things lead me to the answer no:

a) More than half of the bits in hadoop-tools are being called by a script.  (I 
know! it's way more than I expected!)  The optional components are in the 
minority.

b) We'll definitely end up with duplicate jars in the classpath for those bits. 
 (The classpath de-duper doesn't expand the asterisks.) 

But really, it's not that much extra work to just do it in one pass.  I'll 
likely have a patch in the next day or so. (ofc, being unemployed helps haha)

> Rework hadoop-tools-dist
> 
>
> Key: HADOOP-12857
> URL: https://issues.apache.org/jira/browse/HADOOP-12857
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>
> As hadoop-tools grows bigger and bigger, it's becoming evident that having a 
> single directory that gets sucked in is starting to become a big burden as 
> the number of tools grows.  Let's rework this to be smarter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12862) LDAP Group Mapping over SSL can not specify trust store

2016-03-02 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15176411#comment-15176411
 ] 

Wei-Chiu Chuang commented on HADOOP-12862:
--

Hi Mike, thanks for the comment.
Fixing it should be trivial, just add a pair of new parameters, and update the 
docs about it.

The other thing I'm looking into is the potential security vulnerability with 
LDAP over SSL. The Hadoop implementation uses the JNDI API, which uses the 
default SSL factory to create connections. This could potentially suffer from 
the POODLE attack with SSLv3; it should also reject weak ciphers. In Hadoop, 
all server-side SSL has been patched to reject SSLv3 and weak ciphers. I 
wonder if this is an issue.

Thanks again.
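One way to get a single point of control over client-side TLS, sketched only (this is not Hadoop's existing SSLFactory/ssl-server.xml machinery), is to centralize the JSSE parameters every outgoing connection consults:

```java
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLParameters;

// Hedged sketch of a central TLS policy: one method that every client-side
// connection would consult for permitted protocol versions. Illustrative
// only; not the existing Hadoop SSLFactory.
public class TlsPolicySketch {
    public static SSLParameters restrictedParameters() throws Exception {
        SSLContext ctx = SSLContext.getDefault();
        SSLParameters params = ctx.getDefaultSSLParameters();
        // refuse SSLv3 outright; POODLE relies on an SSLv3 fallback
        params.setProtocols(new String[] {"TLSv1.2"});
        // a real policy would also pin the cipher suites here via
        // params.setCipherSuites(...)
        return params;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(String.join(",", restrictedParameters().getProtocols()));
        // prints TLSv1.2
    }
}
```

An SSLSocket or SSLEngine configured with these parameters would then refuse an SSLv3 handshake regardless of what the JNDI/LDAP layer above it requests.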

> LDAP Group Mapping over SSL can not specify trust store
> ---
>
> Key: HADOOP-12862
> URL: https://issues.apache.org/jira/browse/HADOOP-12862
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>
> In a secure environment, SSL is used to encrypt LDAP request for group 
> mapping resolution.
> We (+[~yoderme], +[~tgrayson]) have found that its implementation is strange.
> For information, the Hadoop name node, as an LDAP client, talks to an LDAP 
> server to resolve the group mapping of a user. In the case of LDAP over SSL, 
> a typical scenario is to establish one-way authentication (the client 
> verifies that the server's certificate is real) by storing the server's 
> certificate in the client's truststore.
> A rarer scenario is to establish two-way authentication: in addition to 
> storing a truststore for the client to verify the server, the server also 
> verifies that the client's certificate is real, and the client stores its own 
> certificate in its keystore.
> However, the current implementation of LDAP over SSL does not seem to be 
> correct in that it only configures a keystore but no truststore (so the LDAP 
> server can verify Hadoop's certificate, but Hadoop may not be able to verify 
> the LDAP server's certificate).
> I think there should be an extra pair of properties to specify the 
> truststore/password for the LDAP server, and use those to configure the 
> system properties 
> {{javax.net.ssl.trustStore}}/{{javax.net.ssl.trustStorePassword}}
> I am a security layman so my words can be imprecise. But I hope this makes 
> sense.
> Oracle's SSL LDAP documentation: 
> http://docs.oracle.com/javase/jndi/tutorial/ldap/security/ssl.html
> JSSE reference guide: 
> http://docs.oracle.com/javase/7/docs/technotes/guides/security/jsse/JSSERefGuide.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HADOOP-12862) LDAP Group Mapping over SSL can not specify trust store

2016-03-02 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned HADOOP-12862:


Assignee: Wei-Chiu Chuang

> LDAP Group Mapping over SSL can not specify trust store
> ---
>
> Key: HADOOP-12862
> URL: https://issues.apache.org/jira/browse/HADOOP-12862
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>
> In a secure environment, SSL is used to encrypt LDAP requests for group 
> mapping resolution.
> We (+[~yoderme], +[~tgrayson]) have found that its implementation is strange.
> For background: the Hadoop NameNode, as an LDAP client, talks to an LDAP 
> server to resolve the group mapping of a user. In the case of LDAP over SSL, 
> a typical scenario is to establish one-way authentication (the client 
> verifies that the server's certificate is real) by storing the server's 
> certificate in the client's truststore.
> A rarer scenario is to establish two-way authentication: in addition to the 
> truststore the client uses to verify the server, the server also verifies 
> that the client's certificate is real, and the client stores its own 
> certificate in its keystore.
> However, the current implementation of LDAP over SSL does not seem to be 
> correct, in that it only configures a keystore but no truststore (so the 
> LDAP server can verify Hadoop's certificate, but Hadoop may not be able to 
> verify the LDAP server's certificate).
> I think there should be an extra pair of properties to specify the 
> truststore/password for the LDAP server, and they should be used to 
> configure the system properties 
> {{javax.net.ssl.trustStore}}/{{javax.net.ssl.trustStorePassword}}.
> I am a security layman so my words can be imprecise, but I hope this makes 
> sense.
> Oracle's SSL LDAP documentation: 
> http://docs.oracle.com/javase/jndi/tutorial/ldap/security/ssl.html
> JSSE reference guide: 
> http://docs.oracle.com/javase/7/docs/technotes/guides/security/jsse/JSSERefGuide.html





[jira] [Commented] (HADOOP-12862) LDAP Group Mapping over SSL can not specify trust store

2016-03-02 Thread Mike Yoder (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15176388#comment-15176388
 ] 

Mike Yoder commented on HADOOP-12862:
-

Makes perfect sense here. Thanks for the investigation; I actually thought the 
"keystore" was misnamed, but you're right that it really is being used as a 
keystore. I haven't heard of an AD server requiring client-side certs before... 
so you're right, we need new config parameters 
hadoop.security.group.mapping.ldap.ssl.truststore and friends.
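A hedged sketch of the wiring being proposed: the property names below are only suggestions from this thread (not committed API), and a plain java.util.Properties stands in for Hadoop's Configuration class:

```java
import java.util.Properties;

public class LdapTruststoreWiring {
    // Hypothetical keys discussed in this thread; not finalized names.
    static final String TRUSTSTORE_KEY =
        "hadoop.security.group.mapping.ldap.ssl.truststore";
    static final String TRUSTSTORE_PASSWORD_KEY =
        "hadoop.security.group.mapping.ldap.ssl.truststore.password";

    // Copy the configured LDAP truststore settings into the JSSE
    // system properties that JNDI's LDAPS support actually reads.
    static void applyTruststore(Properties conf) {
        String store = conf.getProperty(TRUSTSTORE_KEY);
        String pass = conf.getProperty(TRUSTSTORE_PASSWORD_KEY);
        if (store != null) {
            System.setProperty("javax.net.ssl.trustStore", store);
        }
        if (pass != null) {
            System.setProperty("javax.net.ssl.trustStorePassword", pass);
        }
    }

    public static void main(String[] args) {
        Properties conf = new Properties();
        conf.setProperty(TRUSTSTORE_KEY, "/etc/hadoop/ldap-ts.jks");
        applyTruststore(conf);
        System.out.println(System.getProperty("javax.net.ssl.trustStore"));
    }
}
```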


> LDAP Group Mapping over SSL can not specify trust store
> ---
>
> Key: HADOOP-12862
> URL: https://issues.apache.org/jira/browse/HADOOP-12862
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>
> In a secure environment, SSL is used to encrypt LDAP requests for group 
> mapping resolution.
> We (+[~yoderme], +[~tgrayson]) have found that its implementation is strange.
> For background: the Hadoop NameNode, as an LDAP client, talks to an LDAP 
> server to resolve the group mapping of a user. In the case of LDAP over SSL, 
> a typical scenario is to establish one-way authentication (the client 
> verifies that the server's certificate is real) by storing the server's 
> certificate in the client's truststore.
> A rarer scenario is to establish two-way authentication: in addition to the 
> truststore the client uses to verify the server, the server also verifies 
> that the client's certificate is real, and the client stores its own 
> certificate in its keystore.
> However, the current implementation of LDAP over SSL does not seem to be 
> correct, in that it only configures a keystore but no truststore (so the 
> LDAP server can verify Hadoop's certificate, but Hadoop may not be able to 
> verify the LDAP server's certificate).
> I think there should be an extra pair of properties to specify the 
> truststore/password for the LDAP server, and they should be used to 
> configure the system properties 
> {{javax.net.ssl.trustStore}}/{{javax.net.ssl.trustStorePassword}}.
> I am a security layman so my words can be imprecise, but I hope this makes 
> sense.
> Oracle's SSL LDAP documentation: 
> http://docs.oracle.com/javase/jndi/tutorial/ldap/security/ssl.html
> JSSE reference guide: 
> http://docs.oracle.com/javase/7/docs/technotes/guides/security/jsse/JSSERefGuide.html





[jira] [Updated] (HADOOP-12868) hadoop-openstack's pom has missing and unused dependencies

2016-03-02 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12868:
--
Description: Attempting to compile openstack on a fairly fresh maven repo 
fails due to commons-httpclient not being a declared dependency.  After that is 
fixed, doing a maven dependency:analyze shows other problems.

> hadoop-openstack's pom has missing and unused dependencies
> --
>
> Key: HADOOP-12868
> URL: https://issues.apache.org/jira/browse/HADOOP-12868
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Priority: Blocker
>
> Attempting to compile openstack on a fairly fresh maven repo fails due to 
> commons-httpclient not being a declared dependency.  After that is fixed, 
> doing a maven dependency:analyze shows other problems.





[jira] [Created] (HADOOP-12868) hadoop-openstack's pom has missing and unused dependencies

2016-03-02 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-12868:
-

 Summary: hadoop-openstack's pom has missing and unused dependencies
 Key: HADOOP-12868
 URL: https://issues.apache.org/jira/browse/HADOOP-12868
 Project: Hadoop Common
  Issue Type: Improvement
  Components: tools
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Priority: Blocker








[jira] [Created] (HADOOP-12867) clean up how rumen and sls are executed

2016-03-02 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-12867:
-

 Summary: clean up how rumen and sls are executed
 Key: HADOOP-12867
 URL: https://issues.apache.org/jira/browse/HADOOP-12867
 Project: Hadoop Common
  Issue Type: Improvement
  Components: tools
Affects Versions: 3.0.0
Reporter: Allen Wittenauer


sls and rumen commands are buried where no one can see them. This should be 
fixed.





[jira] [Created] (HADOOP-12866) add a subcommand for gridmix

2016-03-02 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-12866:
-

 Summary: add a subcommand for gridmix
 Key: HADOOP-12866
 URL: https://issues.apache.org/jira/browse/HADOOP-12866
 Project: Hadoop Common
  Issue Type: Improvement
  Components: tools
Affects Versions: 3.0.0
Reporter: Allen Wittenauer


gridmix shouldn't require a raw java command line to run.





[jira] [Created] (HADOOP-12865) hadoop-datajoin should be documented or dropped

2016-03-02 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-12865:
-

 Summary: hadoop-datajoin should be documented or dropped
 Key: HADOOP-12865
 URL: https://issues.apache.org/jira/browse/HADOOP-12865
 Project: Hadoop Common
  Issue Type: Bug
  Components: tools
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Priority: Minor


hadoop-tools's datajoin is meant to be an example (I think), but it doesn't 
actually appear to be documented anywhere so that people can see it or use it.





[jira] [Commented] (HADOOP-12793) Write a new group mapping service guide

2016-03-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15176285#comment-15176285
 ] 

Hadoop QA commented on HADOOP-12793:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
19s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 27s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 20s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 18s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
31s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 29s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 18s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 11s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 11s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 27s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 27s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 10s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
30s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 49s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 26s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 32s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_72. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 65m 28s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_72. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 31s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 62m 51s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
33s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 211m 1s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_72 Failed junit tests | hadoop.hdfs.server.namenode.TestEditLog |
|   | hadoop.hdfs.TestDFSClientRetries |
| JDK v1.7.0_95 Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestBlockManager |
|   | hadoop.hdfs.server.datanode.TestFsDatasetCache |
|   | 

[jira] [Commented] (HADOOP-12864) bin/rcc doesn't work on trunk

2016-03-02 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15176268#comment-15176268
 ] 

Allen Wittenauer commented on HADOOP-12864:
---

rcc needs to have the streaming jar added to the classpath.

> bin/rcc doesn't work on trunk
> -
>
> Key: HADOOP-12864
> URL: https://issues.apache.org/jira/browse/HADOOP-12864
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Priority: Blocker
>
> When o.a.h.record was moved, bin/rcc was never updated to pull those classes 
> from the streaming jar.





[jira] [Commented] (HADOOP-12857) Rework hadoop-tools-dist

2016-03-02 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15176220#comment-15176220
 ] 

Allen Wittenauer commented on HADOOP-12857:
---

I have some sample code working.  It was very enlightening and I know what to 
do now.  If we really do want to keep one directory, here's my current plan of 
attack:

* Truly optional components (s3, azure, openstack, kafka, etc), will have a 
shellprofile built that users can enable by doing the necessary incantations.  
I'm currently thinking I might be able to add content to hadoop-env.sh at build 
time to actually turn these things on via a single env-var setting or one per 
feature. No promises.  (Yes, I'm currently looking for my "Black Hat of Bash 
Wizardry" to make this happen.) Worst case, it'll be a "copy and rename to 
HADOOP_CONF_DIR".

* With some help from [~raviprak] to make me see the forest for the trees, I 
can now build shell-parseable dependency lists at build time.  I have two ways 
I can process this: I can either store these lists in the hadoop-dist target 
directory, or in the target directory of the actual tools, using a 
well-known name plus find to build the necessary shell magic at build time.  
I'm leaning towards the latter since that will allow mvn clean to work in 
hadoop-dist in an expected way, since there won't be a hidden dependency on 
hadoop-tools having been run before the mvn package.

* distch, distcp, archive-logs, etc, are extremely problematic. Using shell 
profiles for these WILL NOT WORK since they a) aren't really optional and b) 
removing them from the command line tools won't really help anyone.  Currently 
these commands load all of HADOOP_TOOLS_PATH which is awful. I want to add to 
libexec/ a tools directory that stores helper functions for tools jars that are 
required for the various subcommands.  It will use similar but different code 
from the optional components.  It will key off a different filename for the 
dependency list and there will need to be a contract between the helper 
function names and the dependency file name.  (This sounds worse than it 
is.) 

I *wish* there was a way to dynamically add subcommands to hadoop, mapred, etc, 
but the code just isn't quite there yet.  We can do usage now, but not actual 
execution.

One big question: How should this work proceed?
# Single patch
# Multiple patches with a strict commit dependency order
# Separate branch followed by a branch merge

Given this work will likely be all or nothing I'm not a fan of multiple patches.

> Rework hadoop-tools-dist
> 
>
> Key: HADOOP-12857
> URL: https://issues.apache.org/jira/browse/HADOOP-12857
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>
> As hadoop-tools grows bigger and bigger, it's becoming evident that having a 
> single directory that gets sucked in is starting to become a big burden as 
> the number of tools grows.  Let's rework this to be smarter.





[jira] [Assigned] (HADOOP-12857) Rework hadoop-tools-dist

2016-03-02 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer reassigned HADOOP-12857:
-

Assignee: Allen Wittenauer

> Rework hadoop-tools-dist
> 
>
> Key: HADOOP-12857
> URL: https://issues.apache.org/jira/browse/HADOOP-12857
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>
> As hadoop-tools grows bigger and bigger, it's becoming evident that having a 
> single directory that gets sucked in is starting to become a big burden as 
> the number of tools grows.  Let's rework this to be smarter.





[jira] [Updated] (HADOOP-12864) bin/rcc doesn't work on trunk

2016-03-02 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12864:
--
Description: When o.a.h.record was moved, bin/rcc was never updated to pull 
those classes from the streaming jar.

> bin/rcc doesn't work on trunk
> -
>
> Key: HADOOP-12864
> URL: https://issues.apache.org/jira/browse/HADOOP-12864
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Priority: Blocker
>
> When o.a.h.record was moved, bin/rcc was never updated to pull those classes 
> from the streaming jar.





[jira] [Updated] (HADOOP-12864) bin/rcc doesn't work on trunk

2016-03-02 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12864:
--
Priority: Blocker  (was: Major)

> bin/rcc doesn't work on trunk
> -
>
> Key: HADOOP-12864
> URL: https://issues.apache.org/jira/browse/HADOOP-12864
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Priority: Blocker
>






[jira] [Created] (HADOOP-12864) bin/rcc doesn't work on trunk

2016-03-02 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-12864:
-

 Summary: bin/rcc doesn't work on trunk
 Key: HADOOP-12864
 URL: https://issues.apache.org/jira/browse/HADOOP-12864
 Project: Hadoop Common
  Issue Type: Bug
  Components: tools
Affects Versions: 3.0.0
Reporter: Allen Wittenauer








[jira] [Commented] (HADOOP-12859) Disable hiding field style checks in class setters

2016-03-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15176092#comment-15176092
 ] 

Hudson commented on HADOOP-12859:
-

FAILURE: Integrated in Hadoop-trunk-Commit #9412 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9412/])
HADOOP-12859. Disable hiding field style checks in class setters. (wang: rev 
480302b4ba932ff16b88081dac651f0a5c46c09b)
* hadoop-build-tools/src/main/resources/checkstyle/checkstyle.xml
* hadoop-common-project/hadoop-common/CHANGES.txt


> Disable hiding field style checks in class setters
> --
>
> Key: HADOOP-12859
> URL: https://issues.apache.org/jira/browse/HADOOP-12859
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: 2.9.0
>
> Attachments: HADOOP-12859-v1.patch
>
>
> As discussed in mailing list, this will disable style checks in class setters 
> like the following:
> {noformat}
> void setBlockLocations(LocatedBlocks blockLocations) {:42: 'blockLocations' 
> hides a field.
> void setTimeout(int timeout) {:25: 'timeout' hides a field.
> void setLocatedBlocks(List locatedBlocks) {:46: 'locatedBlocks' 
> hides a field.
> void setRemaining(long remaining) {:28: 'remaining' hides a field.
> void setBytesPerCRC(int bytesPerCRC) {:29: 'bytesPerCRC' hides a field.
> void setCrcType(DataChecksum.Type crcType) {:39: 'crcType' hides a field.
> void setCrcPerBlock(long crcPerBlock) {:30: 'crcPerBlock' hides a field.
> void setRefetchBlocks(boolean refetchBlocks) {:35: 'refetchBlocks' hides a 
> field.
> void setLastRetriedIndex(int lastRetriedIndex) {:34: 'lastRetriedIndex' hides 
> a field.
> {noformat}
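For reference, the setter idiom these warnings flag looks like the following. This is a minimal self-contained illustration, not code from the patch:

```java
public class TransferStatus {
    private long remaining;
    private int timeout;

    // The parameter 'remaining' hides the field of the same name --
    // exactly the pattern Checkstyle's HiddenField check flagged before
    // the exemption, even though 'this.remaining' disambiguates cleanly.
    void setRemaining(long remaining) {
        this.remaining = remaining;
    }

    void setTimeout(int timeout) {
        this.timeout = timeout;
    }

    long getRemaining() { return remaining; }

    int getTimeout() { return timeout; }

    public static void main(String[] args) {
        TransferStatus s = new TransferStatus();
        s.setRemaining(42L);
        s.setTimeout(30);
        System.out.println(s.getRemaining() + " " + s.getTimeout());
    }
}
```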





[jira] [Updated] (HADOOP-12859) Disable hiding field style checks in class setters

2016-03-02 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-12859:
-
   Resolution: Fixed
Fix Version/s: 2.9.0
   Status: Resolved  (was: Patch Available)

Committed to trunk and branch-2, thanks for the contribution Kai!

> Disable hiding field style checks in class setters
> --
>
> Key: HADOOP-12859
> URL: https://issues.apache.org/jira/browse/HADOOP-12859
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: 2.9.0
>
> Attachments: HADOOP-12859-v1.patch
>
>
> As discussed in mailing list, this will disable style checks in class setters 
> like the following:
> {noformat}
> void setBlockLocations(LocatedBlocks blockLocations) {:42: 'blockLocations' 
> hides a field.
> void setTimeout(int timeout) {:25: 'timeout' hides a field.
> void setLocatedBlocks(List locatedBlocks) {:46: 'locatedBlocks' 
> hides a field.
> void setRemaining(long remaining) {:28: 'remaining' hides a field.
> void setBytesPerCRC(int bytesPerCRC) {:29: 'bytesPerCRC' hides a field.
> void setCrcType(DataChecksum.Type crcType) {:39: 'crcType' hides a field.
> void setCrcPerBlock(long crcPerBlock) {:30: 'crcPerBlock' hides a field.
> void setRefetchBlocks(boolean refetchBlocks) {:35: 'refetchBlocks' hides a 
> field.
> void setLastRetriedIndex(int lastRetriedIndex) {:34: 'lastRetriedIndex' hides 
> a field.
> {noformat}





[jira] [Updated] (HADOOP-12859) Disable hiding field style checks in class setters

2016-03-02 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-12859:
-
Summary: Disable hiding field style checks in class setters  (was: Disable 
hidding field style checks in class setters)

> Disable hiding field style checks in class setters
> --
>
> Key: HADOOP-12859
> URL: https://issues.apache.org/jira/browse/HADOOP-12859
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-12859-v1.patch
>
>
> As discussed in mailing list, this will disable style checks in class setters 
> like the following:
> {noformat}
> void setBlockLocations(LocatedBlocks blockLocations) {:42: 'blockLocations' 
> hides a field.
> void setTimeout(int timeout) {:25: 'timeout' hides a field.
> void setLocatedBlocks(List locatedBlocks) {:46: 'locatedBlocks' 
> hides a field.
> void setRemaining(long remaining) {:28: 'remaining' hides a field.
> void setBytesPerCRC(int bytesPerCRC) {:29: 'bytesPerCRC' hides a field.
> void setCrcType(DataChecksum.Type crcType) {:39: 'crcType' hides a field.
> void setCrcPerBlock(long crcPerBlock) {:30: 'crcPerBlock' hides a field.
> void setRefetchBlocks(boolean refetchBlocks) {:35: 'refetchBlocks' hides a 
> field.
> void setLastRetriedIndex(int lastRetriedIndex) {:34: 'lastRetriedIndex' hides 
> a field.
> {noformat}





[jira] [Commented] (HADOOP-12859) Disable hidding field style checks in class setters

2016-03-02 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15176062#comment-15176062
 ] 

Andrew Wang commented on HADOOP-12859:
--

Sounds great, let's check it in!

> Disable hidding field style checks in class setters
> ---
>
> Key: HADOOP-12859
> URL: https://issues.apache.org/jira/browse/HADOOP-12859
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-12859-v1.patch
>
>
> As discussed in mailing list, this will disable style checks in class setters 
> like the following:
> {noformat}
> void setBlockLocations(LocatedBlocks blockLocations) {:42: 'blockLocations' 
> hides a field.
> void setTimeout(int timeout) {:25: 'timeout' hides a field.
> void setLocatedBlocks(List locatedBlocks) {:46: 'locatedBlocks' 
> hides a field.
> void setRemaining(long remaining) {:28: 'remaining' hides a field.
> void setBytesPerCRC(int bytesPerCRC) {:29: 'bytesPerCRC' hides a field.
> void setCrcType(DataChecksum.Type crcType) {:39: 'crcType' hides a field.
> void setCrcPerBlock(long crcPerBlock) {:30: 'crcPerBlock' hides a field.
> void setRefetchBlocks(boolean refetchBlocks) {:35: 'refetchBlocks' hides a 
> field.
> void setLastRetriedIndex(int lastRetriedIndex) {:34: 'lastRetriedIndex' hides 
> a field.
> {noformat}





[jira] [Commented] (HADOOP-12815) TestS3ContractRootDir#testRmEmptyRootDirNonRecursive and TestS3ContractRootDir#testRmRootRecursive fail on branch-2.

2016-03-02 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15175976#comment-15175976
 ] 

Ravi Prakash commented on HADOOP-12815:
---

Thanks for your review Chris! I'm sorry I haven't had a chance to look at the 
patch yet, but will do soon.

> TestS3ContractRootDir#testRmEmptyRootDirNonRecursive and 
> TestS3ContractRootDir#testRmRootRecursive fail on branch-2.
> 
>
> Key: HADOOP-12815
> URL: https://issues.apache.org/jira/browse/HADOOP-12815
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Chris Nauroth
>Assignee: Matthew Paduano
> Attachments: HADOOP-12815.branch-2.01.patch
>
>
> TestS3ContractRootDir#testRmEmptyRootDirNonRecursive and 
> TestS3ContractRootDir#testRmRootRecursive fail on branch-2.  The tests pass 
> on trunk.





[jira] [Commented] (HADOOP-12793) Write a new group mapping service guide

2016-03-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15175920#comment-15175920
 ] 

Hadoop QA commented on HADOOP-12793:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
25s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 34s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 30s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 51s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
28s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 55s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 51s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
25s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 33s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 33s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 32s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 52s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
27s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 0s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 56s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 46s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 24s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_72. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 54m 5s 
{color} | {color:green} hadoop-hdfs in the patch passed with JDK v1.8.0_72. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 43s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 53m 38s 
{color} | {color:green} hadoop-hdfs in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
25s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 170m 56s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12790927/HADOOP-12793.004.patch
 |
| JIRA Issue | HADOOP-12793 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  |

[jira] [Commented] (HADOOP-12827) WebHdfs socket timeouts should be configurable

2016-03-02 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15175874#comment-15175874
 ] 

Xiaoyu Yao commented on HADOOP-12827:
-

+1 patch v004, I will commit it shortly.

> WebHdfs socket timeouts should be configurable
> --
>
> Key: HADOOP-12827
> URL: https://issues.apache.org/jira/browse/HADOOP-12827
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
> Environment: all
>Reporter: Austin Donnelly
>Assignee: Austin Donnelly
>  Labels: easyfix, newbie
> Attachments: HADOOP-12827.001.patch, HADOOP-12827.002.patch, 
> HADOOP-12827.002.patch, HADOOP-12827.002.patch, HADOOP-12827.003.patch, 
> HADOOP-12827.004.patch
>
>   Original Estimate: 0h
>  Remaining Estimate: 0h
>
> WebHdfs client connections use sockets with fixed timeouts of 60 seconds to 
> connect, and 60 seconds for reads.
> This is a problem because I am trying to use WebHdfs to access an archive 
> storage system which can take minutes to hours to return the requested data.
> The fix is to add new configuration file options to allow these 60s defaults 
> to be customised in hdfs-site.xml.
> If the new configuration options are not present, the behavior is unchanged 
> from before.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11212) NetUtils.wrapException to handle SocketException explicitly

2016-03-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15175832#comment-15175832
 ] 

Hadoop QA commented on HADOOP-11212:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
33s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 1s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 41s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 2s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
35s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 57s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 57s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 42s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 16m 30s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_72. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 16s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
21s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 68m 20s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_72 Timed out junit tests | 
org.apache.hadoop.http.TestHttpServerLifecycle |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12790942/HADOOP-11212-001.patch
 |
| JIRA Issue | HADOOP-11212 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 2aa1f4f981a7 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 

[jira] [Updated] (HADOOP-12793) Write a new group mapping service guide

2016-03-02 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12793:
-
Attachment: HADOOP-12793.005.patch

Rev05: be more precise about LDAP support. Hadoop does not implement the LDAP 
protocol itself; rather, it relies on the JNDI API to perform LDAP queries.
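For context, switching Hadoop to the LDAP-backed mapping is a core-site.xml 
change of roughly this shape. The server URL and bind credentials below are 
placeholders; consult the guide in the patch for the full property list.

```xml
<!-- Sketch only: minimal LdapGroupsMapping wiring with placeholder values. -->
<property>
  <name>hadoop.security.group.mapping</name>
  <value>org.apache.hadoop.security.LdapGroupsMapping</value>
</property>
<property>
  <name>hadoop.security.group.mapping.ldap.url</name>
  <value>ldap://ldap.example.com:389</value>
</property>
<property>
  <name>hadoop.security.group.mapping.ldap.bind.user</name>
  <value>cn=reader,dc=example,dc=com</value>
</property>
```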

> Write a new group mapping service guide
> ---
>
> Key: HADOOP-12793
> URL: https://issues.apache.org/jira/browse/HADOOP-12793
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.7.2
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>  Labels: ldap, supportability
> Attachments: HADOOP-12791.001.patch, HADOOP-12793.002.patch, 
> HADOOP-12793.003.patch, HADOOP-12793.004.patch, HADOOP-12793.005.patch
>
>
> LdapGroupsMapping has lots of configurable properties and is thus fairly 
> complex in nature. _HDFS Permissions Guide_ has a minimal introduction to 
> LdapGroupsMapping, with reference to "More information on configuring the 
> group mapping service is available in the Javadocs."
> However, its Javadoc provides no information about how to configure it. 
> Core-default.xml has descriptions for each property, but still lacks a 
> comprehensive tutorial. Without a tutorial/guide, these configurable 
> properties are buried in a sea of properties.
> Both Cloudera and Hortonworks have some documentation regarding LDAP group 
> mapping:
> http://www.cloudera.com/documentation/enterprise/latest/topics/cm_sg_ldap_grp_mappings.html
> http://hortonworks.com/blog/hadoop-groupmapping-ldap-integration/
> But neither covers all configurable features, such as using SSL with LDAP and 
> POSIX group semantics.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11212) NetUtils.wrapException to handle SocketException explicitly

2016-03-02 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-11212:

Affects Version/s: (was: 3.0.0)
   2.7.2
   Status: Patch Available  (was: Open)

> NetUtils.wrapException to handle SocketException explicitly
> ---
>
> Key: HADOOP-11212
> URL: https://issues.apache.org/jira/browse/HADOOP-11212
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-11212-001.patch
>
>
> the {{NetUtils.wrapException()}} method doesn't catch {{SocketException}}, so 
> it is wrapped with an IOE; this loses information and stops any extra 
> diagnostics/wiki links from being added



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11212) NetUtils.wrapException to handle SocketException explicitly

2016-03-02 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-11212:

Attachment: HADOOP-11212-001.patch

Patch 001 adds the exception. Some of the exceptions above (e.g. 
NoRouteToHostException) are subclasses of SocketException, so that check must go 
last. A comment emphasises the point.


I would include a new stack trace, but SLIDER-1096 covers why I can't 
build/test against Hadoop 2.8.0 right now.
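The ordering constraint can be shown with a small standalone sketch (class and 
method names here are hypothetical, not the patch itself): because 
NoRouteToHostException extends SocketException, the generic branch must come 
after the specific ones or it would shadow them.

```java
import java.io.IOException;
import java.net.NoRouteToHostException;
import java.net.SocketException;

public class WrapOrder {
    // Illustrative classifier: the generic SocketException branch is
    // deliberately last, since NoRouteToHostException (and several other
    // java.net exceptions) are subclasses of SocketException.
    static String classify(IOException e) {
        if (e instanceof NoRouteToHostException) {
            return "no-route";   // specific subclass checked first
        } else if (e instanceof SocketException) {
            return "socket";     // generic catch-all goes last
        }
        return "other";
    }

    public static void main(String[] args) {
        System.out.println(classify(new NoRouteToHostException("host down")));
        System.out.println(classify(new SocketException("reset")));
    }
}
```

Swapping the two branches would silently classify every NoRouteToHostException 
as a plain SocketException, which is exactly the information loss the patch 
guards against.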

> NetUtils.wrapException to handle SocketException explicitly
> ---
>
> Key: HADOOP-11212
> URL: https://issues.apache.org/jira/browse/HADOOP-11212
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-11212-001.patch
>
>
> the {{NetUtils.wrapException()}} method doesn't catch {{SocketException}}, so 
> it is wrapped with an IOE; this loses information and stops any extra 
> diagnostics/wiki links from being added



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11212) NetUtils.wrapException to handle SocketException explicitly

2016-03-02 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15175645#comment-15175645
 ] 

Steve Loughran commented on HADOOP-11212:
-

Page for this is on the wiki: http://wiki.apache.org/hadoop/SocketException

> NetUtils.wrapException to handle SocketException explicitly
> ---
>
> Key: HADOOP-11212
> URL: https://issues.apache.org/jira/browse/HADOOP-11212
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>
> the {{NetUtils.wrapException()}} method doesn't catch {{SocketException}}, so 
> it is wrapped with an IOE; this loses information and stops any extra 
> diagnostics/wiki links from being added



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10940) RPC client does no bounds checking of responses

2016-03-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15175636#comment-15175636
 ] 

Hadoop QA commented on HADOOP-10940:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
43s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 48s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 36s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
25s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 2s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
34s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 50s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 50s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 37s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 37s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
23s {color} | {color:green} hadoop-common-project/hadoop-common: patch 
generated 0 new + 268 unchanged - 4 fixed = 268 total (was 272) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 0s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 6m 45s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_72. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 6m 57s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 58m 9s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12778043/HADOOP-10940.patch |
| JIRA Issue | HADOOP-10940 |
| Optional Tests |  asflicense  xml  compile  javac  javadoc  mvninstall  
mvnsite  unit  findbugs  checkstyle  |
| uname | Linux 275a2b8f90a1 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 

[jira] [Updated] (HADOOP-12793) Write a new group mapping service guide

2016-03-02 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12793:
-
Attachment: HADOOP-12793.004.patch

Thanks [~iwasakims] for reviewing!
I've updated the patch: fixed a few typos and expanded the LdapGroupsMapping 
material a bit. Please review again!

> Write a new group mapping service guide
> ---
>
> Key: HADOOP-12793
> URL: https://issues.apache.org/jira/browse/HADOOP-12793
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.7.2
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>  Labels: ldap, supportability
> Attachments: HADOOP-12791.001.patch, HADOOP-12793.002.patch, 
> HADOOP-12793.003.patch, HADOOP-12793.004.patch
>
>
> LdapGroupsMapping has lots of configurable properties and is thus fairly 
> complex in nature. _HDFS Permissions Guide_ has a minimal introduction to 
> LdapGroupsMapping, with reference to "More information on configuring the 
> group mapping service is available in the Javadocs."
> However, its Javadoc provides no information about how to configure it. 
> Core-default.xml has descriptions for each property, but still lacks a 
> comprehensive tutorial. Without a tutorial/guide, these configurable 
> properties are buried in a sea of properties.
> Both Cloudera and Hortonworks have some documentation regarding LDAP group 
> mapping:
> http://www.cloudera.com/documentation/enterprise/latest/topics/cm_sg_ldap_grp_mappings.html
> http://hortonworks.com/blog/hadoop-groupmapping-ldap-integration/
> But neither covers all configurable features, such as using SSL with LDAP and 
> POSIX group semantics.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10940) RPC client does no bounds checking of responses

2016-03-02 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15175552#comment-15175552
 ] 

Steve Loughran commented on HADOOP-10940:
-

I can see the need for this; it's just beyond my competence level to review, 
sorry.

> RPC client does no bounds checking of responses
> ---
>
> Key: HADOOP-10940
> URL: https://issues.apache.org/jira/browse/HADOOP-10940
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.0.0-alpha, 3.0.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10940.patch, HADOOP-10940.patch, 
> HADOOP-10940.patch, HADOOP-10940.patch, HADOOP-10940.patch
>
>
> The rpc client does no bounds checking of server responses.  In the case of 
> communicating with an older and incompatible RPC, this may lead to OOM issues 
> and leaking of resources.
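The defensive pattern the description calls for can be sketched as follows 
(names and the wire format here are illustrative, not Hadoop's actual IPC 
framing): validate the length header against a sane bound before allocating.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class BoundedRpcRead {
    // Reject an out-of-range length header up front, rather than letting
    // "new byte[length]" OOM the client when talking to an incompatible peer.
    static byte[] readResponse(DataInputStream in, int maxLength)
            throws IOException {
        int length = in.readInt();
        if (length < 0 || length > maxLength) {
            throw new IOException("RPC response length out of bounds: " + length);
        }
        byte[] buf = new byte[length];
        in.readFully(buf);
        return buf;
    }

    public static void main(String[] args) throws IOException {
        // Well-formed frame: 4-byte length (3) followed by 3 payload bytes.
        ByteArrayOutputStream good = new ByteArrayOutputStream();
        DataOutputStream dos = new DataOutputStream(good);
        dos.writeInt(3);
        dos.write(new byte[] {1, 2, 3});
        byte[] payload = readResponse(
            new DataInputStream(new ByteArrayInputStream(good.toByteArray())),
            1024);
        System.out.println("payload bytes: " + payload.length);

        // Hostile frame: absurd length header is rejected before allocation.
        ByteArrayOutputStream bad = new ByteArrayOutputStream();
        new DataOutputStream(bad).writeInt(Integer.MAX_VALUE);
        try {
            readResponse(new DataInputStream(
                new ByteArrayInputStream(bad.toByteArray())), 1024);
        } catch (IOException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```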



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12470) In-page TOC of documentation should be automatically generated by doxia macro

2016-03-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15175458#comment-15175458
 ] 

Hadoop QA commented on HADOOP-12470:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 49s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
42s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 44s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 34s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 9m 11s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
39s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 21s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 9m 25s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 
58s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 36s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 36s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 41s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 41s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 8m 56s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
38s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 4s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 17s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 9m 17s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 160m 19s 
{color} | {color:red} root in the patch failed with JDK v1.8.0_72. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 85m 21s {color} 
| {color:red} root in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
29s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 338m 36s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_72 Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestClientRMTokens |
|   | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
| JDK v1.7.0_95 Failed junit tests | hadoop.hdfs.TestDFSUpgradeFromImage |
|   | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
|   | hadoop.yarn.server.resourcemanager.TestClientRMTokens |
|   | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12790867/HADOOP-12470.001.patch
 |
| JIRA Issue | HADOOP-12470 |

[jira] [Commented] (HADOOP-12827) WebHdfs socket timeouts should be configurable

2016-03-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15175450#comment-15175450
 ] 

Hadoop QA commented on HADOOP-12827:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 31s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
0s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 18s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 19s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
27s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 26s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
25s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
33s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 26s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 11s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 19s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 19s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 19s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 19s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
25s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
59s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 26s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 10s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 0s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.8.0_72. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 55m 6s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_72. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 57s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 52m 7s 
{color} | {color:green} hadoop-hdfs in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |

[jira] [Commented] (HADOOP-12859) Disable hidding field style checks in class setters

2016-03-02 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15175428#comment-15175428
 ] 

Kai Zheng commented on HADOOP-12859:


Hi Andrew, below is what I tried.

1. With the current code:
{noformat}
[root@zkdesk hadoop]# mvn checkstyle:checkstyle
[root@zkdesk hadoop]# grep -c 'hides a field' 
hadoop-common-project/hadoop-common/target/test/checkstyle-errors.xml 
396
{noformat}

2. Added the following test code to the hadoop-common module:
{code}
public class MyCheckstyleCheck {
  private String test1;
  private String test2;
  private String test3;
  private String test4;

  public String getTest1() {
    return test1;
  }

  public void setTest1(String test1) {
    this.test1 = test1;
  }

  public String getTest2() {
    return test2;
  }

  public void setTest2(String test2) {
    this.test2 = test2;
  }

  public String getTest3() {
    return test3;
  }

  public void setTest3(String test3) {
    this.test3 = test3;
  }

  public String getTest4() {
    return test4;
  }

  public void setTest4(String test4) {
    this.test4 = test4;
  }
}
{code}
Running the above command again gives:
{noformat}
[root@zkdesk hadoop]# grep -c 'hides a field' 
hadoop-common-project/hadoop-common/target/test/checkstyle-errors.xml 
400
{noformat}

3. Applied the patch here and tested again; it gave:
{noformat}
[root@zkdesk hadoop]# grep -c 'hides a field' 
hadoop-common-project/hadoop-common/target/test/checkstyle-errors.xml 
297
{noformat}
In addition to {{mvn install}}, as you mentioned in HADOOP-12713, I also ran 
{{mvn clean package}} to ensure the patch took effect.
Note this change does not remove all {{hides a field}} warnings, because in 
other cases the check still makes sense.
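A checkstyle rule tweak of this shape would produce the behaviour tested above; 
whether the attached patch uses exactly these HiddenField properties is an 
assumption here.

```xml
<!-- Sketch only: keep the HiddenField check, but allow a parameter that
     hides a field inside setters and constructors, where the pattern is
     idiomatic, while still flagging it everywhere else. -->
<module name="HiddenField">
  <property name="ignoreSetter" value="true"/>
  <property name="ignoreConstructorParameter" value="true"/>
</module>
```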

> Disable hidding field style checks in class setters
> ---
>
> Key: HADOOP-12859
> URL: https://issues.apache.org/jira/browse/HADOOP-12859
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-12859-v1.patch
>
>
> As discussed in the mailing list, this will disable the style check for class 
> setters like the following:
> {noformat}
> void setBlockLocations(LocatedBlocks blockLocations) {:42: 'blockLocations' 
> hides a field.
> void setTimeout(int timeout) {:25: 'timeout' hides a field.
> void setLocatedBlocks(List<LocatedBlock> locatedBlocks) {:46: 'locatedBlocks' 
> hides a field.
> void setRemaining(long remaining) {:28: 'remaining' hides a field.
> void setBytesPerCRC(int bytesPerCRC) {:29: 'bytesPerCRC' hides a field.
> void setCrcType(DataChecksum.Type crcType) {:39: 'crcType' hides a field.
> void setCrcPerBlock(long crcPerBlock) {:30: 'crcPerBlock' hides a field.
> void setRefetchBlocks(boolean refetchBlocks) {:35: 'refetchBlocks' hides a 
> field.
> void setLastRetriedIndex(int lastRetriedIndex) {:34: 'lastRetriedIndex' hides 
> a field.
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12827) WebHdfs socket timeouts should be configurable

2016-03-02 Thread Austin Donnelly (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15175374#comment-15175374
 ] 

Austin Donnelly commented on HADOOP-12827:
--

Looks good to me.  Thanks for the updates [~chris.douglas].

> WebHdfs socket timeouts should be configurable
> --
>
> Key: HADOOP-12827
> URL: https://issues.apache.org/jira/browse/HADOOP-12827
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
> Environment: all
>Reporter: Austin Donnelly
>Assignee: Austin Donnelly
>  Labels: easyfix, newbie
> Attachments: HADOOP-12827.001.patch, HADOOP-12827.002.patch, 
> HADOOP-12827.002.patch, HADOOP-12827.002.patch, HADOOP-12827.003.patch, 
> HADOOP-12827.004.patch
>
>   Original Estimate: 0h
>  Remaining Estimate: 0h
>
> WebHdfs client connections use sockets with fixed timeouts of 60 seconds to 
> connect, and 60 seconds for reads.
> This is a problem because I am trying to use WebHdfs to access an archive 
> storage system which can take minutes to hours to return the requested data 
> over WebHdfs.
> The fix is to add new configuration file options to allow these 60s defaults 
> to be customised in hdfs-site.xml.
> If the new configuration options are not present, the behavior is unchanged 
> from before.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12863) Too many connection opened to TimelineServer while publishing entities

2016-03-02 Thread Rohith Sharma K S (JIRA)
Rohith Sharma K S created HADOOP-12863:
--

 Summary: Too many connection opened to TimelineServer while 
publishing entities
 Key: HADOOP-12863
 URL: https://issues.apache.org/jira/browse/HADOOP-12863
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Rohith Sharma K S
Priority: Critical


It is observed that too many connections are kept open to the TimelineServer 
while publishing entities via SystemMetricsPublisher. This can cause a resource 
shortage for other processes, or for the RM itself.

{noformat}
tcp        0      0 10.18.99.110:3999   10.18.214.60:59265  ESTABLISHED 115302/java
tcp        0      0 10.18.99.110:25001  :::*                LISTEN      115302/java
tcp        0      0 10.18.99.110:25002  :::*                LISTEN      115302/java
tcp        0      0 10.18.99.110:25003  :::*                LISTEN      115302/java
tcp        0      0 10.18.99.110:25004  :::*                LISTEN      115302/java
tcp        0      0 10.18.99.110:25005  :::*                LISTEN      115302/java
tcp        1      0 10.18.99.110:48866  10.18.99.110:8188   CLOSE_WAIT  115302/java
tcp        1      0 10.18.99.110:48137  10.18.99.110:8188   CLOSE_WAIT  115302/java
tcp        1      0 10.18.99.110:47553  10.18.99.110:8188   CLOSE_WAIT  115302/java
tcp        1      0 10.18.99.110:48424  10.18.99.110:8188   CLOSE_WAIT  115302/java
tcp        1      0 10.18.99.110:48139  10.18.99.110:8188   CLOSE_WAIT  115302/java
tcp        1      0 10.18.99.110:48096  10.18.99.110:8188   CLOSE_WAIT  115302/java
tcp        1      0 10.18.99.110:47558  10.18.99.110:8188   CLOSE_WAIT  115302/java
tcp        1      0 10.18.99.110:49270  10.18.99.110:8188   CLOSE_WAIT  115302/java
{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)