[jira] [Commented] (HADOOP-13039) Add documentation for configuration property ipc.maximum.data.length for controlling maximum RPC message size.

2016-04-26 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15259538#comment-15259538
 ] 

Mingliang Liu commented on HADOOP-13039:


Thank you [~arpitagarwal] and [~cnauroth] for your review and comments.

> Add documentation for configuration property ipc.maximum.data.length for 
> controlling maximum RPC message size.
> --
>
> Key: HADOOP-13039
> URL: https://issues.apache.org/jira/browse/HADOOP-13039
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Chris Nauroth
>Assignee: Mingliang Liu
> Fix For: 2.7.3
>
> Attachments: HADOOP-13039.000.patch, HADOOP-13039.001.patch, 
> HADOOP-13039.001.patch, HADOOP-13039.001.patch
>
>
> The RPC server enforces a maximum length on incoming messages.  Messages 
> larger than the maximum are rejected immediately.  The maximum length can be 
> tuned by setting configuration property {{ipc.maximum.data.length}}, but this 
> is not documented in core-site.xml.
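
For readers tuning this today, a minimal sketch of overriding the limit from code (the key name and its purpose come from this issue; the 64 MB default is per core-default.xml after this patch, and the 128 MB value is purely illustrative):

{code}
import org.apache.hadoop.conf.Configuration;

public class IpcMaxLengthSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Raise the maximum accepted RPC message size from the 64 MB default
    // to 128 MB; the server must be constructed with this Configuration.
    conf.setInt("ipc.maximum.data.length", 128 * 1024 * 1024);
    System.out.println(conf.getInt("ipc.maximum.data.length", -1));
  }
}
{code}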



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-13039) Add documentation for configuration property ipc.maximum.data.length for controlling maximum RPC message size.

2016-04-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15259496#comment-15259496
 ] 

Hudson commented on HADOOP-13039:
-

FAILURE: Integrated in Hadoop-trunk-Commit #9677 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9677/])
HADOOP-13039. Add documentation for configuration property (arp: rev 
ea5475d1c125ff96b0650d0182f694380412c0da)
* hadoop-common-project/hadoop-common/src/main/resources/core-default.xml


> Add documentation for configuration property ipc.maximum.data.length for 
> controlling maximum RPC message size.
> --
>
> Key: HADOOP-13039
> URL: https://issues.apache.org/jira/browse/HADOOP-13039
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Chris Nauroth
>Assignee: Mingliang Liu
> Fix For: 2.7.3
>
> Attachments: HADOOP-13039.000.patch, HADOOP-13039.001.patch, 
> HADOOP-13039.001.patch, HADOOP-13039.001.patch
>
>
> The RPC server enforces a maximum length on incoming messages.  Messages 
> larger than the maximum are rejected immediately.  The maximum length can be 
> tuned by setting configuration property {{ipc.maximum.data.length}}, but this 
> is not documented in core-site.xml.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-13039) Add documentation for configuration property ipc.maximum.data.length for controlling maximum RPC message size.

2016-04-26 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-13039:
---
Description: The RPC server enforces a maximum length on incoming messages. 
 Messages larger than the maximum are rejected immediately.  The maximum length 
can be tuned by setting configuration property {{ipc.maximum.data.length}}, but 
this is not documented in core-site.xml.  (was: The RPC server enforces a 
maximum length on incoming messages.  Messages larger than the maximum are 
rejected immediately as potentially malicious.  The maximum length can be tuned 
by setting configuration property {{ipc.maximum.data.length}}, but this is not 
documented in core-site.xml.)

> Add documentation for configuration property ipc.maximum.data.length for 
> controlling maximum RPC message size.
> --
>
> Key: HADOOP-13039
> URL: https://issues.apache.org/jira/browse/HADOOP-13039
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Chris Nauroth
>Assignee: Mingliang Liu
> Fix For: 2.7.3
>
> Attachments: HADOOP-13039.000.patch, HADOOP-13039.001.patch, 
> HADOOP-13039.001.patch, HADOOP-13039.001.patch
>
>
> The RPC server enforces a maximum length on incoming messages.  Messages 
> larger than the maximum are rejected immediately.  The maximum length can be 
> tuned by setting configuration property {{ipc.maximum.data.length}}, but this 
> is not documented in core-site.xml.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-13039) Add documentation for configuration property ipc.maximum.data.length for controlling maximum RPC message size.

2016-04-26 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-13039:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.7.3
   Status: Resolved  (was: Patch Available)

+1 I committed this for 2.7.3. Thanks for the contribution [~liuml07] and 
thanks [~cnauroth] for the code review.

> Add documentation for configuration property ipc.maximum.data.length for 
> controlling maximum RPC message size.
> --
>
> Key: HADOOP-13039
> URL: https://issues.apache.org/jira/browse/HADOOP-13039
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Chris Nauroth
>Assignee: Mingliang Liu
> Fix For: 2.7.3
>
> Attachments: HADOOP-13039.000.patch, HADOOP-13039.001.patch, 
> HADOOP-13039.001.patch, HADOOP-13039.001.patch
>
>
> The RPC server enforces a maximum length on incoming messages.  Messages 
> larger than the maximum are rejected immediately as potentially malicious.  
> The maximum length can be tuned by setting configuration property 
> {{ipc.maximum.data.length}}, but this is not documented in core-site.xml.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12469) distcp should not ignore the ignoreFailures option

2016-04-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15259455#comment-15259455
 ] 

Hadoop QA commented on HADOOP-12469:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 9s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
43s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 14s 
{color} | {color:green} trunk passed with JDK v1.8.0_92 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 17s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 22s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 2m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
30s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s 
{color} | {color:green} trunk passed with JDK v1.8.0_92 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed with JDK v1.8.0_92 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 15s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
39s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 10s 
{color} | {color:green} the patch passed with JDK v1.8.0_92 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 19s 
{color} | {color:green} hadoop-distcp in the patch passed with JDK v1.8.0_92. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 23s 
{color} | {color:green} hadoop-distcp in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 30m 37s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:fbe3e86 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12784743/HADOOP-12469.004.patch
 |
| JIRA Issue | HADOOP-12469 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 0c5233f3895e 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 68b4564 |
| Default Java | 1.7.0_95 |
| Multi-JDK versions |  

[jira] [Commented] (HADOOP-12469) distcp should not ignore the ignoreFailures option

2016-04-26 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15259419#comment-15259419
 ] 

Mingliang Liu commented on HADOOP-12469:


Can anyone commit this reviewed patch? Thanks.

> distcp should not ignore the ignoreFailures option
> --
>
> Key: HADOOP-12469
> URL: https://issues.apache.org/jira/browse/HADOOP-12469
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 2.7.1
>Reporter: Gera Shegalov
>Assignee: Mingliang Liu
>Priority: Critical
> Attachments: HADOOP-12469.000.patch, HADOOP-12469.001.patch, 
> HADOOP-12469.002.patch, HADOOP-12469.003.patch, HADOOP-12469.004.patch
>
>
> {{RetriableFileCopyCommand.CopyReadException}} is double-wrapped via
> # via {{RetriableCommand::execute}}
> # via {{CopyMapper#copyFileWithRetry}}
> before {{CopyMapper::handleFailure}} tests 
> {code}
> if (ignoreFailures && exception.getCause() instanceof
> RetriableFileCopyCommand.CopyReadException
> {code}
> which is always false.
> Orthogonally, ignoring failures should be mutually exclusive with the atomic 
> option; otherwise an incomplete dir is eligible for commit, defeating the 
> purpose.
>  
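
A sketch of the kind of check that survives the double-wrapping (a hypothetical helper for illustration, not the actual patch): walk the whole cause chain instead of testing only {{exception.getCause()}}.

{code}
// Hypothetical helper: true if any cause in the chain is a
// CopyReadException, so double-wrapping cannot hide it.
private static boolean hasCopyReadCause(Throwable t) {
  for (Throwable c = t; c != null; c = c.getCause()) {
    if (c instanceof RetriableFileCopyCommand.CopyReadException) {
      return true;
    }
  }
  return false;
}
{code}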



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12738) Create unit test to automatically compare Common related classes and core-default.xml

2016-04-26 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HADOOP-12738:
--
  Resolution: Fixed
Hadoop Flags: Reviewed
   Fix Version/s: 2.8.0
Target Version/s: 2.8.0
  Status: Resolved  (was: Patch Available)

Committed. Thanks, [~rchiang].

> Create unit test to automatically compare Common related classes and 
> core-default.xml
> -
>
> Key: HADOOP-12738
> URL: https://issues.apache.org/jira/browse/HADOOP-12738
> Project: Hadoop Common
>  Issue Type: Test
>Affects Versions: 2.7.1
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>Priority: Minor
>  Labels: supportability
> Fix For: 2.8.0
>
> Attachments: HADOOP-12738.001.patch, HADOOP-12738.002.patch, 
> HADOOP-12738.003.patch
>
>
> Create a unit test that will automatically compare the fields in the various 
> Common related classes and core-default.xml. It should throw an error if a 
> property is missing in either the class or the file.
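
A rough sketch of what such a test can do (an assumption-level illustration, not the committed TestCommonConfigurationFields): reflect over the public String key constants and check each against core-default.xml. Some keys legitimately have no default, so a real test needs an allowlist; this sketch only prints.

{code}
import java.lang.reflect.Field;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CommonConfigurationKeysPublic;

public class CoreDefaultCheck {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration(false);
    conf.addResource("core-default.xml");
    for (Field f : CommonConfigurationKeysPublic.class.getFields()) {
      // Key constants follow the FOO_BAR_KEY naming convention.
      if (f.getType() == String.class && f.getName().endsWith("_KEY")) {
        String key = (String) f.get(null);
        if (conf.get(key) == null) {
          System.out.println("missing from core-default.xml: " + key);
        }
      }
    }
  }
}
{code}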



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12756) Incorporate Aliyun OSS file system implementation

2016-04-26 Thread Ling Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15259396#comment-15259396
 ] 

Ling Zhou commented on HADOOP-12756:


Thank you for your comments Steve, they are very helpful.
1. The name OSS does have many meanings, so we will use hadoop-aliyun or some 
other name to replace hadoop-oss.
2. We will work with the latest hadoop-trunk and look for approaches to solve 
the http-client dependency conflicts.
3. We will make sure all dependencies are declared in hadoop-project/pom.xml.
4. This implementation differs a little from the aws module, and we will talk 
to Yi offline about that.
5. Yes, stability comes first, and performance work can be done in the next 
phase. For now we will focus on stability and learn more about the s3a work.

> Incorporate Aliyun OSS file system implementation
> -
>
> Key: HADOOP-12756
> URL: https://issues.apache.org/jira/browse/HADOOP-12756
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Reporter: shimingfei
>Assignee: shimingfei
> Attachments: 0001-OSS-filesystem-integration-with-Hadoop.patch, HCFS 
> User manual.md, OSS integration.pdf, OSS integration.pdf
>
>
> Aliyun OSS is widely used among China’s cloud users, but currently it is not 
> easy to access data stored on OSS from a user’s Hadoop/Spark application, 
> because Hadoop has no native support for OSS.
> This work aims to integrate Aliyun OSS with Hadoop. With simple configuration, 
> Spark/Hadoop applications can read/write data from OSS without any code 
> change, narrowing the gap between a user’s application and its data storage, 
> like what has been done for S3 in Hadoop.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12738) Create unit test to automatically compare Common related classes and core-default.xml

2016-04-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15259374#comment-15259374
 ] 

Hudson commented on HADOOP-12738:
-

FAILURE: Integrated in Hadoop-trunk-Commit #9675 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9675/])
HADOOP-12738. Create unit test to automatically compare Common related 
(iwasakims: rev 68b4564e78380a2fac1a9000fb862104d4bc86e5)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestCommonConfigurationFields.java


> Create unit test to automatically compare Common related classes and 
> core-default.xml
> -
>
> Key: HADOOP-12738
> URL: https://issues.apache.org/jira/browse/HADOOP-12738
> Project: Hadoop Common
>  Issue Type: Test
>Affects Versions: 2.7.1
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>Priority: Minor
>  Labels: supportability
> Attachments: HADOOP-12738.001.patch, HADOOP-12738.002.patch, 
> HADOOP-12738.003.patch
>
>
> Create a unit test that will automatically compare the fields in the various 
> Common related classes and core-default.xml. It should throw an error if a 
> property is missing in either the class or the file.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12892) fix/rewrite create-release

2016-04-26 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15259373#comment-15259373
 ] 

Andrew Wang commented on HADOOP-12892:
--

Thanks for the rev Allen, LGTM +1. Tested locally and seems to have worked. I 
see you chose not to do some of the option validations I recommended, but 
that's a matter of taste.

Only nit is that there's an extra space in the --asfrelease usage text. Minor 
enough that we can fix it at commit time? IDK if you have anything else 
feature-wise you want to add.

> fix/rewrite create-release
> --
>
> Key: HADOOP-12892
> URL: https://issues.apache.org/jira/browse/HADOOP-12892
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Attachments: HADOOP-12892.00.patch, HADOOP-12892.01.patch, 
> HADOOP-12892.02.patch, HADOOP-12892.03.patch
>
>
> create-release needs some major surgery.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12738) Create unit test to automatically compare Common related classes and core-default.xml

2016-04-26 Thread Masatake Iwasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15259364#comment-15259364
 ] 

Masatake Iwasaki commented on HADOOP-12738:
---

+1, committing this.

> Create unit test to automatically compare Common related classes and 
> core-default.xml
> -
>
> Key: HADOOP-12738
> URL: https://issues.apache.org/jira/browse/HADOOP-12738
> Project: Hadoop Common
>  Issue Type: Test
>Affects Versions: 2.7.1
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>Priority: Minor
>  Labels: supportability
> Attachments: HADOOP-12738.001.patch, HADOOP-12738.002.patch, 
> HADOOP-12738.003.patch
>
>
> Create a unit test that will automatically compare the fields in the various 
> Common related classes and core-default.xml. It should throw an error if a 
> property is missing in either the class or the file.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12563) Updated utility to create/modify token files

2016-04-26 Thread Matthew Paduano (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthew Paduano updated HADOOP-12563:
-
Attachment: HADOOP-12563.16.patch

diff from patch 15 to patch 16
{code}
2165c2165
< +Mockito.verify(spyCreds).readProtos(in);
---
> +Mockito.verify(spyCreds).readProto(in);
{code}

testing 
{code}
mvn test 
-Dtest=TestCredentials,TestRMContainerAllocator,TestContainerManagerRecovery,TestDtUtilShell,TestCommandShell
---
Test set: org.apache.hadoop.security.TestCredentials
---
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.961 sec - in 
org.apache.hadoop.security.TestCredentials
---
Test set: org.apache.hadoop.security.token.TestDtUtilShell
---
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.778 sec - in 
org.apache.hadoop.security.token.TestDtUtilShell
---
Test set: org.apache.hadoop.tools.TestCommandShell
---
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.304 sec - in 
org.apache.hadoop.tools.TestCommandShell
---
Test set: 
org.apache.hadoop.yarn.server.nodemanager.containermanager.TestContainerManagerRecovery
---
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 45.827 sec - in 
org.apache.hadoop.yarn.server.nodemanager.containermanager.TestContainerManagerRecovery
---
Test set: org.apache.hadoop.mapreduce.v2.app.rm.TestRMContainerAllocator
---
Tests run: 26, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 133.303 sec - 
in org.apache.hadoop.mapreduce.v2.app.rm.TestRMContainerAllocator
{code}

> Updated utility to create/modify token files
> 
>
> Key: HADOOP-12563
> URL: https://issues.apache.org/jira/browse/HADOOP-12563
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Matthew Paduano
> Fix For: 3.0.0
>
> Attachments: HADOOP-12563.01.patch, HADOOP-12563.02.patch, 
> HADOOP-12563.03.patch, HADOOP-12563.04.patch, HADOOP-12563.05.patch, 
> HADOOP-12563.06.patch, HADOOP-12563.07.patch, HADOOP-12563.07.patch, 
> HADOOP-12563.08.patch, HADOOP-12563.09.patch, HADOOP-12563.10.patch, 
> HADOOP-12563.11.patch, HADOOP-12563.12.patch, HADOOP-12563.13.patch, 
> HADOOP-12563.14.patch, HADOOP-12563.15.patch, HADOOP-12563.16.patch, 
> dtutil-test-out, example_dtutil_commands_and_output.txt, 
> generalized_token_case.pdf
>
>
> hdfs fetchdt is missing some critical features and is geared almost 
> exclusively towards HDFS operations.  Additionally, the token files that are 
> created use Java serialization, which is hard/impossible to deal with in 
> other languages. It should be replaced with a better utility in common that 
> can read/write protobuf-based token files, has enough flexibility to be used 
> with other services, and offers key functionality such as append and rename. 
> The old version file format should still be supported for backward 
> compatibility, but will be effectively deprecated.
> A follow-on JIRA will deprecate fetchdt.
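
For context, a sketch of the pre-existing Credentials file API the new utility builds on (the path is illustrative; writeTokenStorageFile/readTokenStorageFile are the legacy Writable-format calls, and the protobuf format is what this issue adds alongside them):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.Credentials;

public class TokenFileSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path file = new Path("/tmp/tokens.bin"); // illustrative path
    Credentials creds = new Credentials();
    // Write an (empty) token file in the legacy Writable format...
    creds.writeTokenStorageFile(file, conf);
    // ...and read it back.
    Credentials back = Credentials.readTokenStorageFile(file, conf);
    System.out.println(back.numberOfTokens() + " tokens");
  }
}
{code}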



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12563) Updated utility to create/modify token files

2016-04-26 Thread Matthew Paduano (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15259321#comment-15259321
 ] 

Matthew Paduano commented on HADOOP-12563:
--

I am sorry.  I cannot help with Windows.

> Updated utility to create/modify token files
> 
>
> Key: HADOOP-12563
> URL: https://issues.apache.org/jira/browse/HADOOP-12563
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Matthew Paduano
> Fix For: 3.0.0
>
> Attachments: HADOOP-12563.01.patch, HADOOP-12563.02.patch, 
> HADOOP-12563.03.patch, HADOOP-12563.04.patch, HADOOP-12563.05.patch, 
> HADOOP-12563.06.patch, HADOOP-12563.07.patch, HADOOP-12563.07.patch, 
> HADOOP-12563.08.patch, HADOOP-12563.09.patch, HADOOP-12563.10.patch, 
> HADOOP-12563.11.patch, HADOOP-12563.12.patch, HADOOP-12563.13.patch, 
> HADOOP-12563.14.patch, HADOOP-12563.15.patch, dtutil-test-out, 
> example_dtutil_commands_and_output.txt, generalized_token_case.pdf
>
>
> hdfs fetchdt is missing some critical features and is geared almost 
> exclusively towards HDFS operations.  Additionally, the token files that are 
> created use Java serialization, which is hard/impossible to deal with in 
> other languages. It should be replaced with a better utility in common that 
> can read/write protobuf-based token files, has enough flexibility to be used 
> with other services, and offers key functionality such as append and rename. 
> The old version file format should still be supported for backward 
> compatibility, but will be effectively deprecated.
> A follow-on JIRA will deprecate fetchdt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12563) Updated utility to create/modify token files

2016-04-26 Thread Matthew Paduano (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthew Paduano updated HADOOP-12563:
-
Attachment: HADOOP-12563.15.patch

diff of patch 14 to patch 15

{code}
218c218
< +  readProto(in);
---
> +  readProtos(in);
333c333
< +  public void readProto(DataInput in) throws IOException {
---
> +  public void readProtos(DataInput in) throws IOException {



<  public class TestCredentials {
...
< +  @Test
< +  public void testBasicReadWriteProtoEmpty()
< +  throws IOException, NoSuchAlgorithmException {
< +String testname ="testBasicReadWriteProtoEmpty";
< +Credentials ts = new Credentials();
< +writeCredentialsProto(ts, testname);
< +Credentials ts2 = readCredentialsProto(testname);
< +assertEquals("test empty tokens", 0, ts2.numberOfTokens());
< +assertEquals("test empty keys", 0, ts2.numberOfSecretKeys());
< +  }
< +
< +  @Test
< +  public void testBasicReadWriteProto()
< +  throws IOException, NoSuchAlgorithmException {
< +String testname ="testBasicReadWriteProto";
< +Text tok1 = new Text("token1");
< +Text tok2 = new Text("token2");
< +Text key1 = new Text("key1");
< +Credentials ts = generateCredentials(tok1, tok2, key1);
< +writeCredentialsProto(ts, testname);
< +Credentials ts2 = readCredentialsProto(testname);
< +assertCredentials(testname, tok1, key1, ts, ts2);
< +assertCredentials(testname, tok2, key1, ts, ts2);
< +  }
< +
< +  @Test
< +  public void testBasicReadWriteStreamEmpty()
< +  throws IOException, NoSuchAlgorithmException {
< +String testname ="testBasicReadWriteStreamEmpty";
< +Credentials ts = new Credentials();
< +writeCredentialsStream(ts, testname);
< +Credentials ts2 = readCredentialsStream(testname);
< +assertEquals("test empty tokens", 0, ts2.numberOfTokens());
< +assertEquals("test empty keys", 0, ts2.numberOfSecretKeys());
< +  }
< +
< +  @Test
< +  public void testBasicReadWriteStream()
< +  throws IOException, NoSuchAlgorithmException {
< +String testname ="testBasicReadWriteStream";
< +Text tok1 = new Text("token1");
< +Text tok2 = new Text("token2");
< +Text key1 = new Text("key1");
< +Credentials ts = generateCredentials(tok1, tok2, key1);
< +writeCredentialsStream(ts, testname);
< +Credentials ts2 = readCredentialsStream(testname);
< +assertCredentials(testname, tok1, key1, ts, ts2);
< +assertCredentials(testname, tok2, key1, ts, ts2);
< +  }
< +
< +  @Test
< +  /**
< +   * Verify the suitability of read/writeProto for use with Writable interface.
< +   * This test uses only empty credentials.
< +   */
< +  public void testWritablePropertiesEmpty()
< +  throws IOException, NoSuchAlgorithmException {
< +String testname ="testWritablePropertiesEmpty";
< +Credentials ts = new Credentials();
< +Credentials ts2 = new Credentials();
< +writeCredentialsProtos(ts, ts2, testname);
< +List<Credentials> clist = readCredentialsProtos(testname);
< +assertEquals("test empty tokens 0", 0, clist.get(0).numberOfTokens());
< +assertEquals("test empty keys 0", 0, clist.get(0).numberOfSecretKeys());
< +assertEquals("test empty tokens 1", 0, clist.get(1).numberOfTokens());
< +assertEquals("test empty keys 1", 0, clist.get(1).numberOfSecretKeys());
< +  }
< +
< +  @Test
< +  /**
< +   * Verify the suitability of read/writeProto for use with Writable interface.
< +   */
< +  public void testWritableProperties()
< +  throws IOException, NoSuchAlgorithmException {
< +String testname ="testWritableProperties";
< +Text tok1 = new Text("token1");
< +Text tok2 = new Text("token2");
< +Text key1 = new Text("key1");
< +Credentials ts = generateCredentials(tok1, tok2, key1);
< +Text tok3 = new Text("token3");
< +Text key2 = new Text("key2");
< +Credentials ts2 = generateCredentials(tok1, tok3, key2);
< +writeCredentialsProtos(ts, ts2, testname);
< +List<Credentials> clist = readCredentialsProtos(testname);
< +assertCredentials(testname, tok1, key1, ts, clist.get(0));
< +assertCredentials(testname, tok2, key1, ts, clist.get(0));
< +assertCredentials(testname, tok1, key2, ts2, clist.get(1));
< +assertCredentials(testname, tok3, key2, ts2, clist.get(1));
< +  }
< +
< +  private Credentials generateCredentials(Text t1, Text t2, Text t3)
< +  throws NoSuchAlgorithmException {
< +Text kind = new Text("TESTTOK");
< +byte[] id1 = {0x69, 0x64, 0x65, 0x6e, 0x74, 0x69, 0x66, 0x69, 0x65, 0x72};
< +byte[] pass1 = {0x70, 0x61, 0x73, 0x73, 0x77, 0x6f, 0x72, 0x64};
< +byte[] id2 = {0x68, 0x63, 0x64, 0x6d, 0x73, 0x68, 0x65, 0x68, 0x64, 0x71};
< +byte[] pass2 = {0x6f, 0x60, 0x72, 0x72, 0x76, 0x6e, 0x71, 0x63};
< +Credentials ts = new Credentials();
< +generateToken(ts, id1, pass1, kind, t1);
< +generateToken(ts, id2, pass2, kind, t2);
< +generateKey(ts, t3);
< +

[jira] [Commented] (HADOOP-12957) Limit the number of outstanding async calls

2016-04-26 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15259283#comment-15259283
 ] 

Tsz Wo Nicholas Sze commented on HADOOP-12957:
--

Thanks for the update.  I think the patch may not work in some exception cases, 
since connection.sendRpcRequest(call) may throw an exception.  Then the count 
won't be decremented.
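
In other words, a sketch of the failure mode (asyncCallCounter and the surrounding names are assumed here, not taken from the patch): the decrement has to happen on the exception path too.

{code}
// Sketch only: the counter must not leak when sendRpcRequest throws.
asyncCallCounter.incrementAndGet();
try {
  connection.sendRpcRequest(call);
} catch (Exception e) {
  asyncCallCounter.decrementAndGet(); // undo the reservation on failure
  throw e;
}
// On success, the counter is decremented later, when the reply is consumed.
{code}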

> Limit the number of outstanding async calls
> ---
>
> Key: HADOOP-12957
> URL: https://issues.apache.org/jira/browse/HADOOP-12957
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HADOOP-12957-HADOOP-12909.000.patch, 
> HADOOP-12957-combo.000.patch, HADOOP-12957.001.patch, HADOOP-12957.002.patch
>
>
> In async RPC, if the callers don't read replies fast enough, the buffer 
> storing replies could be used up. This is to propose limiting the number of 
> outstanding async calls to eliminate the issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12579) Deprecate and remove WriteableRPCEngine

2016-04-26 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-12579:
---
Attachment: HADOOP-12579-v9.patch

Updated the patch to remove the entry for the old engine in the protocol as 
follows.
{code}
 /**
  * RpcKind determine the rpcEngine and the serialization of the rpc request
+ * Note: 1 for RPC_WRITABLE, WritableRpcEngine, obsolete and removed
  */
 enum RpcKindProto {
   RPC_BUILTIN  = 0;  // Used for built in calls by tests
-  RPC_WRITABLE = 1;  // Use WritableRpcEngine, the actual usage removed
   RPC_PROTOCOL_BUFFER  = 2;  // Use ProtobufRpcEngine
 }
{code}

> Deprecate and remove WriteableRPCEngine
> ---
>
> Key: HADOOP-12579
> URL: https://issues.apache.org/jira/browse/HADOOP-12579
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Haohui Mai
> Attachments: HADOOP-12579-v1.patch, HADOOP-12579-v3.patch, 
> HADOOP-12579-v4.patch, HADOOP-12579-v5.patch, HADOOP-12579-v6.patch, 
> HADOOP-12579-v7.patch, HADOOP-12579-v8.patch, HADOOP-12579-v9.patch
>
>
> The {{WriteableRPCEngine}} depends on Java's serialization mechanisms for RPC 
> requests. Without proper checks, it has been shown that it can lead to security 
> vulnerabilities such as remote code execution (e.g., COLLECTIONS-580, 
> HADOOP-12577).
> The current implementation has migrated from {{WriteableRPCEngine}} to 
> {{ProtobufRPCEngine}} now. This jira proposes to deprecate 
> {{WriteableRPCEngine}} in branch-2 and to remove it in trunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12911) Upgrade Hadoop MiniKDC with Kerby

2016-04-26 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15259169#comment-15259169
 ] 

Kai Zheng commented on HADOOP-12911:


Oh, I see. Thanks Andrew for the correction and the clarification.

> Upgrade Hadoop MiniKDC with Kerby
> -
>
> Key: HADOOP-12911
> URL: https://issues.apache.org/jira/browse/HADOOP-12911
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Reporter: Jiajia Li
>Assignee: Jiajia Li
> Attachments: HADOOP-12911-v1.patch, HADOOP-12911-v2.patch, 
> HADOOP-12911-v3.patch, HADOOP-12911-v4.patch, HADOOP-12911-v5.patch, 
> HADOOP-12911-v6.patch
>
>
> As discussed in the mailing list, we’d like to introduce Apache Kerby into 
> Hadoop. Initially it’s good to start with upgrading Hadoop MiniKDC with Kerby 
> offerings. Apache Kerby (https://github.com/apache/directory-kerby), as an 
> Apache Directory sub project, is a Java Kerberos binding. It provides a 
> SimpleKDC server that borrowed ideas from MiniKDC and implemented all the 
> facilities existing in MiniKDC. Currently MiniKDC depends on the old Kerberos 
> implementation in Directory Server project, but the implementation is stopped 
> being maintained. Directory community has a plan to replace the 
> implementation using Kerby. MiniKDC can use Kerby SimpleKDC directly to avoid 
> depending on the full of Directory project. Kerby also provides nice identity 
> backends such as the lightweight memory based one and the very simple json 
> one for easy development and test environments.
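
For readers unfamiliar with Kerby, a sketch of how the SimpleKDC mentioned above is typically driven (exact method names may differ between Kerby versions; treat this as an assumption-level illustration):

{code}
import org.apache.kerby.kerberos.kerb.server.SimpleKdcServer;

public class SimpleKdcSketch {
  public static void main(String[] args) throws Exception {
    SimpleKdcServer kdc = new SimpleKdcServer();
    kdc.setKdcHost("localhost");
    kdc.setAllowUdp(true);
    kdc.init();   // prepares krb5.conf and the identity backend
    kdc.start();
    kdc.createPrincipal("hdfs/localhost@EXAMPLE.COM", "secret"); // test principal
    kdc.stop();
  }
}
{code}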



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12911) Upgrade Hadoop MiniKDC with Kerby

2016-04-26 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-12911:
-
Target Version/s: 3.0.0
   Fix Version/s: (was: 3.0.0)

Sounds good to me, thanks Kai. I unset "Fix Version" and set the "Target 
Version" to 3.0.0; we normally only set "Fix Version" when the patch is 
committed.

> Upgrade Hadoop MiniKDC with Kerby
> -
>
> Key: HADOOP-12911
> URL: https://issues.apache.org/jira/browse/HADOOP-12911
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Reporter: Jiajia Li
>Assignee: Jiajia Li
> Attachments: HADOOP-12911-v1.patch, HADOOP-12911-v2.patch, 
> HADOOP-12911-v3.patch, HADOOP-12911-v4.patch, HADOOP-12911-v5.patch, 
> HADOOP-12911-v6.patch
>
>
> As discussed in the mailing list, we’d like to introduce Apache Kerby into 
> Hadoop. Initially it’s good to start with upgrading Hadoop MiniKDC with Kerby 
> offerings. Apache Kerby (https://github.com/apache/directory-kerby), as an 
> Apache Directory sub project, is a Java Kerberos binding. It provides a 
> SimpleKDC server that borrowed ideas from MiniKDC and implemented all the 
> facilities existing in MiniKDC. Currently MiniKDC depends on the old Kerberos 
> implementation in Directory Server project, but the implementation is stopped 
> being maintained. Directory community has a plan to replace the 
> implementation using Kerby. MiniKDC can use Kerby SimpleKDC directly to avoid 
> depending on the full of Directory project. Kerby also provides nice identity 
> backends such as the lightweight memory based one and the very simple json 
> one for easy development and test environments.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12957) Limit the number of outstanding async calls

2016-04-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15259119#comment-15259119
 ] 

Hadoop QA commented on HADOOP-12957:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
38s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 53s 
{color} | {color:green} trunk passed with JDK v1.8.0_92 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 44s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
30s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 7s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
19s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
44s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s 
{color} | {color:green} trunk passed with JDK v1.8.0_92 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
40s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 41s 
{color} | {color:green} the patch passed with JDK v1.8.0_92 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 41s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 45s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 45s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s 
{color} | {color:green} the patch passed with JDK v1.8.0_92 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 48s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_92. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 1s {color} | 
{color:red} hadoop-common in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 62m 44s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_92 Failed junit tests | hadoop.ipc.TestIPC |
| JDK v1.7.0_95 Failed junit tests | hadoop.metrics2.impl.TestGangliaMetrics |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:fbe3e86 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12800888/HADOOP-12957.002.patch
 |
| JIRA Issue | HADOOP-12957 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 4dfa6a5f3377 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (HADOOP-13010) Refactor raw erasure coders

2016-04-26 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15259120#comment-15259120
 ] 

Kai Zheng commented on HADOOP-13010:


Thanks [~lirui] for taking up the work on the other part that needs similar 
refactoring.

> Refactor raw erasure coders
> ---
>
> Key: HADOOP-13010
> URL: https://issues.apache.org/jira/browse/HADOOP-13010
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: 3.0.0
>
> Attachments: HADOOP-13010-v1.patch, HADOOP-13010-v2.patch, 
> HADOOP-13010-v3.patch, HADOOP-13010-v4.patch, HADOOP-13010-v5.patch
>
>
> This will refactor raw erasure coders according to some comments received so 
> far.
> * As discussed in HADOOP-11540 and suggested by [~cmccabe], better not to 
> rely class inheritance to reuse the codes, instead they can be moved to some 
> utility.
> * Suggested by [~jingzhao] somewhere quite some time ago, better to have a 
> state holder to keep some checking results for later reuse during an 
> encode/decode call.
> This would not get rid of some inheritance levels as doing so isn't clear yet 
> for the moment and also incurs big impact. I do wish the end result by this 
> refactoring will make all the levels more clear and easier to follow.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12911) Upgrade Hadoop MiniKDC with Kerby

2016-04-26 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15259117#comment-15259117
 ] 

Kai Zheng commented on HADOOP-12911:


As [~jiajia] clarified, the patch keeps the existing configuration interfaces, 
and a survey of existing downstream clients shows they only use the interface 
methods that are kept, so we don't have to mark it as incompatible. As 
[~ste...@apache.org] suggested previously, I target this for the 3.0 release. 
[~andrew.wang], could you please comment if you think otherwise? Thanks!

> Upgrade Hadoop MiniKDC with Kerby
> -
>
> Key: HADOOP-12911
> URL: https://issues.apache.org/jira/browse/HADOOP-12911
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Reporter: Jiajia Li
>Assignee: Jiajia Li
> Fix For: 3.0.0
>
> Attachments: HADOOP-12911-v1.patch, HADOOP-12911-v2.patch, 
> HADOOP-12911-v3.patch, HADOOP-12911-v4.patch, HADOOP-12911-v5.patch, 
> HADOOP-12911-v6.patch
>
>
> As discussed in the mailing list, we’d like to introduce Apache Kerby into 
> Hadoop. Initially it’s good to start with upgrading Hadoop MiniKDC with Kerby 
> offerings. Apache Kerby (https://github.com/apache/directory-kerby), as an 
> Apache Directory sub project, is a Java Kerberos binding. It provides a 
> SimpleKDC server that borrowed ideas from MiniKDC and implemented all the 
> facilities existing in MiniKDC. Currently MiniKDC depends on the old Kerberos 
> implementation in Directory Server project, but the implementation is stopped 
> being maintained. Directory community has a plan to replace the 
> implementation using Kerby. MiniKDC can use Kerby SimpleKDC directly to avoid 
> depending on the full of Directory project. Kerby also provides nice identity 
> backends such as the lightweight memory based one and the very simple json 
> one for easy development and test environments.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12911) Upgrade Hadoop MiniKDC with Kerby

2016-04-26 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-12911:
---
Fix Version/s: 3.0.0

> Upgrade Hadoop MiniKDC with Kerby
> -
>
> Key: HADOOP-12911
> URL: https://issues.apache.org/jira/browse/HADOOP-12911
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Reporter: Jiajia Li
>Assignee: Jiajia Li
> Fix For: 3.0.0
>
> Attachments: HADOOP-12911-v1.patch, HADOOP-12911-v2.patch, 
> HADOOP-12911-v3.patch, HADOOP-12911-v4.patch, HADOOP-12911-v5.patch, 
> HADOOP-12911-v6.patch
>
>
> As discussed in the mailing list, we’d like to introduce Apache Kerby into 
> Hadoop. Initially it’s good to start with upgrading Hadoop MiniKDC with Kerby 
> offerings. Apache Kerby (https://github.com/apache/directory-kerby), as an 
> Apache Directory sub project, is a Java Kerberos binding. It provides a 
> SimpleKDC server that borrowed ideas from MiniKDC and implemented all the 
> facilities existing in MiniKDC. Currently MiniKDC depends on the old Kerberos 
> implementation in Directory Server project, but the implementation is stopped 
> being maintained. Directory community has a plan to replace the 
> implementation using Kerby. MiniKDC can use Kerby SimpleKDC directly to avoid 
> depending on the full of Directory project. Kerby also provides nice identity 
> backends such as the lightweight memory based one and the very simple json 
> one for easy development and test environments.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12579) Deprecate and remove WriteableRPCEngine

2016-04-26 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15259099#comment-15259099
 ] 

Kai Zheng commented on HADOOP-12579:


OK, I agree: since we keep the enum values for the other entries, I'll just 
remove the obsolete entry. Will update it.

> Deprecate and remove WriteableRPCEngine
> ---
>
> Key: HADOOP-12579
> URL: https://issues.apache.org/jira/browse/HADOOP-12579
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Haohui Mai
> Attachments: HADOOP-12579-v1.patch, HADOOP-12579-v3.patch, 
> HADOOP-12579-v4.patch, HADOOP-12579-v5.patch, HADOOP-12579-v6.patch, 
> HADOOP-12579-v7.patch, HADOOP-12579-v8.patch
>
>
> The {{WriteableRPCEngine}} depends on Java's serialization mechanisms for RPC 
> requests. Without proper checks, it has been shown that it can lead to security 
> vulnerabilities such as remote code execution (e.g., COLLECTIONS-580, 
> HADOOP-12577).
> The current implementation has migrated from {{WriteableRPCEngine}} to 
> {{ProtobufRPCEngine}} now. This jira proposes to deprecate 
> {{WriteableRPCEngine}} in branch-2 and to remove it in trunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12847) hadoop daemonlog should support https and SPNEGO for Kerberized cluster

2016-04-26 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15259046#comment-15259046
 ] 

Yongjun Zhang commented on HADOOP-12847:


Hi [~jojochuang],

Thanks for reporting and working on this issue!

A few quick comments:
1. Make sure the default behavior is the same as without passing the new 
switches.
2. I think it's better to move the param parsing part of doGetLevel into the 
dedicated param parsing method.
3. Make "http" and "https" constants, such as {{PROTOCOL_HTTP}} and 
{{PROTOCOL_HTTPS}}, and use them at the currently hardcoded places.
4. I expect to see the following test results; would you please test it out?

||SSL-enabled||Kerberized||Test Output With -http||Test Output With -https||
|No|No|Pass|Fail|
|No|Yes|Pass|Fail|
|Yes|No|Fail|Pass|
|Yes|Yes|Fail|Pass|

Thanks.

   

> hadoop daemonlog should support https and SPNEGO for Kerberized cluster
> ---
>
> Key: HADOOP-12847
> URL: https://issues.apache.org/jira/browse/HADOOP-12847
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-12847.001.patch, HADOOP-12847.002.patch
>
>
> {{hadoop daemonlog}} is a simple, yet useful tool for debugging.
> However, it does not support https, nor does it support a Kerberized Hadoop 
> cluster.
> Using {{AuthenticatedURL}}, it will be able to support SPNEGO negotiation 
> with a Kerberized name node web ui. It will also fall back to simple 
> authentication if the cluster is not Kerberized.
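
A minimal sketch of the {{AuthenticatedURL}} flow described above (the endpoint URL is hypothetical; SPNEGO is negotiated when the cluster is Kerberized, with fallback to simple auth otherwise):

{code}
import java.net.HttpURLConnection;
import java.net.URL;
import org.apache.hadoop.security.authentication.client.AuthenticatedURL;

public class DaemonlogSketch {
  public static void main(String[] args) throws Exception {
    AuthenticatedURL.Token token = new AuthenticatedURL.Token();
    // Hypothetical NameNode endpoint; daemonlog drives the /logLevel servlet.
    URL url = new URL(
        "https://nn.example.com:9871/logLevel?log=org.example&level=DEBUG");
    HttpURLConnection conn = new AuthenticatedURL().openConnection(url, token);
    System.out.println(conn.getResponseCode() + " " + conn.getResponseMessage());
  }
}
{code}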



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12957) Limit the number of outstanding async calls

2016-04-26 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15259000#comment-15259000
 ] 

Xiaobing Zhou commented on HADOOP-12957:


Patch v002 used a different way to probe limit of async calls, e.g.
{code}
for (;;) {
  try {
    doCall(idx, param);
    return;
  } catch (AsyncCallLimitExceededException e) {
    /**
     * reached limit of async calls, fetch results of finished async calls
     * to let follow-on calls go
     */
    start = end;
    end = idx;
    waitForReturnValues(start, end);
  }
}
{code}

Thank you [~szetszwo] for the comment.

> Limit the number of outstanding async calls
> ---
>
> Key: HADOOP-12957
> URL: https://issues.apache.org/jira/browse/HADOOP-12957
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HADOOP-12957-HADOOP-12909.000.patch, 
> HADOOP-12957-combo.000.patch, HADOOP-12957.001.patch, HADOOP-12957.002.patch
>
>
> In async RPC, if the callers don't read replies fast enough, the buffer 
> storing replies could be used up. This is to propose limiting the number of 
> outstanding async calls to eliminate the issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12974) Create a CachingGetSpaceUsed implementation that uses df

2016-04-26 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15259010#comment-15259010
 ] 

Elliott Clark commented on HADOOP-12974:


ping?

> Create a CachingGetSpaceUsed implementation that uses df
> 
>
> Key: HADOOP-12974
> URL: https://issues.apache.org/jira/browse/HADOOP-12974
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 2.9.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HADOOP-12974v0.patch, HADOOP-12974v1.patch, 
> HADOOP-12974v2.patch, HADOOP-12974v3.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12957) Limit the number of outstanding async calls

2016-04-26 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HADOOP-12957:
---
Attachment: HADOOP-12957.002.patch

> Limit the number of outstanding async calls
> ---
>
> Key: HADOOP-12957
> URL: https://issues.apache.org/jira/browse/HADOOP-12957
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HADOOP-12957-HADOOP-12909.000.patch, 
> HADOOP-12957-combo.000.patch, HADOOP-12957.001.patch, HADOOP-12957.002.patch
>
>
> In async RPC, if the callers don't read replies fast enough, the buffer 
> storing replies could be used up. This is to propose limiting the number of 
> outstanding async calls to eliminate the issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-13064) LineReader reports incorrect number of bytes read resulting in correctness issues using LineRecordReader

2016-04-26 Thread Joe Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Ellis updated HADOOP-13064:
---
Attachment: LineReaderTest.java

Here's an example that fails.

> LineReader reports incorrect number of bytes read resulting in correctness 
> issues using LineRecordReader
> 
>
> Key: HADOOP-13064
> URL: https://issues.apache.org/jira/browse/HADOOP-13064
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Joe Ellis
>Priority: Critical
> Attachments: LineReaderTest.java
>
>
> The specific issue we were seeing with LineReader is that when we pass in 
> '\r\n' as the line delimiter the number of bytes that it claims to have read 
> is less than what it actually read. We narrowed this down to only happening 
> when the delimiter is split across the internal buffer boundary, so if 
> fillbuffer fills with "row\r" and the next call fills with "\n" then the 
> number of bytes reported would be 4 rather than 5.
> This results in correctness issues in LineRecordReader because if this off by 
> one issue is seen enough times when reading a split then it will continue to 
> read records past its split boundary, resulting in records appearing to come 
> from multiple splits.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-13064) LineReader reports incorrect number of bytes read resulting in correctness issues using LineRecordReader

2016-04-26 Thread Joe Ellis (JIRA)
Joe Ellis created HADOOP-13064:
--

 Summary: LineReader reports incorrect number of bytes read 
resulting in correctness issues using LineRecordReader
 Key: HADOOP-13064
 URL: https://issues.apache.org/jira/browse/HADOOP-13064
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.1
Reporter: Joe Ellis
Priority: Critical


The specific issue we were seeing with LineReader is that when we pass in 
'\r\n' as the line delimiter the number of bytes that it claims to have read is 
less than what it actually read. We narrowed this down to only happening when 
the delimiter is split across the internal buffer boundary, so if fillbuffer 
fills with "row\r" and the next call fills with "\n" then the number of bytes 
reported would be 4 rather than 5.

This results in correctness issues in LineRecordReader because if this off by 
one issue is seen enough times when reading a split then it will continue to 
read records past its split boundary, resulting in records appearing to come 
from multiple splits.
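
A small reproduction sketch of the boundary case described (the buffer size is chosen so "\r\n" straddles a fill; the expected count follows from the report):

{code}
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.util.LineReader;

public class LineReaderRepro {
  public static void main(String[] args) throws Exception {
    byte[] data = "row\r\nnext\r\n".getBytes(StandardCharsets.UTF_8);
    byte[] delim = "\r\n".getBytes(StandardCharsets.UTF_8);
    // A 4-byte buffer: the first fill ends with "row\r" and the next
    // starts with "\n", splitting the delimiter across fills.
    LineReader reader = new LineReader(new ByteArrayInputStream(data), 4, delim);
    Text line = new Text();
    int n = reader.readLine(line);
    // Expected 5 ("row" + "\r\n"); per the report, 4 is returned here.
    System.out.println(line + " -> " + n + " bytes");
    reader.close();
  }
}
{code}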



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-13028) add low level counter metrics for S3A; use in read performance tests

2016-04-26 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15258713#comment-15258713
 ] 

Colin Patrick McCabe commented on HADOOP-13028:
---

It looks really good, [~steve_l].

Just to avoid misunderstandings, I'll drop a -1 here until we finish discussing 
what the interface should be... 
I look forward to giving this a review as soon as we figure that out.

> add low level counter metrics for S3A; use in read performance tests
> 
>
> Key: HADOOP-13028
> URL: https://issues.apache.org/jira/browse/HADOOP-13028
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, metrics
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13028-001.patch, HADOOP-13028-002.patch, 
> HADOOP-13028-004.patch, HADOOP-13028-005.patch, 
> org.apache.hadoop.fs.s3a.scale.TestS3AInputStreamPerformance-output.txt, 
> org.apache.hadoop.fs.s3a.scale.TestS3AInputStreamPerformance-output.txt
>
>
> against S3 (and other object stores), opening connections can be expensive, 
> closing connections may be expensive (a sign of a regression). 
> S3A FS and individual input streams should have counters of the # of 
> open/close/failure+reconnect operations, timers of how long things take. This 
> can be used downstream to measure efficiency of the code (how often 
> connections are being made), connection reliability, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12579) Deprecate and remove WriteableRPCEngine

2016-04-26 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15258666#comment-15258666
 ] 

Haohui Mai commented on HADOOP-12579:
-

{code}
enum RpcKindProto {
  RPC_BUILTIN  = 0;  // Used for built in calls by tests
-  RPC_WRITABLE = 1;  // Use WritableRpcEngine 
+  RPC_WRITABLE = 1;  // Use WritableRpcEngine, the actual usage removed
  RPC_PROTOCOL_BUFFER  = 2;  // Use ProtobufRpcEngine
}
{code}

Should be able to get rid of {{RPC_WRITABLE}} here as well?

> Deprecate and remove WriteableRPCEngine
> ---
>
> Key: HADOOP-12579
> URL: https://issues.apache.org/jira/browse/HADOOP-12579
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Haohui Mai
> Attachments: HADOOP-12579-v1.patch, HADOOP-12579-v3.patch, 
> HADOOP-12579-v4.patch, HADOOP-12579-v5.patch, HADOOP-12579-v6.patch, 
> HADOOP-12579-v7.patch, HADOOP-12579-v8.patch
>
>
> {{WriteableRPCEngine}} depends on Java's serialization mechanisms for RPC 
> requests. Without proper checks, it has been shown that this can lead to 
> security vulnerabilities such as remote code execution (e.g., COLLECTIONS-580, 
> HADOOP-12577).
> The codebase has now migrated from {{WriteableRPCEngine}} to 
> {{ProtobufRPCEngine}}. This jira proposes to deprecate 
> {{WriteableRPCEngine}} in branch-2 and to remove it in trunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-13028) add low level counter metrics for S3A; use in read performance tests

2016-04-26 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15258651#comment-15258651
 ] 

Steve Loughran commented on HADOOP-13028:
-

Note that the actual fix is to force an {{fs.list(/)}} after creating the FS; 
the failure of this first operation is viewed as transient and downgraded to a 
warning.

This is an interesting problem: we really only want to swallow transient 
network failures, not other issues. But any other issue will surface the next 
time someone tries to use the instance; by moving the checks out of the 
{{initialize()}} method, we stop the FS setup itself from breaking.
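
A minimal sketch of that pattern against the generic {{FileSystem}} API (an 
illustration under stated assumptions, not the actual S3A change):

{code}
import java.io.IOException;
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class RobustInitSketch {
  // Keep initialize() cheap, then probe the store with a root listing whose
  // failure is downgraded to a warning rather than failing FS setup.
  static FileSystem createAndProbe(URI uri, Configuration conf)
      throws IOException {
    FileSystem fs = FileSystem.get(uri, conf);
    try {
      fs.listStatus(new Path("/"));  // forces a real round trip to the store
    } catch (IOException e) {
      // Treated as transient here; a persistent problem will resurface on
      // the first real operation against the instance.
      System.err.println("WARN: initial listing of " + uri + " failed: " + e);
    }
    return fs;
  }
}
{code}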



> add low level counter metrics for S3A; use in read performance tests
> 
>
> Key: HADOOP-13028
> URL: https://issues.apache.org/jira/browse/HADOOP-13028
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, metrics
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13028-001.patch, HADOOP-13028-002.patch, 
> HADOOP-13028-004.patch, HADOOP-13028-005.patch, 
> org.apache.hadoop.fs.s3a.scale.TestS3AInputStreamPerformance-output.txt, 
> org.apache.hadoop.fs.s3a.scale.TestS3AInputStreamPerformance-output.txt
>
>
> Against S3 (and other object stores), opening connections can be expensive, 
> and closing connections may be expensive (a sign of a regression). 
> The S3A FS and individual input streams should have counters of the number of 
> open/close/failure+reconnect operations, and timers of how long things take. 
> These can be used downstream to measure the efficiency of the code (how often 
> connections are being made), connection reliability, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-13028) add low level counter metrics for S3A; use in read performance tests

2016-04-26 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13028:

Target Version/s: 2.8.0
  Status: Patch Available  (was: Open)

> add low level counter metrics for S3A; use in read performance tests
> 
>
> Key: HADOOP-13028
> URL: https://issues.apache.org/jira/browse/HADOOP-13028
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, metrics
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13028-001.patch, HADOOP-13028-002.patch, 
> HADOOP-13028-004.patch, HADOOP-13028-005.patch, 
> org.apache.hadoop.fs.s3a.scale.TestS3AInputStreamPerformance-output.txt, 
> org.apache.hadoop.fs.s3a.scale.TestS3AInputStreamPerformance-output.txt
>
>
> Against S3 (and other object stores), opening connections can be expensive, 
> and closing connections may be expensive (a sign of a regression). 
> The S3A FS and individual input streams should have counters of the number of 
> open/close/failure+reconnect operations, and timers of how long things take. 
> These can be used downstream to measure the efficiency of the code (how often 
> connections are being made), connection reliability, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-13028) add low level counter metrics for S3A; use in read performance tests

2016-04-26 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13028:

Attachment: HADOOP-13028-005.patch

HADOOP-13028 patch 005: fixed the regression and cleaned up the whole test 
class in the process.

Specifically:
* unified create/check/assert on FS creation
* generic test utils for checking exception contents
* clean up the FS instances by closing the {{fs}} field after tests (see the 
sketch below)
* move all assignments of FS instances to that {{fs}} field so they get picked 
up
* fixed line width on one of the tests
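
The cleanup bullet amounts to something like this sketch (the class and field 
names are assumptions, not the actual test base class):

{code}
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.io.IOUtils;
import org.junit.After;

public abstract class S3AScaleTestSketch {
  // All tests assign their FileSystem instance to this field so that
  // teardown can reliably close it.
  protected FileSystem fs;

  @After
  public void teardown() {
    IOUtils.closeStream(fs);  // no-op if fs was never assigned
    fs = null;
  }
}
{code}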

> add low level counter metrics for S3A; use in read performance tests
> 
>
> Key: HADOOP-13028
> URL: https://issues.apache.org/jira/browse/HADOOP-13028
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, metrics
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13028-001.patch, HADOOP-13028-002.patch, 
> HADOOP-13028-004.patch, HADOOP-13028-005.patch, 
> org.apache.hadoop.fs.s3a.scale.TestS3AInputStreamPerformance-output.txt, 
> org.apache.hadoop.fs.s3a.scale.TestS3AInputStreamPerformance-output.txt
>
>
> Against S3 (and other object stores), opening connections can be expensive, 
> and closing connections may be expensive (a sign of a regression). 
> The S3A FS and individual input streams should have counters of the number of 
> open/close/failure+reconnect operations, and timers of how long things take. 
> These can be used downstream to measure the efficiency of the code (how often 
> connections are being made), connection reliability, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-13028) add low level counter metrics for S3A; use in read performance tests

2016-04-26 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13028:

Status: Open  (was: Patch Available)

> add low level counter metrics for S3A; use in read performance tests
> 
>
> Key: HADOOP-13028
> URL: https://issues.apache.org/jira/browse/HADOOP-13028
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, metrics
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13028-001.patch, HADOOP-13028-002.patch, 
> HADOOP-13028-004.patch, 
> org.apache.hadoop.fs.s3a.scale.TestS3AInputStreamPerformance-output.txt, 
> org.apache.hadoop.fs.s3a.scale.TestS3AInputStreamPerformance-output.txt
>
>
> Against S3 (and other object stores), opening connections can be expensive, 
> and closing connections may be expensive (a sign of a regression). 
> The S3A FS and individual input streams should have counters of the number of 
> open/close/failure+reconnect operations, and timers of how long things take. 
> These can be used downstream to measure the efficiency of the code (how often 
> connections are being made), connection reliability, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-13063) Incorrect error message while setting dfs.block.size to wrong value

2016-04-26 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15258442#comment-15258442
 ] 

Kihwal Lee commented on HADOOP-13063:
-

Please check trunk and create a patch against it if the issue is still present.

> Incorrect error message while setting dfs.block.size to wrong value
> ---
>
> Key: HADOOP-13063
> URL: https://issues.apache.org/jira/browse/HADOOP-13063
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Oleksiy Sayankin
>Assignee: Oleksiy Sayankin
> Attachments: HADOOP-13063.1.patch
>
>
> Execute in Hive
> {code}
> hive> SET dfs.block.size=3200; 
> hive> select count(*) from test;
> {code}
> See logs
> {code}
> Query ID = vagrant_20160408135656_fd1937b3-b330-4d54-842a-0f3ec544ceea
> Total jobs = 1
> Launching Job 1 out of 1
> Number of reduce tasks determined at compile time: 1
> In order to change the average load for a reducer (in bytes):
>   set hive.exec.reducers.bytes.per.reducer=
> In order to limit the maximum number of reducers:
>   set hive.exec.reducers.max=
> In order to set a constant number of reducers:
>   set mapreduce.job.reduces=
> Starting Job = job_1460123221842_0001, Tracking URL = 
> http://cdh-master:8088/proxy/application_1460123221842_0001/
> Kill Command = /usr/lib/hadoop/bin/hadoop job  -kill job_1460123221842_0001
> Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 
> 1
> 2016-04-08 13:57:18,494 Stage-1 map = 0%,  reduce = 0%
> 2016-04-08 13:58:06,821 Stage-1 map = 100%,  reduce = 100%
> Ended Job = job_1460123221842_0001 with errors
> Error during job, obtaining debugging information...
> Examining task ID: task_1460123221842_0001_m_00 (and more) from job 
> job_1460123221842_0001
> Task with the most failures(4): 
> -
> Task ID:
>   task_1460123221842_0001_m_00
> URL:
>   
> http://cdh-master:8088/taskdetails.jsp?jobid=job_1460123221842_0001=task_1460123221842_0001_m_00
> -
> Diagnostic Messages for this Task:
> Exception from container-launch.
> Container id: container_1460123221842_0001_01_05
> Exit code: 1
> Stack trace: ExitCodeException exitCode=1: 
>   at org.apache.hadoop.util.Shell.runCommand(Shell.java:561)
>   at org.apache.hadoop.util.Shell.run(Shell.java:478)
>   at 
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:738)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:210)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Container exited with a non-zero exit code 1
> FAILED: Execution Error, return code 2 from 
> org.apache.hadoop.hive.ql.exec.mr.MapRedTask
> MapReduce Jobs Launched: 
> Stage-Stage-1: Map: 1  Reduce: 1   HDFS Read: 0 HDFS Write: 0 FAIL
> Total MapReduce CPU Time Spent: 0 msec
> {code}
> We need a more informative error message here



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-5470) RunJar.unJar() should write the last modified time found in the jar entry to the uncompressed file

2016-04-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-5470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15258409#comment-15258409
 ] 

Hadoop QA commented on HADOOP-5470:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
38s {color} | {color:green} trunk passed with JDK v1.8.0_92 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 20s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
27s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 8s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
20s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
43s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 14s 
{color} | {color:green} trunk passed with JDK v1.8.0_92 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 13s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 18s 
{color} | {color:green} the patch passed with JDK v1.8.0_92 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 18s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 30s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 30s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 59s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
54s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s 
{color} | {color:green} the patch passed with JDK v1.8.0_92 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 8s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 2s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_92. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 45s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
27s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 85m 21s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:fbe3e86 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12800800/HADOOP-5470.05.patch |
| JIRA Issue | HADOOP-5470 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 4c12242833a3 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 1a3f148 |
| Default Java | 1.7.0_95 |
| Multi-JDK versions |  

[jira] [Commented] (HADOOP-5470) RunJar.unJar() should write the last modified time found in the jar entry to the uncompressed file

2016-04-26 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-5470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15258400#comment-15258400
 ] 

Arpit Agarwal commented on HADOOP-5470:
---

Hi [~boky01], thanks for updating the patch. Yes, this is what I meant.

One more thing I missed last time: we should invoke setLastModified after 
closing the file. setLastModified may fail for open files on some platforms.
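
A minimal sketch of that ordering (illustrative only, not the actual 
{{RunJar.unJar()}} code):

{code}
import java.io.File;
import java.io.FileOutputStream;
import java.io.InputStream;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;

public class UnjarEntrySketch {
  static void extract(JarFile jar, JarEntry entry, File target)
      throws Exception {
    try (InputStream in = jar.getInputStream(entry);
         FileOutputStream out = new FileOutputStream(target)) {
      byte[] buf = new byte[8192];
      int n;
      while ((n = in.read(buf)) > 0) {
        out.write(buf, 0, n);
      }
    }  // the file is closed here, before its metadata is touched
    // setLastModified may fail for open files on some platforms, so it is
    // invoked only after the try-with-resources block has closed the file.
    if (!target.setLastModified(entry.getTime())) {
      System.err.println("WARN: could not set mtime on " + target);
    }
  }
}
{code}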

> RunJar.unJar() should write the last modified time found in the jar entry to 
> the uncompressed file
> --
>
> Key: HADOOP-5470
> URL: https://issues.apache.org/jira/browse/HADOOP-5470
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Affects Versions: 0.18.0, 0.18.1, 0.18.2, 0.18.3, 0.19.0, 0.19.1
>Reporter: Colin Evans
>Assignee: Andras Bokor
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-5470.01.patch, HADOOP-5470.02.patch, 
> HADOOP-5470.03.patch, HADOOP-5470.04.patch, HADOOP-5470.05.patch
>
>
> For tools like jruby and jython, last modified times determine if a script 
> gets recompiled.  Losing the correct last modified time causes some 
> unfortunate recompilation race conditions when a job is running.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-13028) add low level counter metrics for S3A; use in read performance tests

2016-04-26 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15258265#comment-15258265
 ] 

Steve Loughran commented on HADOOP-13028:
-

This patch (specifically the HADOOP-13059 robust-init bit) breaks the proxy 
tests in {{TestS3AConfiguration}}; it looks like those tests failed because 
the bucket check triggered a failure on proxy invocation.

> add low level counter metrics for S3A; use in read performance tests
> 
>
> Key: HADOOP-13028
> URL: https://issues.apache.org/jira/browse/HADOOP-13028
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, metrics
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13028-001.patch, HADOOP-13028-002.patch, 
> HADOOP-13028-004.patch, 
> org.apache.hadoop.fs.s3a.scale.TestS3AInputStreamPerformance-output.txt, 
> org.apache.hadoop.fs.s3a.scale.TestS3AInputStreamPerformance-output.txt
>
>
> Against S3 (and other object stores), opening connections can be expensive, 
> and closing connections may be expensive (a sign of a regression). 
> The S3A FS and individual input streams should have counters of the number of 
> open/close/failure+reconnect operations, and timers of how long things take. 
> These can be used downstream to measure the efficiency of the code (how often 
> connections are being made), connection reliability, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12782) Faster LDAP group name resolution with ActiveDirectory

2016-04-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15258226#comment-15258226
 ] 

Hadoop QA commented on HADOOP-12782:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
35s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 49s 
{color} | {color:green} trunk passed with JDK v1.8.0_92 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 47s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
38s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s 
{color} | {color:green} trunk passed with JDK v1.8.0_92 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 47s 
{color} | {color:green} the patch passed with JDK v1.8.0_92 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 47s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 46s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 46s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} hadoop-common-project/hadoop-common: The patch 
generated 0 new + 32 unchanged - 2 fixed = 32 total (was 34) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 0s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s 
{color} | {color:green} the patch passed with JDK v1.8.0_92 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 47s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_92. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 29s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 58m 54s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_92 Failed junit tests | hadoop.net.TestClusterTopology |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:fbe3e86 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12800779/HADOOP-12782.004.patch
 |
| JIRA Issue | HADOOP-12782 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 

[jira] [Commented] (HADOOP-13063) Incorrect error message while setting dfs.block.size to wrong value

2016-04-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15258195#comment-15258195
 ] 

Hadoop QA commented on HADOOP-13063:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 5s {color} 
| {color:red} HADOOP-13063 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12800798/HADOOP-13063.1.patch |
| JIRA Issue | HADOOP-13063 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9185/console |
| Powered by | Apache Yetus 0.3.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Incorrect error message while setting dfs.block.size to wrong value
> ---
>
> Key: HADOOP-13063
> URL: https://issues.apache.org/jira/browse/HADOOP-13063
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Oleksiy Sayankin
>Assignee: Oleksiy Sayankin
> Attachments: HADOOP-13063.1.patch
>
>
> Execute in Hive
> {code}
> hive> SET dfs.block.size=3200; 
> hive> select count(*) from test;
> {code}
> See logs
> {code}
> Query ID = vagrant_20160408135656_fd1937b3-b330-4d54-842a-0f3ec544ceea
> Total jobs = 1
> Launching Job 1 out of 1
> Number of reduce tasks determined at compile time: 1
> In order to change the average load for a reducer (in bytes):
>   set hive.exec.reducers.bytes.per.reducer=
> In order to limit the maximum number of reducers:
>   set hive.exec.reducers.max=
> In order to set a constant number of reducers:
>   set mapreduce.job.reduces=
> Starting Job = job_1460123221842_0001, Tracking URL = 
> http://cdh-master:8088/proxy/application_1460123221842_0001/
> Kill Command = /usr/lib/hadoop/bin/hadoop job  -kill job_1460123221842_0001
> Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 
> 1
> 2016-04-08 13:57:18,494 Stage-1 map = 0%,  reduce = 0%
> 2016-04-08 13:58:06,821 Stage-1 map = 100%,  reduce = 100%
> Ended Job = job_1460123221842_0001 with errors
> Error during job, obtaining debugging information...
> Examining task ID: task_1460123221842_0001_m_00 (and more) from job 
> job_1460123221842_0001
> Task with the most failures(4): 
> -
> Task ID:
>   task_1460123221842_0001_m_00
> URL:
>   
> http://cdh-master:8088/taskdetails.jsp?jobid=job_1460123221842_0001=task_1460123221842_0001_m_00
> -
> Diagnostic Messages for this Task:
> Exception from container-launch.
> Container id: container_1460123221842_0001_01_05
> Exit code: 1
> Stack trace: ExitCodeException exitCode=1: 
>   at org.apache.hadoop.util.Shell.runCommand(Shell.java:561)
>   at org.apache.hadoop.util.Shell.run(Shell.java:478)
>   at 
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:738)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:210)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Container exited with a non-zero exit code 1
> FAILED: Execution Error, return code 2 from 
> org.apache.hadoop.hive.ql.exec.mr.MapRedTask
> MapReduce Jobs Launched: 
> Stage-Stage-1: Map: 1  Reduce: 1   HDFS Read: 0 HDFS Write: 0 FAIL
> Total MapReduce CPU Time Spent: 0 msec
> {code}
> We need a more informative error message here



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-5470) RunJar.unJar() should write the last modified time found in the jar entry to the uncompressed file

2016-04-26 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-5470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-5470:
-
Attachment: HADOOP-5470.05.patch

[~arpitagarwal] Please check the new patch. Is that what you meant?

> RunJar.unJar() should write the last modified time found in the jar entry to 
> the uncompressed file
> --
>
> Key: HADOOP-5470
> URL: https://issues.apache.org/jira/browse/HADOOP-5470
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Affects Versions: 0.18.0, 0.18.1, 0.18.2, 0.18.3, 0.19.0, 0.19.1
>Reporter: Colin Evans
>Assignee: Andras Bokor
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-5470.01.patch, HADOOP-5470.02.patch, 
> HADOOP-5470.03.patch, HADOOP-5470.04.patch, HADOOP-5470.05.patch
>
>
> For tools like jruby and jython, last modified times determine if a script 
> gets recompiled.  Losing the correct last modified time causes some 
> unfortunate recompilation race conditions when a job is running.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-13063) Incorrect error message while setting dfs.block.size to wrong value

2016-04-26 Thread Oleksiy Sayankin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleksiy Sayankin updated HADOOP-13063:
--
Status: Patch Available  (was: In Progress)

> Incorrect error message while setting dfs.block.size to wrong value
> ---
>
> Key: HADOOP-13063
> URL: https://issues.apache.org/jira/browse/HADOOP-13063
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Oleksiy Sayankin
>Assignee: Oleksiy Sayankin
> Attachments: HADOOP-13063.1.patch
>
>
> Execute in Hive
> {code}
> hive> SET dfs.block.size=3200; 
> hive> select count(*) from test;
> {code}
> See logs
> {code}
> Query ID = vagrant_20160408135656_fd1937b3-b330-4d54-842a-0f3ec544ceea
> Total jobs = 1
> Launching Job 1 out of 1
> Number of reduce tasks determined at compile time: 1
> In order to change the average load for a reducer (in bytes):
>   set hive.exec.reducers.bytes.per.reducer=
> In order to limit the maximum number of reducers:
>   set hive.exec.reducers.max=
> In order to set a constant number of reducers:
>   set mapreduce.job.reduces=
> Starting Job = job_1460123221842_0001, Tracking URL = 
> http://cdh-master:8088/proxy/application_1460123221842_0001/
> Kill Command = /usr/lib/hadoop/bin/hadoop job  -kill job_1460123221842_0001
> Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 
> 1
> 2016-04-08 13:57:18,494 Stage-1 map = 0%,  reduce = 0%
> 2016-04-08 13:58:06,821 Stage-1 map = 100%,  reduce = 100%
> Ended Job = job_1460123221842_0001 with errors
> Error during job, obtaining debugging information...
> Examining task ID: task_1460123221842_0001_m_00 (and more) from job 
> job_1460123221842_0001
> Task with the most failures(4): 
> -
> Task ID:
>   task_1460123221842_0001_m_00
> URL:
>   
> http://cdh-master:8088/taskdetails.jsp?jobid=job_1460123221842_0001=task_1460123221842_0001_m_00
> -
> Diagnostic Messages for this Task:
> Exception from container-launch.
> Container id: container_1460123221842_0001_01_05
> Exit code: 1
> Stack trace: ExitCodeException exitCode=1: 
>   at org.apache.hadoop.util.Shell.runCommand(Shell.java:561)
>   at org.apache.hadoop.util.Shell.run(Shell.java:478)
>   at 
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:738)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:210)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Container exited with a non-zero exit code 1
> FAILED: Execution Error, return code 2 from 
> org.apache.hadoop.hive.ql.exec.mr.MapRedTask
> MapReduce Jobs Launched: 
> Stage-Stage-1: Map: 1  Reduce: 1   HDFS Read: 0 HDFS Write: 0 FAIL
> Total MapReduce CPU Time Spent: 0 msec
> {code}
> We need a more informative error message here



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-13063) Incorrect error message while setting dfs.block.size to wrong value

2016-04-26 Thread Oleksiy Sayankin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleksiy Sayankin updated HADOOP-13063:
--
Attachment: HADOOP-13063.1.patch

> Incorrect error message while setting dfs.block.size to wrong value
> ---
>
> Key: HADOOP-13063
> URL: https://issues.apache.org/jira/browse/HADOOP-13063
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Oleksiy Sayankin
>Assignee: Oleksiy Sayankin
> Attachments: HADOOP-13063.1.patch
>
>
> Execute in Hive
> {code}
> hive> SET dfs.block.size=3200; 
> hive> select count(*) from test;
> {code}
> See logs
> {code}
> Query ID = vagrant_20160408135656_fd1937b3-b330-4d54-842a-0f3ec544ceea
> Total jobs = 1
> Launching Job 1 out of 1
> Number of reduce tasks determined at compile time: 1
> In order to change the average load for a reducer (in bytes):
>   set hive.exec.reducers.bytes.per.reducer=
> In order to limit the maximum number of reducers:
>   set hive.exec.reducers.max=
> In order to set a constant number of reducers:
>   set mapreduce.job.reduces=
> Starting Job = job_1460123221842_0001, Tracking URL = 
> http://cdh-master:8088/proxy/application_1460123221842_0001/
> Kill Command = /usr/lib/hadoop/bin/hadoop job  -kill job_1460123221842_0001
> Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 
> 1
> 2016-04-08 13:57:18,494 Stage-1 map = 0%,  reduce = 0%
> 2016-04-08 13:58:06,821 Stage-1 map = 100%,  reduce = 100%
> Ended Job = job_1460123221842_0001 with errors
> Error during job, obtaining debugging information...
> Examining task ID: task_1460123221842_0001_m_00 (and more) from job 
> job_1460123221842_0001
> Task with the most failures(4): 
> -
> Task ID:
>   task_1460123221842_0001_m_00
> URL:
>   
> http://cdh-master:8088/taskdetails.jsp?jobid=job_1460123221842_0001=task_1460123221842_0001_m_00
> -
> Diagnostic Messages for this Task:
> Exception from container-launch.
> Container id: container_1460123221842_0001_01_05
> Exit code: 1
> Stack trace: ExitCodeException exitCode=1: 
>   at org.apache.hadoop.util.Shell.runCommand(Shell.java:561)
>   at org.apache.hadoop.util.Shell.run(Shell.java:478)
>   at 
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:738)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:210)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Container exited with a non-zero exit code 1
> FAILED: Execution Error, return code 2 from 
> org.apache.hadoop.hive.ql.exec.mr.MapRedTask
> MapReduce Jobs Launched: 
> Stage-Stage-1: Map: 1  Reduce: 1   HDFS Read: 0 HDFS Write: 0 FAIL
> Total MapReduce CPU Time Spent: 0 msec
> {code}
> We need a more informative error message here



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11786) Fix Javadoc typos in org.apache.hadoop.fs.FileSystem

2016-04-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15258126#comment-15258126
 ] 

Hadoop QA commented on HADOOP-11786:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 55s 
{color} | {color:green} trunk passed with JDK v1.8.0_92 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 45s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
29s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 5s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
20s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
46s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s 
{color} | {color:green} trunk passed with JDK v1.8.0_92 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 48s 
{color} | {color:green} the patch passed with JDK v1.8.0_92 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 47s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 47s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s 
{color} | {color:green} the patch passed with JDK v1.8.0_92 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 16s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_92. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 8s {color} | 
{color:red} hadoop-common in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
25s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 63m 28s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.7.0_95 Failed junit tests | hadoop.metrics2.impl.TestGangliaMetrics |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:fbe3e86 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12798530/HADOOP-11786.patch |
| JIRA Issue | HADOOP-11786 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 5e645e4a0cb5 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 

[jira] [Created] (HADOOP-13063) Incorrect error message while setting dfs.block.size to wrong value

2016-04-26 Thread Oleksiy Sayankin (JIRA)
Oleksiy Sayankin created HADOOP-13063:
-

 Summary: Incorrect error message while setting dfs.block.size to 
wrong value
 Key: HADOOP-13063
 URL: https://issues.apache.org/jira/browse/HADOOP-13063
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Oleksiy Sayankin
Assignee: Oleksiy Sayankin


Execute in Hive

{code}
hive> SET dfs.block.size=3200; 
hive> select count(*) from test;
{code}

See logs

{code}
Query ID = vagrant_20160408135656_fd1937b3-b330-4d54-842a-0f3ec544ceea
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=
In order to set a constant number of reducers:
  set mapreduce.job.reduces=
Starting Job = job_1460123221842_0001, Tracking URL = 
http://cdh-master:8088/proxy/application_1460123221842_0001/
Kill Command = /usr/lib/hadoop/bin/hadoop job  -kill job_1460123221842_0001
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2016-04-08 13:57:18,494 Stage-1 map = 0%,  reduce = 0%
2016-04-08 13:58:06,821 Stage-1 map = 100%,  reduce = 100%
Ended Job = job_1460123221842_0001 with errors
Error during job, obtaining debugging information...
Examining task ID: task_1460123221842_0001_m_00 (and more) from job 
job_1460123221842_0001

Task with the most failures(4): 
-
Task ID:
  task_1460123221842_0001_m_00

URL:
  
http://cdh-master:8088/taskdetails.jsp?jobid=job_1460123221842_0001=task_1460123221842_0001_m_00
-
Diagnostic Messages for this Task:
Exception from container-launch.
Container id: container_1460123221842_0001_01_05
Exit code: 1
Stack trace: ExitCodeException exitCode=1: 
at org.apache.hadoop.util.Shell.runCommand(Shell.java:561)
at org.apache.hadoop.util.Shell.run(Shell.java:478)
at 
org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:738)
at 
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:210)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)


Container exited with a non-zero exit code 1


FAILED: Execution Error, return code 2 from 
org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched: 
Stage-Stage-1: Map: 1  Reduce: 1   HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec
{code}

We need a more informative error message here
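
The root cause is not spelled out above, but for illustration, a clearer 
fail-fast check might look like the sketch below, assuming the failure stems 
from a block size that is not a multiple of the checksum chunk size (3200 is 
not a multiple of 512); the property names are the standard HDFS keys.

{code}
import org.apache.hadoop.conf.Configuration;

public class BlockSizeCheckSketch {
  // Fail fast with a message that names the property, the bad value, and
  // the constraint, instead of a bare non-zero container exit code.
  static long validatedBlockSize(Configuration conf) {
    long blockSize = conf.getLongBytes("dfs.blocksize", 128L * 1024 * 1024);
    int bytesPerChecksum = conf.getInt("dfs.bytes-per-checksum", 512);
    if (blockSize <= 0 || blockSize % bytesPerChecksum != 0) {
      throw new IllegalArgumentException(
          "Invalid dfs.blocksize " + blockSize + ": must be a positive"
          + " multiple of dfs.bytes-per-checksum (" + bytesPerChecksum + ")");
    }
    return blockSize;
  }
}
{code}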



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work started] (HADOOP-13063) Incorrect error message while setting dfs.block.size to wrong value

2016-04-26 Thread Oleksiy Sayankin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-13063 started by Oleksiy Sayankin.
-
> Incorrect error message while setting dfs.block.size to wrong value
> ---
>
> Key: HADOOP-13063
> URL: https://issues.apache.org/jira/browse/HADOOP-13063
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Oleksiy Sayankin
>Assignee: Oleksiy Sayankin
>
> Execute in Hive
> {code}
> hive> SET dfs.block.size=3200; 
> hive> select count(*) from test;
> {code}
> See logs
> {code}
> Query ID = vagrant_20160408135656_fd1937b3-b330-4d54-842a-0f3ec544ceea
> Total jobs = 1
> Launching Job 1 out of 1
> Number of reduce tasks determined at compile time: 1
> In order to change the average load for a reducer (in bytes):
>   set hive.exec.reducers.bytes.per.reducer=
> In order to limit the maximum number of reducers:
>   set hive.exec.reducers.max=
> In order to set a constant number of reducers:
>   set mapreduce.job.reduces=
> Starting Job = job_1460123221842_0001, Tracking URL = 
> http://cdh-master:8088/proxy/application_1460123221842_0001/
> Kill Command = /usr/lib/hadoop/bin/hadoop job  -kill job_1460123221842_0001
> Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 
> 1
> 2016-04-08 13:57:18,494 Stage-1 map = 0%,  reduce = 0%
> 2016-04-08 13:58:06,821 Stage-1 map = 100%,  reduce = 100%
> Ended Job = job_1460123221842_0001 with errors
> Error during job, obtaining debugging information...
> Examining task ID: task_1460123221842_0001_m_00 (and more) from job 
> job_1460123221842_0001
> Task with the most failures(4): 
> -
> Task ID:
>   task_1460123221842_0001_m_00
> URL:
>   
> http://cdh-master:8088/taskdetails.jsp?jobid=job_1460123221842_0001=task_1460123221842_0001_m_00
> -
> Diagnostic Messages for this Task:
> Exception from container-launch.
> Container id: container_1460123221842_0001_01_05
> Exit code: 1
> Stack trace: ExitCodeException exitCode=1: 
>   at org.apache.hadoop.util.Shell.runCommand(Shell.java:561)
>   at org.apache.hadoop.util.Shell.run(Shell.java:478)
>   at 
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:738)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:210)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Container exited with a non-zero exit code 1
> FAILED: Execution Error, return code 2 from 
> org.apache.hadoop.hive.ql.exec.mr.MapRedTask
> MapReduce Jobs Launched: 
> Stage-Stage-1: Map: 1  Reduce: 1   HDFS Read: 0 HDFS Write: 0 FAIL
> Total MapReduce CPU Time Spent: 0 msec
> {code}
> We need a more informative error message here



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12782) Faster LDAP group name resolution with ActiveDirectory

2016-04-26 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12782:
-
Status: Patch Available  (was: Open)

> Faster LDAP group name resolution with ActiveDirectory
> --
>
> Key: HADOOP-12782
> URL: https://issues.apache.org/jira/browse/HADOOP-12782
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-12782.001.patch, HADOOP-12782.002.patch, 
> HADOOP-12782.003.patch, HADOOP-12782.004.patch
>
>
> LDAP group name resolution works well under typical scenarios. However, we 
> have seen cases where a user is mapped to many groups (in an extreme case, 
> more than 100 groups). The current implementation makes resolving groups from 
> ActiveDirectory very slow in this case.
> The current LDAP group resolution implementation sends two queries to an 
> ActiveDirectory server. The first query returns a user object, which contains 
> the DN (distinguished name). The second query looks for groups where the user 
> DN is a member. If a user is mapped to many groups, the second query returns 
> all group objects associated with the user, and is thus very slow.
> After studying a user object in ActiveDirectory, I found that a user object 
> actually contains a "memberOf" field, which holds the DNs of all group 
> objects the user belongs to. Assuming that an organization has no recursive 
> group relations (that is, a user A is a member of group G1, and group G1 is a 
> member of group G2), we can use this property to avoid the second query, 
> which can potentially run very slowly.
> I propose that we add a configuration option to enable this feature only for 
> users who want to reduce group resolution time and who do not have recursive 
> groups, so that existing behavior will not be broken.
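
A hedged JNDI sketch of the single-query approach (the base DN and the 
{{sAMAccountName}} filter are common ActiveDirectory conventions assumed for 
the example; this is not the actual LdapGroupsMapping change):

{code}
import java.util.ArrayList;
import java.util.List;
import javax.naming.NamingEnumeration;
import javax.naming.directory.Attribute;
import javax.naming.directory.DirContext;
import javax.naming.directory.SearchControls;
import javax.naming.directory.SearchResult;

public class MemberOfLookupSketch {
  // One query: fetch the user entry with its "memberOf" attribute, whose
  // values are the DNs of the user's groups, avoiding the second lookup.
  static List<String> groupsOf(DirContext ctx, String baseDn, String user)
      throws Exception {
    SearchControls controls = new SearchControls();
    controls.setSearchScope(SearchControls.SUBTREE_SCOPE);
    controls.setReturningAttributes(new String[] {"memberOf"});
    NamingEnumeration<SearchResult> results =
        ctx.search(baseDn, "(sAMAccountName={0})", new Object[] {user},
            controls);
    List<String> groups = new ArrayList<>();
    while (results.hasMore()) {
      Attribute memberOf = results.next().getAttributes().get("memberOf");
      if (memberOf == null) {
        continue;  // user has no groups, or the attribute was not returned
      }
      for (int i = 0; i < memberOf.size(); i++) {
        groups.add((String) memberOf.get(i));  // each value is a group DN
      }
    }
    return groups;
  }
}
{code}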



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12782) Faster LDAP group name resolution with ActiveDirectory

2016-04-26 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12782:
-
Attachment: (was: HADOOP-12782.004.patch)

> Faster LDAP group name resolution with ActiveDirectory
> --
>
> Key: HADOOP-12782
> URL: https://issues.apache.org/jira/browse/HADOOP-12782
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-12782.001.patch, HADOOP-12782.002.patch, 
> HADOOP-12782.003.patch, HADOOP-12782.004.patch
>
>
> LDAP group name resolution works well under typical scenarios. However, we 
> have seen cases where a user is mapped to many groups (in an extreme case, 
> more than 100 groups). The current implementation makes resolving groups from 
> ActiveDirectory very slow in this case.
> The current LDAP group resolution implementation sends two queries to an 
> ActiveDirectory server. The first query returns a user object, which contains 
> the DN (distinguished name). The second query looks for groups where the user 
> DN is a member. If a user is mapped to many groups, the second query returns 
> all group objects associated with the user, and is thus very slow.
> After studying a user object in ActiveDirectory, I found that a user object 
> actually contains a "memberOf" field, which holds the DNs of all group 
> objects the user belongs to. Assuming that an organization has no recursive 
> group relations (that is, a user A is a member of group G1, and group G1 is a 
> member of group G2), we can use this property to avoid the second query, 
> which can potentially run very slowly.
> I propose that we add a configuration option to enable this feature only for 
> users who want to reduce group resolution time and who do not have recursive 
> groups, so that existing behavior will not be broken.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12782) Faster LDAP group name resolution with ActiveDirectory

2016-04-26 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12782:
-
Status: Open  (was: Patch Available)

> Faster LDAP group name resolution with ActiveDirectory
> --
>
> Key: HADOOP-12782
> URL: https://issues.apache.org/jira/browse/HADOOP-12782
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-12782.001.patch, HADOOP-12782.002.patch, 
> HADOOP-12782.003.patch, HADOOP-12782.004.patch
>
>
> LDAP group name resolution works well under typical scenarios. However, we 
> have seen cases where a user is mapped to many groups (in an extreme case, 
> more than 100 groups). The current implementation makes resolving groups from 
> ActiveDirectory very slow in this case.
> The current LDAP group resolution implementation sends two queries to an 
> ActiveDirectory server. The first query returns a user object, which contains 
> the DN (distinguished name). The second query looks for groups where the user 
> DN is a member. If a user is mapped to many groups, the second query returns 
> all group objects associated with the user, and is thus very slow.
> After studying a user object in ActiveDirectory, I found that a user object 
> actually contains a "memberOf" field, which holds the DNs of all group 
> objects the user belongs to. Assuming that an organization has no recursive 
> group relations (that is, a user A is a member of group G1, and group G1 is a 
> member of group G2), we can use this property to avoid the second query, 
> which can potentially run very slowly.
> I propose that we add a configuration option to enable this feature only for 
> users who want to reduce group resolution time and who do not have recursive 
> groups, so that existing behavior will not be broken.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12782) Faster LDAP group name resolution with ActiveDirectory

2016-04-26 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12782:
-
Attachment: HADOOP-12782.004.patch

Resubmit to kick off the precommit build.

> Faster LDAP group name resolution with ActiveDirectory
> --
>
> Key: HADOOP-12782
> URL: https://issues.apache.org/jira/browse/HADOOP-12782
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-12782.001.patch, HADOOP-12782.002.patch, 
> HADOOP-12782.003.patch, HADOOP-12782.004.patch
>
>
> LDAP group name resolution works well under typical scenarios. However, we 
> have seen cases where a user is mapped to many groups (in an extreme case, 
> more than 100 groups). The current implementation makes resolving groups from 
> ActiveDirectory very slow in this case.
> The current LDAP group resolution implementation sends two queries to an 
> ActiveDirectory server. The first query returns a user object, which contains 
> the DN (distinguished name). The second query looks for groups where the user 
> DN is a member. If a user is mapped to many groups, the second query returns 
> all group objects associated with the user, and is thus very slow.
> After studying a user object in ActiveDirectory, I found that a user object 
> actually contains a "memberOf" field, which holds the DNs of all group 
> objects the user belongs to. Assuming that an organization has no recursive 
> group relations (that is, a user A is a member of group G1, and group G1 is a 
> member of group G2), we can use this property to avoid the second query, 
> which can potentially run very slowly.
> I propose that we add a configuration option to enable this feature only for 
> users who want to reduce group resolution time and who do not have recursive 
> groups, so that existing behavior will not be broken.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11786) Fix Javadoc typos in org.apache.hadoop.fs.FileSystem

2016-04-26 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-11786:
--
Assignee: Andras Bokor  (was: Yanjun Wang)
  Status: Patch Available  (was: Open)

> Fix Javadoc typos in org.apache.hadoop.fs.FileSystem
> 
>
> Key: HADOOP-11786
> URL: https://issues.apache.org/jira/browse/HADOOP-11786
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.6.0
>Reporter: Chen He
>Assignee: Andras Bokor
>Priority: Trivial
>  Labels: newbie++
> Attachments: HADOOP-11786.patch
>
>
> /**
>  * Resets all statistics to 0.
>  *
>  * In order to reset, we add up all the thread-local statistics data, and
>  * set rootData to the negative of that.
>  *
>  * This may seem like a counterintuitive way to reset the statsitics.  Why
>  * can't we just zero out all the thread-local data?  Well, thread-local
>  * data can only be modified by the thread that owns it.  If we tried to
>  * modify the thread-local data from this thread, our modification might 
> get
>  * interleaved with a read-modify-write operation done by the thread that
>  * owns the data.  That would result in our update getting lost.
>  *
>  * The approach used here avoids this problem because it only ever reads
>  * (not writes) the thread-local data.  Both reads and writes to rootData
>  * are done under the lock, so we're free to modify rootData from any 
> thread
>  * that holds the lock.
>  */
> etc.
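For what it's worth, the technique that javadoc describes looks roughly like 
this (a minimal sketch with hypothetical names, not the actual FileSystem code):
{code}
import java.util.ArrayList;
import java.util.List;

class StatisticsSketch {
  private final List<long[]> threadLocalData = new ArrayList<long[]>();
  private long rootData; // adjustment value; reads and writes guarded by "this"

  synchronized long getTotal() {
    long sum = rootData;
    for (long[] d : threadLocalData) {
      sum += d[0]; // only ever *reads* another thread's data
    }
    return sum;
  }

  synchronized void reset() {
    long sum = 0;
    for (long[] d : threadLocalData) {
      sum += d[0];
    }
    rootData = -sum; // getTotal() now sums to zero without this thread
  }                  // ever writing to another thread's slot
}
{code}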



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12579) Deprecate and remove WriteableRPCEngine

2016-04-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15258011#comment-15258011
 ] 

Hadoop QA commented on HADOOP-12579:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 12 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 6m 36s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
49s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 54s 
{color} | {color:green} trunk passed with JDK v1.8.0_92 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 48s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 19s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
47s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 17s 
{color} | {color:green} trunk passed with JDK v1.8.0_92 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 7s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 52s 
{color} | {color:green} the patch passed with JDK v1.8.0_92 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 5m 52s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 52s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 2s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 7m 2s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 2s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 19s 
{color} | {color:red} root: The patch generated 3 new + 590 unchanged - 28 
fixed = 593 total (was 618) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
47s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 13s 
{color} | {color:green} the patch passed with JDK v1.8.0_92 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 8s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 16m 57s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_92. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 59m 11s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_92. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 5m 36s 
{color} | {color:green} hadoop-mapreduce-client-hs in the patch passed with JDK 
v1.8.0_92. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 28s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| 

[jira] [Commented] (HADOOP-13060) While trying to perform a Distcp command, we see the error Exception in thread "main" java.lang.NoSuchMethodError: com.amazonaws.services.s3.transfer.TransferManager.

2016-04-26 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15257856#comment-15257856
 ] 

Steve Loughran commented on HADOOP-13060:
-

I've created HADOOP-13062 to propose using introspection in the 2.7.x branch to 
have it link to the newer SDKs.

[~jennydong272]: S3A is very much a community project: whoever uses it gets to 
help maintain it. If you could provide the patch for this, I'll help nurture it 
into the 2.7 branch.

> While trying to perform a Distcp command, we see the error Exception in 
> thread "main" java.lang.NoSuchMethodError: 
> com.amazonaws.services.s3.transfer.TransferManager.<init>(Lcom/amazonaws/services/s3/AmazonS3;Ljava/util/concurrent/ThreadPoolExecutor;)V
> 
>
> Key: HADOOP-13060
> URL: https://issues.apache.org/jira/browse/HADOOP-13060
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.7.1
>Reporter: Jenny Dong
>
> While trying to do a distcp from our native HDFS cluster to S3, we get the 
> following error/stacktrace : 
> We are using hadoop-aws.jar version 2.7.1. We are using aws-java-sdk.jar 
> version 1.10.69 (we bumped this up from 2.7.4 because we were getting errors 
> seen in HADOOP-12420 + other authentication errors).
> Exception in thread "main" java.lang.NoSuchMethodError: 
> com.amazonaws.services.s3.transfer.TransferManager.<init>(Lcom/amazonaws/services/s3/AmazonS3;Ljava/util/concurrent/ThreadPoolExecutor;)V
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:287)
>   at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2653)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:92)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2687)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2669)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:371)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
>   at org.apache.hadoop.tools.DistCp.configureOutputFormat(DistCp.java:333)
>   at org.apache.hadoop.tools.DistCp.createJob(DistCp.java:237)
>   at org.apache.hadoop.tools.DistCp.createAndSubmitJob(DistCp.java:174)
>   at org.apache.hadoop.tools.DistCp.execute(DistCp.java:153)
> I dug into both classes com.amazonaws.services.s3.transfer.TransferManager & 
> org.apache.hadoop.fs.s3a.S3AFileSystem. The only difference is S3AFileSystem 
> created a ThreadPoolExecutor (which implements AbstractExecutorService which 
> implements ExecutorService). I also checked on the classpath to make sure the 
> version of the jars being picked up is what I expected. 
> Help would be much appreciated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-13060) While trying to perform a Distcp command, we see the error Exception in thread "main" java.lang.NoSuchMethodError: com.amazonaws.services.s3.transfer.TransferManager.

2016-04-26 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13060:

Priority: Minor  (was: Major)

> While trying to perform a Distcp command, we see the error Exception in 
> thread "main" java.lang.NoSuchMethodError: 
> com.amazonaws.services.s3.transfer.TransferManager.<init>(Lcom/amazonaws/services/s3/AmazonS3;Ljava/util/concurrent/ThreadPoolExecutor;)V
> 
>
> Key: HADOOP-13060
> URL: https://issues.apache.org/jira/browse/HADOOP-13060
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.7.1
>Reporter: Jenny Dong
>Priority: Minor
>
> While trying to do a distcp from our native HDFS cluster to S3, we get the 
> following error/stacktrace : 
> We are using hadoop-aws.jar version 2.7.1. We are using aws-java-sdk.jar 
> version 1.10.69 (we bumped this up from 2.7.4 because we were getting errors 
> seen in HADOOP-12420 + other authentication errors).
> Exception in thread "main" java.lang.NoSuchMethodError: 
> com.amazonaws.services.s3.transfer.TransferManager.<init>(Lcom/amazonaws/services/s3/AmazonS3;Ljava/util/concurrent/ThreadPoolExecutor;)V
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:287)
>   at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2653)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:92)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2687)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2669)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:371)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
>   at org.apache.hadoop.tools.DistCp.configureOutputFormat(DistCp.java:333)
>   at org.apache.hadoop.tools.DistCp.createJob(DistCp.java:237)
>   at org.apache.hadoop.tools.DistCp.createAndSubmitJob(DistCp.java:174)
>   at org.apache.hadoop.tools.DistCp.execute(DistCp.java:153)
> I dug into both classes com.amazonaws.services.s3.transfer.TransferManager & 
> org.apache.hadoop.fs.s3a.S3AFileSystem. The only difference is S3AFileSystem 
> created a ThreadPoolExecutor (which implements AbstractExecutorService which 
> implements ExecutorService). I also checked on the classpath to make sure the 
> version of the jars being picked up is what I expected. 
> Help would be much appreciated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-13062) S3A Introspect to invoke incompatible AWS TransferManagerConfiguration methods

2016-04-26 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13062:

Issue Type: Improvement  (was: Sub-task)
Parent: (was: HADOOP-11694)

> S3A Introspect to invoke incompatible AWS TransferManagerConfiguration methods
> --
>
> Key: HADOOP-13062
> URL: https://issues.apache.org/jira/browse/HADOOP-13062
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>
> The AWS SDK changed the signature of 
> {{TransferManagerConfiguration.setMultipartUploadThreshold}}, widening one 
> parameter from an int to a long. This is fixed at compile time, so S3A built 
> against the old library doesn't link to the new one, and vice versa, which 
> leads to problems downstream.
> It may be possible to use reflection to make this binding, at least on the 
> 2.7 branch, so that dropping in a later SDK doesn't break things.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-13062) S3A Introspect to invoke incompatible AWS TransferManagerConfiguration methods

2016-04-26 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-13062:
---

 Summary: S3A Introspect to invoke incompatible AWS 
TransferManagerConfiguration methods
 Key: HADOOP-13062
 URL: https://issues.apache.org/jira/browse/HADOOP-13062
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 2.7.2
Reporter: Steve Loughran


The AWS SDK changed the signature of 
{{TransferManagerConfiguration.setMultipartUploadThreshold}}, widening one 
parameter from an int to a long. This is fixed at compile time, so S3A built 
against the old library doesn't link to the new one, and vice versa, which 
leads to problems downstream.

It may be possible to use reflection to make this binding, at least on the 2.7 
branch, so that dropping in a later SDK doesn't break things.
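A reflection-based binding could look something like this (a sketch only; the 
lookup order and the int fallback are assumptions, not the eventual patch):
{code}
import java.lang.reflect.Method;
import com.amazonaws.services.s3.transfer.TransferManagerConfiguration;

class ThresholdBindingSketch {
  static void setMultipartUploadThreshold(TransferManagerConfiguration conf,
      long threshold) throws Exception {
    try {
      // newer SDKs declare setMultipartUploadThreshold(long)
      Method m = TransferManagerConfiguration.class
          .getMethod("setMultipartUploadThreshold", long.class);
      m.invoke(conf, threshold);
    } catch (NoSuchMethodException e) {
      // older SDKs (e.g. 1.7.4) declare setMultipartUploadThreshold(int)
      Method m = TransferManagerConfiguration.class
          .getMethod("setMultipartUploadThreshold", int.class);
      m.invoke(conf, (int) Math.min(threshold, Integer.MAX_VALUE));
    }
  }
}
{code}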



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-13028) add low level counter metrics for S3A; use in read performance tests

2016-04-26 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15257830#comment-15257830
 ] 

Steve Loughran commented on HADOOP-13028:
-

The checkstyle warnings are all about code doing ++ on volatiles. The 
InputStream API says "single thread only", and while we know HBase ignores 
that, we also know that HBase cannot ever work on S3. These are just little 
counters, nothing critical: if someone does break the threading rules, the 
counters will end up inaccurate.
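For context, the pattern in question is roughly this (field names hypothetical):
{code}
class CountingStreamSketch {
  private volatile long bytesRead; // written only by the stream-owning thread

  void onRead(int n) {
    if (n > 0) {
      bytesRead += n; // non-atomic read-modify-write on a volatile: this is
    }                 // the checkstyle complaint; harmless while the stream
  }                   // is used single-threaded, merely inaccurate otherwise

  long getBytesRead() {
    return bytesRead; // always safe to read from any thread
  }
}
{code}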

> add low level counter metrics for S3A; use in read performance tests
> 
>
> Key: HADOOP-13028
> URL: https://issues.apache.org/jira/browse/HADOOP-13028
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, metrics
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13028-001.patch, HADOOP-13028-002.patch, 
> HADOOP-13028-004.patch, 
> org.apache.hadoop.fs.s3a.scale.TestS3AInputStreamPerformance-output.txt, 
> org.apache.hadoop.fs.s3a.scale.TestS3AInputStreamPerformance-output.txt
>
>
> Against S3 (and other object stores), opening connections can be expensive, 
> and closing connections may be expensive too (a sign of a regression). 
> The S3A FS and individual input streams should have counters for the number 
> of open/close/failure+reconnect operations, and timers for how long things 
> take. This can be used downstream to measure the efficiency of the code (how 
> often connections are being made), connection reliability, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12579) Deprecate and remove WriteableRPCEngine

2016-04-26 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15257782#comment-15257782
 ] 

Kai Zheng commented on HADOOP-12579:


Thanks [~wheat9] for the review! I thought it could be removed. In 
{{RpcHeader.proto}}, I made this simple change:
{code}
/**
 * RpcKind determine the rpcEngine and the serialization of the rpc request
 */
enum RpcKindProto {
  RPC_BUILTIN  = 0;  // Used for built in calls by tests
-  RPC_WRITABLE = 1;  // Use WritableRpcEngine 
+  RPC_WRITABLE = 1;  // Use WritableRpcEngine, the actual usage removed
  RPC_PROTOCOL_BUFFER  = 2;  // Use ProtobufRpcEngine
}
{code}

> Deprecate and remove WriteableRPCEngine
> ---
>
> Key: HADOOP-12579
> URL: https://issues.apache.org/jira/browse/HADOOP-12579
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Haohui Mai
> Attachments: HADOOP-12579-v1.patch, HADOOP-12579-v3.patch, 
> HADOOP-12579-v4.patch, HADOOP-12579-v5.patch, HADOOP-12579-v6.patch, 
> HADOOP-12579-v7.patch, HADOOP-12579-v8.patch
>
>
> The {{WriteableRPCEngine}} depends on Java's serialization mechanisms for RPC 
> requests. Without proper checks, it has been shown that this can lead to 
> security vulnerabilities such as remote code execution (e.g., COLLECTIONS-580, 
> HADOOP-12577).
> The implementation has now migrated from {{WriteableRPCEngine}} to 
> {{ProtobufRPCEngine}}. This jira proposes to deprecate 
> {{WriteableRPCEngine}} in branch-2 and to remove it in trunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12579) Deprecate and remove WriteableRPCEngine

2016-04-26 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-12579:
---
Attachment: HADOOP-12579-v8.patch

Updated the patch according to review comment.

> Deprecate and remove WriteableRPCEngine
> ---
>
> Key: HADOOP-12579
> URL: https://issues.apache.org/jira/browse/HADOOP-12579
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Haohui Mai
> Attachments: HADOOP-12579-v1.patch, HADOOP-12579-v3.patch, 
> HADOOP-12579-v4.patch, HADOOP-12579-v5.patch, HADOOP-12579-v6.patch, 
> HADOOP-12579-v7.patch, HADOOP-12579-v8.patch
>
>
> The {{WriteableRPCEngine}} depends on Java's serialization mechanisms for RPC 
> requests. Without proper checks, it has been shown that this can lead to 
> security vulnerabilities such as remote code execution (e.g., COLLECTIONS-580, 
> HADOOP-12577).
> The implementation has now migrated from {{WriteableRPCEngine}} to 
> {{ProtobufRPCEngine}}. This jira proposes to deprecate 
> {{WriteableRPCEngine}} in branch-2 and to remove it in trunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-13060) While trying to perform a Distcp command, we see the error Exception in thread "main" java.lang.NoSuchMethodError: com.amazonaws.services.s3.transfer.TransferManager.

2016-04-26 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15257768#comment-15257768
 ] 

Steve Loughran commented on HADOOP-13060:
-

Jenny: you need to have the AWS versions of your libraries in sync with what 
Hadoop was built against; Amazon changed the signature of a method, moving an 
argument from an int to a long, which is something that ends up getting frozen 
at compile time.

If you are working with Hadoop 2.7.x, then you will need aws-java-sdk 1.7.4 on 
your classpath. Sorry, but we can't do anything about Amazon quietly breaking 
binary compatibility on their JARs.

Now, if you really want the latest AWS SDK against Hadoop 2.7.x, you can 
actually rebuild Hadoop (more specifically, tools/hadoop-aws) against the 
latest Amazon JAR; it's just that you need to do it at compile time, after 
which the library choice is frozen.

> While trying to perform a Distcp command, we see the error Exception in 
> thread "main" java.lang.NoSuchMethodError: 
> com.amazonaws.services.s3.transfer.TransferManager.<init>(Lcom/amazonaws/services/s3/AmazonS3;Ljava/util/concurrent/ThreadPoolExecutor;)V
> 
>
> Key: HADOOP-13060
> URL: https://issues.apache.org/jira/browse/HADOOP-13060
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.7.1
>Reporter: Jenny Dong
>
> While trying to do a distcp from our native HDFS cluster to S3, we get the 
> following error/stacktrace : 
> We are using hadoop-aws.jar version 2.7.1. We are using aws-java-sdk.jar 
> version 1.10.69 (we bumped this up from 2.7.4 because we were getting errors 
> seen in HADOOP-12420 + other authentication errors).
> Exception in thread "main" java.lang.NoSuchMethodError: 
> com.amazonaws.services.s3.transfer.TransferManager.<init>(Lcom/amazonaws/services/s3/AmazonS3;Ljava/util/concurrent/ThreadPoolExecutor;)V
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:287)
>   at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2653)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:92)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2687)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2669)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:371)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
>   at org.apache.hadoop.tools.DistCp.configureOutputFormat(DistCp.java:333)
>   at org.apache.hadoop.tools.DistCp.createJob(DistCp.java:237)
>   at org.apache.hadoop.tools.DistCp.createAndSubmitJob(DistCp.java:174)
>   at org.apache.hadoop.tools.DistCp.execute(DistCp.java:153)
> I dug into both classes com.amazonaws.services.s3.transfer.TransferManager & 
> org.apache.hadoop.fs.s3a.S3AFileSystem. The only difference is S3AFileSystem 
> created a ThreadPoolExecutor (which implements AbstractExecutorService which 
> implements ExecutorService). I also checked on the classpath to make sure the 
> version of the jars being picked up is what I expected. 
> Help would be much appreciated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-13018) Make Kdiag check whether hadoop.token.files points to existent and valid files

2016-04-26 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15257756#comment-15257756
 ] 

Steve Loughran commented on HADOOP-13018:
-

I'm just trying to understand: why go through reflection rather than calling 
Credentials itself?

Admittedly, I just don't understand these bits of the Hadoop auth codebase 
properly...

> Make Kdiag check whether hadoop.token.files points to existent and valid files
> --
>
> Key: HADOOP-13018
> URL: https://issues.apache.org/jira/browse/HADOOP-13018
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Ravi Prakash
>Assignee: Ravi Prakash
> Attachments: HADOOP-13018.01.patch, HADOOP-13018.02.patch
>
>
> Steve proposed that KDiag should fail fast to help debug the case where 
> hadoop.token.files points to a file that cannot be found. This JIRA is to 
> effect that.
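In other words, something along these lines (a sketch under the assumption 
that the property is a comma-separated list; the real KDiag wiring differs):
{code}
import java.io.File;
import java.io.FileNotFoundException;
import org.apache.hadoop.conf.Configuration;

class TokenFileCheckSketch {
  static void verifyTokenFiles(Configuration conf) throws FileNotFoundException {
    String files = conf.getTrimmed("hadoop.token.files", "");
    if (files.isEmpty()) {
      return; // property unset: nothing to validate
    }
    for (String name : files.split(",")) {
      File f = new File(name.trim());
      if (!f.isFile()) {
        // fail fast with the offending entry rather than a late, obscure error
        throw new FileNotFoundException(
            "hadoop.token.files entry not found: " + f.getAbsolutePath());
      }
    }
  }
}
{code}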



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12957) Limit the number of outstanding async calls

2016-04-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15257647#comment-15257647
 ] 

Hadoop QA commented on HADOOP-12957:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 8s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
26s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 17s 
{color} | {color:green} trunk passed with JDK v1.8.0_92 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 44s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
23s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
36s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s 
{color} | {color:green} trunk passed with JDK v1.8.0_92 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
40s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 56s 
{color} | {color:green} the patch passed with JDK v1.8.0_92 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 56s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 46s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 46s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 21s 
{color} | {color:red} hadoop-common-project/hadoop-common: The patch generated 
2 new + 163 unchanged - 0 fixed = 165 total (was 163) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
1s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s 
{color} | {color:green} the patch passed with JDK v1.8.0_92 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 49s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_92. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 3s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 60m 49s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:fbe3e86 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12800704/HADOOP-12957.001.patch
 |
| JIRA Issue | HADOOP-12957 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux d984e4af1cce 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 

[jira] [Commented] (HADOOP-13057) Async IPC server support

2016-04-26 Thread Siddharth Seth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15257619#comment-15257619
 ] 

Siddharth Seth commented on HADOOP-13057:
-

It would - this is long pending. I don't think I'll be able to update the patch 
for the next several weeks. If anyone wants to take a shot at it in the 
meantime, feel free to.

> Async IPC server support
> 
>
> Key: HADOOP-13057
> URL: https://issues.apache.org/jira/browse/HADOOP-13057
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 2.6.0
>Reporter: He Tianyi
>
> In some applications, the server may run out of handlers when performing many 
> blocking I/O operations during the processing of each call (e.g. calling 
> another service). A viable solution is increasing the number of handlers, but 
> a large number of threads consumes significant memory (stacks, etc.) and 
> causes performance problems of its own.
> After HADOOP-12909, asynchronization work has been done on the caller side. 
> This is a similar proposal for the server side, suggesting the ability to 
> handle requests asynchronously.
> For example, in such a server, calls may return a Future object instead of an 
> immediate value, and the response is then sent to the client in {{onSuccess}} 
> or {{onFailed}} callbacks.
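A rough sketch of that shape (all names hypothetical; this is not Hadoop's 
actual IPC API):
{code}
import java.util.concurrent.CompletableFuture;

class AsyncServerSketch {
  interface Handler {
    // returns a Future instead of an immediate value
    CompletableFuture<byte[]> handle(byte[] request);
  }

  interface Connection {
    void sendResponse(byte[] payload);
    void sendError(Throwable t);
  }

  private final Handler handler;

  AsyncServerSketch(Handler handler) {
    this.handler = handler;
  }

  // The handler thread is released immediately; the response is written
  // from the completion callback instead.
  void process(byte[] request, Connection conn) {
    handler.handle(request).whenComplete((response, error) -> {
      if (error == null) {
        conn.sendResponse(response); // the "onSuccess" path
      } else {
        conn.sendError(error);       // the "onFailed" path
      }
    });
  }
}
{code}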



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)