[jira] [Commented] (HADOOP-13010) Refactor raw erasure coders

2016-05-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15301561#comment-15301561
 ] 

Hudson commented on HADOOP-13010:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #9864 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9864/])
HADOOP-13010. Refactor raw erasure coders. Contributed by Kai Zheng (kai.zheng: 
rev 77202fa1035a54496d11d07472fbc399148ff630)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/DummyRawDecoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/ByteBufferDecodingState.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/XORRawEncoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RSRawDecoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/XORRawDecoder.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/erasurecode/rawcoder/TestDummyRawCoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/RSErasureEncoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/HHXORErasureEncoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/util/CoderUtil.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/package-info.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/ErasureCoderOptions.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/DummyRawEncoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/RSErasureDecoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/AbstractRawErasureEncoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RSRawDecoderLegacy.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/erasurecode/TestCodecRawCoderMapping.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/util/GaloisField.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RawErasureCoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/AbstractRawErasureDecoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/AbstractRawErasureCoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/XORRawErasureCoderFactory.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RawErasureCoderFactory.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/CodecUtil.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RSRawEncoderLegacy.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/CoderUtil.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/EncodingState.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/XORErasureDecoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/ByteArrayDecodingState.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RawErasureEncoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/DummyRawErasureCoderFactory.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RSRawErasureCoderFactoryLegacy.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/ByteBufferEncodingState.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/erasurecode/StripedReconstructor.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/CoderOption.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RawErasureDecoder.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/coder/HHXORErasureDecoder.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/StripedFileTestUtil.java
* 

[jira] [Updated] (HADOOP-13010) Refactor raw erasure coders

2016-05-25 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-13010:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha1
   Status: Resolved  (was: Patch Available)

> Refactor raw erasure coders
> ---
>
> Key: HADOOP-13010
> URL: https://issues.apache.org/jira/browse/HADOOP-13010
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha1
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-13010-v1.patch, HADOOP-13010-v2.patch, 
> HADOOP-13010-v3.patch, HADOOP-13010-v4.patch, HADOOP-13010-v5.patch, 
> HADOOP-13010-v6.patch, HADOOP-13010-v7.patch
>
>
> This will refactor the raw erasure coders according to some comments received 
> so far.
> * As discussed in HADOOP-11540 and suggested by [~cmccabe], better not to 
> rely on class inheritance to reuse the code; instead it can be moved to some 
> utility.
> * Suggested by [~jingzhao] quite some time ago, better to have a state holder 
> to keep some checking results for later reuse during an encode/decode call.
> This will not get rid of some inheritance levels, as how to do so isn't clear 
> yet and it would also have a big impact. I do hope the end result of this 
> refactoring will make all the levels clearer and easier to follow.
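To make the state-holder idea above concrete, here is a minimal sketch (hypothetical names, not the actual HADOOP-13010 classes): input validation runs once per decode call and its results ride along in a state object, while the shared check lives in a static utility rather than an abstract base class.

{code}
import java.nio.ByteBuffer;

/** Hypothetical per-call state holder: inputs are validated once in the
 *  constructor, and every later decode step reuses the cached results. */
final class DecodingStateSketch {
  final ByteBuffer[] inputs;
  final ByteBuffer[] outputs;
  final int[] erasedIndexes;
  final int decodeLength;

  DecodingStateSketch(ByteBuffer[] inputs, int[] erasedIndexes,
                      ByteBuffer[] outputs) {
    this.inputs = inputs;
    this.outputs = outputs;
    this.erasedIndexes = erasedIndexes;
    this.decodeLength = checkAndGetLength(inputs);
  }

  /** Shared validation as a static utility instead of inherited code. */
  private static int checkAndGetLength(ByteBuffer[] buffers) {
    int len = -1;
    for (ByteBuffer b : buffers) {
      if (b == null) {
        continue;                       // a null entry marks an erased unit
      }
      if (len == -1) {
        len = b.remaining();
      } else if (b.remaining() != len) {
        throw new IllegalArgumentException("Input buffers differ in length");
      }
    }
    if (len == -1) {
      throw new IllegalArgumentException("No valid input buffer");
    }
    return len;
  }
}
{code}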



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13010) Refactor raw erasure coders

2016-05-25 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15301555#comment-15301555
 ] 

Kai Zheng commented on HADOOP-13010:


Thanks [~cmccabe] for the extensive reviewing! Also thanks [~lirui] for working 
on a patch update. Committed to trunk.

> Refactor raw erasure coders
> ---
>
> Key: HADOOP-13010
> URL: https://issues.apache.org/jira/browse/HADOOP-13010
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha1
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-13010-v1.patch, HADOOP-13010-v2.patch, 
> HADOOP-13010-v3.patch, HADOOP-13010-v4.patch, HADOOP-13010-v5.patch, 
> HADOOP-13010-v6.patch, HADOOP-13010-v7.patch
>
>
> This will refactor the raw erasure coders according to some comments received 
> so far.
> * As discussed in HADOOP-11540 and suggested by [~cmccabe], better not to 
> rely on class inheritance to reuse the code; instead it can be moved to some 
> utility.
> * Suggested by [~jingzhao] quite some time ago, better to have a state holder 
> to keep some checking results for later reuse during an encode/decode call.
> This will not get rid of some inheritance levels, as how to do so isn't clear 
> yet and it would also have a big impact. I do hope the end result of this 
> refactoring will make all the levels clearer and easier to follow.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12910) Add new FileSystem API to support asynchronous method calls

2016-05-25 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15301546#comment-15301546
 ] 

Chris Nauroth commented on HADOOP-12910:


Hello [~stack].

I'm very curious to learn more about the work happening in HBase.  If I 
understand correctly from a brief read of HBASE-14790, you're looking more at 
asynchrony at the data transfer protocol layer ("...implement our own 
{{DFSOutputStream}}..."), whereas the scope of this issue has focused on 
asynchronous NameNode metadata operations.  If my understanding is correct, 
then I'd suggest a separate HDFS JIRA for tracking separate scope.

Either way, it sounds like the HBase community has done some work that would 
ideally be provided directly within HDFS to benefit all users.  Would you or 
someone else from the HBase community mind sharing a high-level write-up of 
current state and some links to relevant portions of the HBase code, for the 
benefit of those of us who don't often read the HBase code?  I tried to piece 
it together by reading HBASE-14790 and its sub-tasks, but I think I'd benefit 
from hearing summarized information straight from the HBase experts.  Maybe 
that new HDFS JIRA is the right place to do this.

Thank you.

> Add new FileSystem API to support asynchronous method calls
> ---
>
> Key: HADOOP-12910
> URL: https://issues.apache.org/jira/browse/HADOOP-12910
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
> Attachments: HADOOP-12910-HDFS-9924.000.patch, 
> HADOOP-12910-HDFS-9924.001.patch, HADOOP-12910-HDFS-9924.002.patch
>
>
> Add a new API, namely FutureFileSystem (or AsynchronousFileSystem, if it is a 
> better name).  All the APIs in FutureFileSystem are the same as FileSystem 
> except that the return type is wrapped by Future, e.g.
> {code}
>   //FileSystem
>   public boolean rename(Path src, Path dst) throws IOException;
>   //FutureFileSystem
>   public Future<Boolean> rename(Path src, Path dst) throws IOException;
> {code}
> Note that FutureFileSystem does not extend FileSystem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12892) fix/rewrite create-release

2016-05-25 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15301334#comment-15301334
 ] 

Andrew Wang commented on HADOOP-12892:
--

I think Allen was the one who flagged it as incompatible (so he can correct me 
if I'm wrong), but my understanding is that it's only incompatible for people 
who are building releases. This might affect Bigtop or distribution vendors, 
but in the end it should be producing the same src and bin tarballs.

So I think it's safe to backport to 2.8, and probably doesn't need to be 
flagged as incompatible since it doesn't affect end users.

> fix/rewrite create-release
> --
>
> Key: HADOOP-12892
> URL: https://issues.apache.org/jira/browse/HADOOP-12892
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-12892.00.patch, HADOOP-12892.01.patch, 
> HADOOP-12892.02.patch, HADOOP-12892.03.patch
>
>
> create-release needs some major surgery.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12910) Add new FileSystem API to support asynchronous method calls

2016-05-25 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15301326#comment-15301326
 ] 

Andrew Wang commented on HADOOP-12910:
--

Per request, reposting my comment from HDFS-9924 here:

{quote}
I'm still not convinced enough to change my -1 on Future in 2.8.

Even if what's currently committed is marked Unstable, I don't want to rush 
ahead with an API we know is insufficient for async-style programming. Earlier 
in this JIRA's comments, others were asking about ListenableFuture for the same 
reasons. It's not fair to push the burden of supporting multiple APIs onto our 
downstreams, when we have a few possible solutions close at-hand:

* Use Deferred, which HBase and Kudu adopted due to the lack of 
CompletableFuture in JDK7. ListenableFuture might be good too.
* Target this for 3.0 and use CompletableFuture. We're actively working on 3.0, 
and the first 3.0.0 alpha is likely coming out around the same time as 2.8.0.
{quote}
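To spell out the API gap underlying the quoted concern: a plain {{Future}} only lets the caller block or poll, while JDK8's {{CompletableFuture}} lets completion logic be attached. A minimal sketch against a hypothetical async rename (no committed Hadoop API implied):

{code}
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;

class AsyncStyleSketch {
  // A plain Future forces the caller to park a thread (or poll isDone()):
  static void plainFuture(Future<Boolean> rename)
      throws InterruptedException, ExecutionException {
    boolean renamed = rename.get();          // blocks until completion
    System.out.println("renamed: " + renamed);
  }

  // A CompletableFuture lets the caller attach the follow-up logic instead:
  static void completable(CompletableFuture<Boolean> rename) {
    rename.thenAccept(renamed -> System.out.println("renamed: " + renamed))
          .exceptionally(t -> { t.printStackTrace(); return null; });
  }
}
{code}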

> Add new FileSystem API to support asynchronous method calls
> ---
>
> Key: HADOOP-12910
> URL: https://issues.apache.org/jira/browse/HADOOP-12910
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
> Attachments: HADOOP-12910-HDFS-9924.000.patch, 
> HADOOP-12910-HDFS-9924.001.patch, HADOOP-12910-HDFS-9924.002.patch
>
>
> Add a new API, namely FutureFileSystem (or AsynchronousFileSystem, if it is a 
> better name).  All the APIs in FutureFileSystem are the same as FileSystem 
> except that the return type is wrapped by Future, e.g.
> {code}
>   //FileSystem
>   public boolean rename(Path src, Path dst) throws IOException;
>   //FutureFileSystem
>   public Future<Boolean> rename(Path src, Path dst) throws IOException;
> {code}
> Note that FutureFileSystem does not extend FileSystem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13010) Refactor raw erasure coders

2016-05-25 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15301319#comment-15301319
 ] 

Colin Patrick McCabe commented on HADOOP-13010:
---

Thanks for your work on this, [~drankye].  +1.  Let's continue the discussion 
on the follow-on JIRAs.

> Refactor raw erasure coders
> ---
>
> Key: HADOOP-13010
> URL: https://issues.apache.org/jira/browse/HADOOP-13010
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha1
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-13010-v1.patch, HADOOP-13010-v2.patch, 
> HADOOP-13010-v3.patch, HADOOP-13010-v4.patch, HADOOP-13010-v5.patch, 
> HADOOP-13010-v6.patch, HADOOP-13010-v7.patch
>
>
> This will refactor the raw erasure coders according to some comments received 
> so far.
> * As discussed in HADOOP-11540 and suggested by [~cmccabe], better not to 
> rely on class inheritance to reuse the code; instead it can be moved to some 
> utility.
> * Suggested by [~jingzhao] quite some time ago, better to have a state holder 
> to keep some checking results for later reuse during an encode/decode call.
> This will not get rid of some inheritance levels, as how to do so isn't clear 
> yet and it would also have a big impact. I do hope the end result of this 
> refactoring will make all the levels clearer and easier to follow.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12911) Upgrade Hadoop MiniKDC with Kerby

2016-05-25 Thread Jiajia Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15301262#comment-15301262
 ] 

Jiajia Li commented on HADOOP-12911:


bq. Looks like the reported findbugs issue needs to be addressed.
This is an issue in Kerby: when stopping the KDC server, it did not wait until 
the network pool had terminated. Without the "Thread.sleep()" call in the 
stop() function, restarting MiniKdc fails. I've fixed this in DIRKRB-552.

> Upgrade Hadoop MiniKDC with Kerby
> -
>
> Key: HADOOP-12911
> URL: https://issues.apache.org/jira/browse/HADOOP-12911
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Reporter: Jiajia Li
>Assignee: Jiajia Li
> Attachments: HADOOP-12911-v1.patch, HADOOP-12911-v2.patch, 
> HADOOP-12911-v3.patch, HADOOP-12911-v4.patch, HADOOP-12911-v5.patch, 
> HADOOP-12911-v6.patch, HADOOP-12911-v7.patch, HaDOOP-12911-v8.patch
>
>
> As discussed in the mailing list, we’d like to introduce Apache Kerby into 
> Hadoop. Initially it’s good to start with upgrading Hadoop MiniKDC with Kerby 
> offerings. Apache Kerby (https://github.com/apache/directory-kerby), as an 
> Apache Directory sub project, is a Java Kerberos binding. It provides a 
> SimpleKDC server that borrowed ideas from MiniKDC and implemented all the 
> facilities existing in MiniKDC. Currently MiniKDC depends on the old Kerberos 
> implementation in the Directory Server project, but that implementation is 
> no longer maintained. The Directory community plans to replace the 
> implementation using Kerby. MiniKDC can use Kerby SimpleKDC directly to avoid 
> depending on the full Directory project. Kerby also provides nice identity 
> backends such as the lightweight memory based one and the very simple json 
> one for easy development and test environments.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13203) S3a: Consider reducing the number of connection aborts by setting correct length in s3 request

2016-05-25 Thread Rajesh Balamohan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajesh Balamohan updated HADOOP-13203:
--
Target Version/s: 2.8.0

> S3a: Consider reducing the number of connection aborts by setting correct 
> length in s3 request
> --
>
> Key: HADOOP-13203
> URL: https://issues.apache.org/jira/browse/HADOOP-13203
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: HADOOP-13203-branch-2-001.patch
>
>
> Currently the file's "contentLength" is set as the "requestedStreamLen" when 
> invoking S3AInputStream::reopen().  As part of lazySeek(), the stream 
> sometimes has to be closed and reopened, but many times the stream is closed 
> with abort(), leaving the internal HTTP connection unusable. This incurs a 
> lot of connection-establishment cost in some jobs. It would be good to set 
> the correct value for the stream length to avoid connection aborts. 
> I will post the patch once the AWS tests pass on my machine.
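To illustrate the idea in the description (a sketch against the AWS SDK v1 ranged-GET API, not the actual S3AInputStream patch): if the request covers only the bytes the reader is expected to consume, an early close() has little left to drain and need not abort the connection.

{code}
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.GetObjectRequest;
import com.amazonaws.services.s3.model.S3Object;

class RangedReopenSketch {
  /** Open a stream over [pos, pos + expectedReadLen) instead of
   *  [pos, contentLength), so close() can cheaply drain the remainder. */
  static S3Object reopen(AmazonS3 s3, String bucket, String key,
                         long pos, long expectedReadLen, long contentLength) {
    long end = Math.min(contentLength, pos + expectedReadLen) - 1; // inclusive
    GetObjectRequest req = new GetObjectRequest(bucket, key)
        .withRange(pos, end);
    return s3.getObject(req);
  }
}
{code}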



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12724) Let BufferedFSInputStream implement CanUnbuffer

2016-05-25 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HADOOP-12724:

Description: 
When trying to determine the reason for a test failure over in HBASE-9393, I 
saw the following exception:

{code}
testSeekTo[4](org.apache.hadoop.hbase.io.hfile.TestSeekTo)  Time elapsed: 0.033 
sec  <<< ERROR!
java.lang.UnsupportedOperationException: this stream does not support 
unbuffering.
at 
org.apache.hadoop.fs.FSDataInputStream.unbuffer(FSDataInputStream.java:229)
at 
org.apache.hadoop.fs.FSDataInputStream.unbuffer(FSDataInputStream.java:227)
at 
org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:518)
at org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:562)
at 
org.apache.hadoop.hbase.io.hfile.TestSeekTo.testSeekToInternals(TestSeekTo.java:307)
at 
org.apache.hadoop.hbase.io.hfile.TestSeekTo.testSeekTo(TestSeekTo.java:298)
{code}
Here is the cause:
{code}
java.lang.ClassCastException: org.apache.hadoop.fs.BufferedFSInputStream cannot 
be cast to org.apache.hadoop.fs.CanUnbuffer
{code}
See the comments starting with 
https://issues.apache.org/jira/browse/HBASE-9393?focusedCommentId=15105939&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15105939
 for background on the HBase patch.

This issue is to make BufferedFSInputStream implement CanUnbuffer.

This would benefit hbase unit tests.

Thanks to [~cmccabe] for discussion.

  was:
When trying to determine the reason for a test failure over in HBASE-9393, I 
saw the following exception:
{code}
testSeekTo[4](org.apache.hadoop.hbase.io.hfile.TestSeekTo)  Time elapsed: 0.033 
sec  <<< ERROR!
java.lang.UnsupportedOperationException: this stream does not support 
unbuffering.
at 
org.apache.hadoop.fs.FSDataInputStream.unbuffer(FSDataInputStream.java:229)
at 
org.apache.hadoop.fs.FSDataInputStream.unbuffer(FSDataInputStream.java:227)
at 
org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:518)
at org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:562)
at 
org.apache.hadoop.hbase.io.hfile.TestSeekTo.testSeekToInternals(TestSeekTo.java:307)
at 
org.apache.hadoop.hbase.io.hfile.TestSeekTo.testSeekTo(TestSeekTo.java:298)
{code}
Here is the cause:
{code}
java.lang.ClassCastException: org.apache.hadoop.fs.BufferedFSInputStream cannot 
be cast to org.apache.hadoop.fs.CanUnbuffer
{code}
See the comments starting with 
https://issues.apache.org/jira/browse/HBASE-9393?focusedCommentId=15105939&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15105939
 for background on the HBase patch.

This issue is to make BufferedFSInputStream implement CanUnbuffer.

This would benefit hbase unit tests.

Thanks to [~cmccabe] for discussion.


> Let BufferedFSInputStream implement CanUnbuffer
> ---
>
> Key: HADOOP-12724
> URL: https://issues.apache.org/jira/browse/HADOOP-12724
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ted Yu
>
> When trying to determine the reason for a test failure over in HBASE-9393, I 
> saw the following exception:
> {code}
> testSeekTo[4](org.apache.hadoop.hbase.io.hfile.TestSeekTo)  Time elapsed: 
> 0.033 sec  <<< ERROR!
> java.lang.UnsupportedOperationException: this stream does not support 
> unbuffering.
>   at 
> org.apache.hadoop.fs.FSDataInputStream.unbuffer(FSDataInputStream.java:229)
>   at 
> org.apache.hadoop.fs.FSDataInputStream.unbuffer(FSDataInputStream.java:227)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:518)
>   at org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:562)
>   at 
> org.apache.hadoop.hbase.io.hfile.TestSeekTo.testSeekToInternals(TestSeekTo.java:307)
>   at 
> org.apache.hadoop.hbase.io.hfile.TestSeekTo.testSeekTo(TestSeekTo.java:298)
> {code}
> Here is the cause:
> {code}
> java.lang.ClassCastException: org.apache.hadoop.fs.BufferedFSInputStream 
> cannot be cast to org.apache.hadoop.fs.CanUnbuffer
> {code}
> See the comments starting with 
> https://issues.apache.org/jira/browse/HBASE-9393?focusedCommentId=15105939&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15105939
>  for background on the HBase patch.
> This issue is to make BufferedFSInputStream implement CanUnbuffer.
> This would benefit hbase unit tests.
> Thanks to [~cmccabe] for discussion.
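For context, {{CanUnbuffer}} declares a single {{unbuffer()}} method. A rough sketch of how a buffered wrapper could satisfy it (hypothetical class, not the committed patch): release local buffering, then delegate downstream when the wrapped stream supports it.

{code}
import java.io.BufferedInputStream;
import java.io.InputStream;

import org.apache.hadoop.fs.CanUnbuffer;

/** Hypothetical buffered wrapper that forwards unbuffer() downstream. */
class UnbufferableBufferedStream extends BufferedInputStream
    implements CanUnbuffer {

  UnbufferableBufferedStream(InputStream in, int bufferSize) {
    super(in, bufferSize);
  }

  @Override
  public void unbuffer() {
    // A real implementation would release its buffer while preserving the
    // logical stream position (feasible when the wrapped stream is seekable).
    if (in instanceof CanUnbuffer) {
      ((CanUnbuffer) in).unbuffer();
    }
  }
}
{code}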



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-10720) KMS: Implement generateEncryptedKey and decryptEncryptedKey in the REST API

2016-05-25 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15301149#comment-15301149
 ] 

Xiao Chen commented on HADOOP-10720:


Hi [~tucu00] and [~asuresh],
Thank you very much for the nice feature and great discussions on adding this.

I have one question:
Since the client side has {{encKeyVersionQueue}} to protect the KMS server, 
most requests for generating EEKs don't reach the KMS server. The ACLs, 
however, are enforced on the KMS server side only. How can the ACLs be checked 
in the cached case?

Thanks!
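For readers following along: the cache in question means that generate-EEK calls are mostly served from a client-side queue that is refilled in batches, so the server-side ACL check only runs at refill time. A generic sketch of that pattern (illustrative only, not the actual KMS client code):

{code}
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.function.Supplier;

/** Generic client-side prefetching: the expensive, ACL-checked server
 *  call happens only when the local queue needs a refill. */
class PrefetchQueueSketch<T> {
  private final Queue<T> cached = new ConcurrentLinkedQueue<>();
  private final Supplier<T> serverCall;  // ACLs are enforced inside this call
  private final int batchSize;

  PrefetchQueueSketch(Supplier<T> serverCall, int batchSize) {
    this.serverCall = serverCall;
    this.batchSize = batchSize;
  }

  T next() {
    T value = cached.poll();
    if (value != null) {
      return value;                      // served locally, no ACL re-check
    }
    for (int i = 1; i < batchSize; i++) {
      cached.add(serverCall.get());      // refill: each fetch is ACL-checked
    }
    return serverCall.get();
  }
}
{code}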

> KMS: Implement generateEncryptedKey and decryptEncryptedKey in the REST API
> ---
>
> Key: HADOOP-10720
> URL: https://issues.apache.org/jira/browse/HADOOP-10720
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 3.0.0-alpha1
>Reporter: Alejandro Abdelnur
>Assignee: Arun Suresh
> Fix For: 2.6.0
>
> Attachments: COMBO.patch, COMBO.patch, COMBO.patch, COMBO.patch, 
> COMBO.patch, HADOOP-10720-10750.COMBO.patch, HADOOP-10720.1.patch, 
> HADOOP-10720.10.patch, HADOOP-10720.11.patch, HADOOP-10720.12.patch, 
> HADOOP-10720.13.patch, HADOOP-10720.14.patch, HADOOP-10720.15.patch, 
> HADOOP-10720.16.patch, HADOOP-10720.17.patch, HADOOP-10720.18.patch, 
> HADOOP-10720.19.patch, HADOOP-10720.2.patch, HADOOP-10720.20.patch, 
> HADOOP-10720.3.patch, HADOOP-10720.4.patch, HADOOP-10720.5.patch, 
> HADOOP-10720.6.patch, HADOOP-10720.7.patch, HADOOP-10720.8.patch, 
> HADOOP-10720.9.patch, HADOOP-10720.patch, HADOOP-10720.patch, 
> HADOOP-10720.patch, HADOOP-10720.patch, HADOOP-10720.patch
>
>
> KMS client/server should implement support for generating encrypted keys and 
> decrypting them via the REST API being introduced by HADOOP-10719.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13137) TraceAdmin should support Kerberized cluster

2016-05-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15301130#comment-15301130
 ] 

Hadoop QA commented on HADOOP-13137:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 29s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 36s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 58s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
25s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 3s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 5s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
55s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 53s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 53s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
27s {color} | {color:green} root: The patch generated 0 new + 11 unchanged - 2 
fixed = 11 total (was 13) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 11s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
27s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
37s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 2s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 30s {color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 73m 17s {color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 125m 23s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.net.TestDNS |
|   | hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12806225/HADOOP-13137.005.patch
 |
| JIRA Issue | HADOOP-13137 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 3d560cad5db3 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 1ba31fe |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9584/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9584/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit test logs |  

[jira] [Commented] (HADOOP-12727) Minor cleanups needed for CMake 3.X

2016-05-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15301090#comment-15301090
 ] 

Hadoop QA commented on HADOOP-12727:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 29s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
39s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 18s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 6m 18s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 18s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 54s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 29s {color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
20s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 30m 14s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ha.TestZKFailoverController |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12793718/HADOOP-12727.001.patch
 |
| JIRA Issue | HADOOP-12727 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux b2f02172c33d 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 3c83cee |
| Default Java | 1.8.0_91 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9586/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HADOOP-Build/9586/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9586/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9586/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Minor cleanups needed for CMake 3.X
> ---
>
> Key: HADOOP-12727
> URL: https://issues.apache.org/jira/browse/HADOOP-12727
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: native
>Affects Versions: 2.7.1
>Reporter: Alan Burlison
>Assignee: Alan Burlison
>Priority: Minor
> Attachments: HADOOP-12727.001.patch
>
>
> On switching from CMake 2.8.6 to 3.3.2 a couple of minor issues popped up:
> \\
> \\
> * There's a syntax error in 
> {{hadoop-common-project/hadoop-common/src/CMakeLists.txt}} that generates a 
> warning in 3.X
> * {{CMAKE_SHARED_LINKER_FLAGS}} is being incorrectly set in 
> {{hadoop-common-project/hadoop-common/HadoopCommon.cmake}} - despite the name 
> it contains the flags passed to {{ar}} not to the linker. 2.8.6 ignores the 
> incorrect flags, 3.3.2 doesn't and building static libraries fails as a 
> result. See http://public.kitware.com/pipermail/cmake/2016-January/062447.html
> Patch to follow.

[jira] [Commented] (HADOOP-12910) Add new FileSystem API to support asynchronous method calls

2016-05-25 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15301094#comment-15301094
 ] 

stack commented on HADOOP-12910:


The patch here converts one method only? Is the intent to do all methods (along 
w/ the spec suggested by [~steve_l])?

Is this issue for an AsyncFileSystem or for an async rename only? Are we 
targeting H3 only, or is there some thought that this could get pulled back 
into H2?

bq. Futures are a good match for the use case where the consumer wants to kick 
off a multitude of async requests and wait until they are all done to make 
progress, but we've found that there are also compelling use cases where you 
want a small amount of logic and further async I/O in a completion handler, so 
I might recommend supporting both Future-based results as well as 
callback-based results.

A few of us (mainly [~Apache9]) are looking at being able to go async against 
HDFS. There is already a stripped-down async subset of DFSClient, done by 
[~Apache9], that we are using to write our WALs; it uses far fewer resources 
while going much faster (see HBASE-14790). As Duo says, we want to push this up 
into HDFS, and given our good experience with this effort, we want to convert 
more of our HDFS connection to be async. Parking a resource waiting on a Future 
to complete, or keeping some list/queue of Futures that we check periodically 
to see if they are 'done', is much less attractive (and less performant) than 
being notified on completion -- a callback (as [~bobhansen] suggests above in 
the comment repeated here). Ideally we'd like to move our interaction with HDFS 
to be event-driven (ultimately backing this up all the way into the guts of the 
regionserver, but that is another story).

OK if we put up a suggested patch that, say, presumes jdk8/h3 only and, instead 
of returning Future, returns a jdk8 CompletableFuture? Chatting yesterday, we 
think we could consume/feed HDFS in a non-blocking way if we got back a 
CompletableFuture (or we could add a callback handler as a param on a method if 
folks preferred that). We'd put up a sketch patch, and if amenable, we could 
start up a sympathetic spec doc as a subtask so code and spec arrive at the 
same time?

Thanks.
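To make the callback preference concrete, a small sketch of the chaining style being proposed (hypothetical async primitives, nothing committed in HDFS): completion handlers drive the next I/O instead of a thread parked on {{Future.get()}}.

{code}
import java.util.concurrent.CompletableFuture;

class EventDrivenWalSketch {
  // Hypothetical async primitives standing in for an async DFS client.
  static CompletableFuture<Long> write(byte[] data) {
    return CompletableFuture.completedFuture((long) data.length);
  }

  static CompletableFuture<Void> sync(long offset) {
    return CompletableFuture.completedFuture(null);
  }

  /** Chain write -> sync -> error handling entirely in completion handlers. */
  static CompletableFuture<Void> append(byte[] edit) {
    return write(edit)
        .thenCompose(EventDrivenWalSketch::sync)  // further async I/O
        .whenComplete((v, t) -> {
          if (t != null) {
            System.err.println("WAL append failed: " + t);
          }
        });
  }
}
{code}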

> Add new FileSystem API to support asynchronous method calls
> ---
>
> Key: HADOOP-12910
> URL: https://issues.apache.org/jira/browse/HADOOP-12910
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
> Attachments: HADOOP-12910-HDFS-9924.000.patch, 
> HADOOP-12910-HDFS-9924.001.patch, HADOOP-12910-HDFS-9924.002.patch
>
>
> Add a new API, namely FutureFileSystem (or AsynchronousFileSystem, if it is a 
> better name).  All the APIs in FutureFileSystem are the same as FileSystem 
> except that the return type is wrapped by Future, e.g.
> {code}
>   //FileSystem
>   public boolean rename(Path src, Path dst) throws IOException;
>   //FutureFileSystem
>   public Future<Boolean> rename(Path src, Path dst) throws IOException;
> {code}
> Note that FutureFileSystem does not extend FileSystem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13132) LoadBalancingKMSClientProvider ClassCastException on AuthenticationException

2016-05-25 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15301099#comment-15301099
 ] 

Wei-Chiu Chuang commented on HADOOP-13132:
--

Ping. [~andrew.wang] can you review it? Thanks very much!

> LoadBalancingKMSClientProvider ClassCastException on AuthenticationException
> 
>
> Key: HADOOP-13132
> URL: https://issues.apache.org/jira/browse/HADOOP-13132
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Miklos Szurap
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-13132.001.patch, HADOOP-13132.002.patch, 
> HADOOP-13132.003.patch
>
>
> An Oozie job with a single shell action fails (may not be important, but if 
> you need the exact details I can provide them) with an error message coming 
> from NodeManager:
> {code}
> 2016-05-10 11:10:14,290 ERROR 
> org.apache.hadoop.yarn.YarnUncaughtExceptionHandler: Thread 
> Thread[LogAggregationService #652,5,main] threw an Exception.
> java.lang.ClassCastException: 
> org.apache.hadoop.security.authentication.client.AuthenticationException 
> cannot be cast to java.security.GeneralSecurityException
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.decryptEncryptedKey(LoadBalancingKMSClientProvider.java:189)
> at 
> org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:388)
> at 
> org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:1419)
> at 
> org.apache.hadoop.hdfs.DFSClient.createWrappedOutputStream(DFSClient.java:1521)
> at org.apache.hadoop.fs.Hdfs.createInternal(Hdfs.java:108)
> at org.apache.hadoop.fs.Hdfs.createInternal(Hdfs.java:59)
> at org.apache.hadoop.fs.AbstractFileSystem.create(AbstractFileSystem.java:577)
> at org.apache.hadoop.fs.FileContext$3.next(FileContext.java:683)
> at org.apache.hadoop.fs.FileContext$3.next(FileContext.java:679)
> at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90)
> at org.apache.hadoop.fs.FileContext.create(FileContext.java:679)
> at 
> org.apache.hadoop.yarn.logaggregation.AggregatedLogFormat$LogWriter$1.run(AggregatedLogFormat.java:382)
> at 
> org.apache.hadoop.yarn.logaggregation.AggregatedLogFormat$LogWriter$1.run(AggregatedLogFormat.java:377)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
> at 
> org.apache.hadoop.yarn.logaggregation.AggregatedLogFormat$LogWriter.(AggregatedLogFormat.java:376)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AppLogAggregatorImpl.uploadLogsForContainers(AppLogAggregatorImpl.java:246)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AppLogAggregatorImpl.doAppLogAggregation(AppLogAggregatorImpl.java:456)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AppLogAggregatorImpl.run(AppLogAggregatorImpl.java:421)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.LogAggregationService$2.run(LogAggregationService.java:384)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> The unsafe cast is here:
> https://github.com/apache/hadoop/blob/2e1d0ff4e901b8313c8d71869735b94ed8bc40a0/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/LoadBalancingKMSClientProvider.java#L189
> Because of this ClassCastException:
> - an uncaught exception is raised
> - we do not see the exact "caused by" exception/message
> - the oozie job fails
> - YARN logs are not reported/saved
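One way to picture the bug: the catch block assumes every non-IOException cause is a {{GeneralSecurityException}} and casts blindly. A defensive sketch (illustrative only, not necessarily the committed fix) checks the type before casting and wraps anything else so the original cause stays visible:

{code}
import java.io.IOException;
import java.security.GeneralSecurityException;

class SafeRethrowSketch {
  static void rethrow(Exception cause)
      throws IOException, GeneralSecurityException {
    if (cause instanceof IOException) {
      throw (IOException) cause;
    }
    if (cause instanceof GeneralSecurityException) {
      throw (GeneralSecurityException) cause;  // safe: type checked first
    }
    // Anything else (e.g. AuthenticationException) is wrapped, not cast,
    // so no ClassCastException and the root cause is preserved.
    throw new IOException(cause);
  }
}
{code}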



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13196) DF default interval value is not consistent

2016-05-25 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15301089#comment-15301089
 ] 

Wei-Chiu Chuang commented on HADOOP-13196:
--

[~iwasakims], thanks, you're right! I was confused initially.

> DF default interval value is not consistent
> ---
>
> Key: HADOOP-13196
> URL: https://issues.apache.org/jira/browse/HADOOP-13196
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Trivial
> Attachments: HADOOP-13196.001.patch
>
>
> In {{core-default.xml}}, the value of the property {{fs.df.interval}} is 
> 60000. This value is defined in 
> {{CommonConfigurationKeysPublic.FS_DF_INTERVAL_DEFAULT}}; however, that 
> constant is never used.
> When this property is used in {{DF}}, the default value is 
> {{DF.DF_INTERVAL_DEFAULT}} = 3000.
> This can cause potential confusion and should be fixed.
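The mismatch is easy to see in code form; a minimal sketch of the lookup (constants inlined for illustration):

{code}
import org.apache.hadoop.conf.Configuration;

class DfIntervalSketch {
  static long dfInterval(Configuration conf) {
    // core-default.xml documents fs.df.interval = 60000, matching
    // CommonConfigurationKeysPublic.FS_DF_INTERVAL_DEFAULT, but DF reads
    // the key with its own fallback of 3000 ms:
    return conf.getLong("fs.df.interval", 3000L);
  }
}
{code}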



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-10048) LocalDirAllocator should avoid holding locks while accessing the filesystem

2016-05-25 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15301081#comment-15301081
 ] 

Junping Du commented on HADOOP-10048:
-

bq. No, that could trigger an array bounds exception if we update it to a 
value that is past the number of directories in the other, unrelated context. 
Also we don't need to worry about this particular race.
We can simply move {{dirNum % numDirs}} ahead of {{ctx.dirDF[dirNum]}} to get 
rid of the array-out-of-bounds issue. However, I agree that this particular 
race is not important, given that the value of dirNumLastAccessed could mean 
something different in a different context.
Under the same context, marking dirNumLastAccessed as volatile could still 
cause multiple threads to end up with the same dirNumLastAccessed if {{int 
dirNum = ctx.dirNumLastAccessed;}} is executed at almost the same time. In 
that case, the previous round-robin pickup of disks with available capacity is 
broken, and we might use a random pick instead. Otherwise, disk accesses could 
concentrate on one particular disk. Thoughts?
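For illustration, one lock-free way to keep the round-robin pickup intact under concurrency (a sketch, not the actual LocalDirAllocator code): advance the shared index atomically so no two threads observe the same value, and apply the modulo before any array access.

{code}
import java.util.concurrent.atomic.AtomicInteger;

class RoundRobinDirSketch {
  private final AtomicInteger lastDir = new AtomicInteger();

  /** Returns a distinct starting directory per caller, modulo numDirs. */
  int nextDir(int numDirs) {
    // getAndIncrement is atomic, so concurrent callers never read the same
    // value; floorMod bounds the index before it touches any array, even
    // after int overflow.
    return Math.floorMod(lastDir.getAndIncrement(), numDirs);
  }
}
{code}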

bq. Since this is not related to this change and could degrade the error 
diagnostics in some corner cases, I'm tempted to leave it as-is.  If we feel 
it's important to fix it then we can tackle it in a followup JIRA where it does 
the full file stat first, checks the corner cases, then calls mkdirs if 
necessary.
That sounds like a reasonable plan. We can discuss this later in another JIRA.

> LocalDirAllocator should avoid holding locks while accessing the filesystem
> ---
>
> Key: HADOOP-10048
> URL: https://issues.apache.org/jira/browse/HADOOP-10048
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.3.0
>Reporter: Jason Lowe
>Assignee: Jason Lowe
> Attachments: HADOOP-10048.003.patch, HADOOP-10048.004.patch, 
> HADOOP-10048.005.patch, HADOOP-10048.patch, HADOOP-10048.trunk.patch
>
>
> As noted in MAPREDUCE-5584 and HADOOP-7016, LocalDirAllocator can be a 
> bottleneck for multithreaded setups like the ShuffleHandler.  We should 
> consider moving to a lockless design or minimizing the critical sections to a 
> very small amount of time that does not involve I/O operations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12727) Minor cleanups needed for CMake 3.X

2016-05-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15301044#comment-15301044
 ] 

Hadoop QA commented on HADOOP-12727:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
22s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 41s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
40s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 26s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 7m 26s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 26s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 44s {color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 33m 2s {color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ipc.TestRPCWaitForProxy |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12793718/HADOOP-12727.001.patch
 |
| JIRA Issue | HADOOP-12727 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 8de0e062b05f 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 1ba31fe |
| Default Java | 1.8.0_91 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9585/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HADOOP-Build/9585/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9585/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9585/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Minor cleanups needed for CMake 3.X
> ---
>
> Key: HADOOP-12727
> URL: https://issues.apache.org/jira/browse/HADOOP-12727
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: native
>Affects Versions: 2.7.1
>Reporter: Alan Burlison
>Assignee: Alan Burlison
>Priority: Minor
> Attachments: HADOOP-12727.001.patch
>
>
> On switching from CMake 2.8.6 to 3.3.2 a couple of minor issues popped up:
> \\
> \\
> * There's a syntax error in 
> {{hadoop-common-project/hadoop-common/src/CMakeLists.txt}} that generates a 
> warning in 3.X
> * {{CMAKE_SHARED_LINKER_FLAGS}} is being incorrectly set in 
> {{hadoop-common-project/hadoop-common/HadoopCommon.cmake}} - despite the name 
> it contains the flags passed to {{ar}} not to the linker. 2.8.6 ignores the 
> incorrect flags, 3.3.2 doesn't and building static libraries fails as a 
> result. See http://public.kitware.com/pipermail/cmake/2016-January/062447.html
> Patch to follow.

[jira] [Commented] (HADOOP-12727) Minor cleanups needed for CMake 3.X

2016-05-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15301032#comment-15301032
 ] 

Hadoop QA commented on HADOOP-12727:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 00s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 00s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} machack {color} | {color:blue} 0m 01s 
{color} | {color:blue} Applied YARN-5121 so that YARN native on OS X works 
{color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
56s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 3m 56s 
{color} | {color:red} root in trunk failed. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
51s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 3m 36s 
{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 3m 36s {color} | 
{color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 3m 36s {color} 
| {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 10m 12s {color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 26m 58s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.TestSymlinkLocalFSFileContext |
|   | hadoop.fs.TestSymlinkLocalFSFileSystem |
|   | hadoop.net.unix.TestDomainSocket |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12793718/HADOOP-12727.001.patch
 |
| JIRA Issue | HADOOP-12727 |
| Optional Tests |  compile  cc  javac  unit  |
| uname | Darwin Gavins-Mac-mini.local 13.2.0 Darwin Kernel Version 13.2.0: Thu 
Apr 17 23:03:13 PDT 2014; root:xnu-2422.100.13~1/RELEASE_X86_64 x86_64 |
| Build tool | maven |
| Personality | 
/Users/jenkins/jenkins-home/workspace/Precommit-HADOOP-OSX/patchprocess/apache-yetus-8bf2ab7/precommit/personality/hadoop.sh
 |
| git revision | trunk / 1ba31fe |
| Default Java | 1.8.0_74 |
| compile | 
https://builds.apache.org/job/Precommit-HADOOP-OSX/26/artifact/patchprocess/branch-compile-root.txt
 |
| compile | 
https://builds.apache.org/job/Precommit-HADOOP-OSX/26/artifact/patchprocess/patch-compile-root.txt
 |
| cc | 
https://builds.apache.org/job/Precommit-HADOOP-OSX/26/artifact/patchprocess/patch-compile-root.txt
 |
| javac | 
https://builds.apache.org/job/Precommit-HADOOP-OSX/26/artifact/patchprocess/patch-compile-root.txt
 |
| unit | 
https://builds.apache.org/job/Precommit-HADOOP-OSX/26/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
| unit test logs |  
https://builds.apache.org/job/Precommit-HADOOP-OSX/26/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/Precommit-HADOOP-OSX/26/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/Precommit-HADOOP-OSX/26/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Minor cleanups needed for CMake 3.X
> ---
>
> Key: HADOOP-12727
> URL: https://issues.apache.org/jira/browse/HADOOP-12727
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: native
>Affects Versions: 2.7.1
>Reporter: Alan Burlison
>Assignee: Alan Burlison
>Priority: Minor
> Attachments: HADOOP-12727.001.patch
>
>
> On switching from CMake 2.8.6 to 3.3.2, a couple of minor issues popped up:
> \\
> \\
> * There's a syntax error in 
> {{hadoop-common-project/hadoop-common/src/CMakeLists.txt}} that generates a 
> warning in 3.X.
> * {{CMAKE_SHARED_LINKER_FLAGS}} is being incorrectly set in 
> {{hadoop-common-project/hadoop-common/HadoopCommon.cmake}}: despite the name, 
> it contains the flags passed to {{ar}}, not to the linker. 2.8.6 ignores the 
> incorrect flags; 3.3.2 doesn't, and building static libraries fails as a 
> result. See http://public.kitware.com/pipermail/cmake/2016-January/062447.html
> Patch to follow.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

[jira] [Commented] (HADOOP-12925) Checks for SPARC architecture need to include 64-bit SPARC

2016-05-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15301024#comment-15301024
 ] 

Hudson commented on HADOOP-12925:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #9862 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9862/])
HADOOP-12925. Checks for SPARC architecture need to include 64-bit SPARC (aw: 
rev 3c83cee118137e3d5bbe0c942e92e179d1234d5b)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/FastByteComparisons.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/NativeCrc32.java


> Checks for SPARC architecture need to include 64-bit SPARC
> --
>
> Key: HADOOP-12925
> URL: https://issues.apache.org/jira/browse/HADOOP-12925
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: conf
>Affects Versions: 2.7.2
> Environment: 64-bit SPARC
>Reporter: Alan Burlison
>Assignee: Alan Burlison
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-12925.001.patch
>
>
> FastByteComparisons.java and NativeCrc32.java check for the SPARC platform by 
> comparing the os.arch property against "sparc". That doesn't detect 64-bit 
> SPARC ("sparcv9"); the test should be "startsWith", not "equals".
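
For illustration, a minimal sketch of the check being described (the class and
method names here are illustrative, not the actual patch, which touches
FastByteComparisons.java and NativeCrc32.java):

{code:java}
// Hedged sketch of the os.arch check discussed above.
public class SparcCheckSketch {
  static boolean isSparc() {
    String arch = System.getProperty("os.arch", "");
    // startsWith covers both "sparc" and "sparcv9"; equals misses 64-bit SPARC.
    return arch.startsWith("sparc");
  }

  public static void main(String[] args) {
    System.out.println("SPARC platform: " + isSparc());
  }
}
{code}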



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13114) DistCp should have option to compress data on write

2016-05-25 Thread Suraj Nayak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suraj Nayak updated HADOOP-13114:
-
Component/s: tools/distcp

> DistCp should have option to compress data on write
> ---
>
> Key: HADOOP-13114
> URL: https://issues.apache.org/jira/browse/HADOOP-13114
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 2.8.0, 2.7.3, 3.0.0-alpha1
>Reporter: Suraj Nayak
>Assignee: Suraj Nayak
>Priority: Minor
>  Labels: distcp
> Attachments: HADOOP-13114-trunk_2016-05-07-1.patch, 
> HADOOP-13114-trunk_2016-05-08-1.patch, HADOOP-13114-trunk_2016-05-10-1.patch, 
> HADOOP-13114-trunk_2016-05-12-1.patch
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> The DistCp utility should have the capability to store data in a 
> user-specified compression format. This avoids a separate compression pass 
> after the transfer. Backup strategies to a different cluster also benefit by 
> saving one I/O operation to and from HDFS, thus saving resources, time, and 
> effort.
> * Create an option -compressOutput defaulting to 
> {{org.apache.hadoop.io.compress.BZip2Codec}}. 
> * Users will be able to change the codec with {{-D 
> mapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.compress.GzipCodec}}
> * If distcp compression is enabled, suffix the filenames with the codec's 
> default extension to indicate that the file is compressed, so users can tell 
> which codec was used to compress the data (see the sketch below).
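
A rough sketch of the codec wiring described above. This is a sketch only: it
assumes a plain MapReduce job rather than DistCp's actual driver, and
{{-compressOutput}} is the proposed flag, not an existing one.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.compress.BZip2Codec;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class CompressedOutputSketch {
  public static Job createJob() throws Exception {
    Configuration conf = new Configuration();
    // Users could override the codec exactly as the proposal suggests:
    // -D mapreduce.output.fileoutputformat.compress.codec=...GzipCodec
    Job job = Job.getInstance(conf, "distcp-compress-sketch");
    FileOutputFormat.setCompressOutput(job, true);
    // Default to BZip2Codec, per the proposed -compressOutput behavior.
    FileOutputFormat.setOutputCompressorClass(job, BZip2Codec.class);
    return job;
  }
}
{code}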



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12911) Upgrade Hadoop MiniKDC with Kerby

2016-05-25 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15301018#comment-15301018
 ] 

Kai Zheng commented on HADOOP-12911:


Hi [~jiajia],

Looks like the reported findbugs issue needs to be addressed. Could you check? 
Thanks.
{noformat}
org.apache.hadoop.minikdc.MiniKdc.stop() calls Thread.sleep() with a lock held 
At MiniKdc.java:lock held At MiniKdc.java:[line 345]
{noformat}

It's very close; it would be good to speed up so this can be included in the 
alpha1 release cut of Hadoop 3, per an off-line sync-up with [~andrew.wang].
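
For context, the usual fix for this findbugs pattern (SWL_SLEEP_WITH_LOCK_HELD)
is to move the sleep outside the synchronized region. A generic sketch, not the
actual MiniKdc code:

{code:java}
public class StopSketch {
  private final Object lock = new Object();
  private boolean stopped = false;

  // Findbugs flags Thread.sleep() while a lock is held: other threads are
  // blocked for the whole sleep. One common fix is to make the state change
  // under the lock, then sleep after releasing it.
  public void stop() throws InterruptedException {
    synchronized (lock) {
      stopped = true;
    }
    // Sleep outside the critical section; the lock is free while we wait.
    Thread.sleep(1000);
  }
}
{code}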

> Upgrade Hadoop MiniKDC with Kerby
> -
>
> Key: HADOOP-12911
> URL: https://issues.apache.org/jira/browse/HADOOP-12911
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Reporter: Jiajia Li
>Assignee: Jiajia Li
> Attachments: HADOOP-12911-v1.patch, HADOOP-12911-v2.patch, 
> HADOOP-12911-v3.patch, HADOOP-12911-v4.patch, HADOOP-12911-v5.patch, 
> HADOOP-12911-v6.patch, HADOOP-12911-v7.patch, HaDOOP-12911-v8.patch
>
>
> As discussed on the mailing list, we'd like to introduce Apache Kerby into 
> Hadoop. Initially it's good to start by upgrading Hadoop MiniKDC with Kerby 
> offerings. Apache Kerby (https://github.com/apache/directory-kerby), an 
> Apache Directory sub-project, is a Java Kerberos binding. It provides a 
> SimpleKDC server that borrows ideas from MiniKDC and implements all the 
> facilities existing in MiniKDC. Currently MiniKDC depends on the old Kerberos 
> implementation in the Directory Server project, but that implementation is no 
> longer maintained. The Directory community plans to replace it with Kerby. 
> MiniKDC can use Kerby's SimpleKDC directly to avoid depending on the full 
> Directory project. Kerby also provides nice identity backends, such as a 
> lightweight memory-based one and a very simple JSON one, for easy development 
> and test environments.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12782) Faster LDAP group name resolution with ActiveDirectory

2016-05-25 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15301012#comment-15301012
 ] 

Kai Zheng commented on HADOOP-12782:


Agree to include this in branch-2; thanks WeiChiu for the additional patch for 
it. Will check and commit it today.

> Faster LDAP group name resolution with ActiveDirectory
> --
>
> Key: HADOOP-12782
> URL: https://issues.apache.org/jira/browse/HADOOP-12782
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-12782.001.patch, HADOOP-12782.002.patch, 
> HADOOP-12782.003.patch, HADOOP-12782.004.patch, HADOOP-12782.005.patch, 
> HADOOP-12782.006.patch, HADOOP-12782.007.patch, HADOOP-12782.008.patch, 
> HADOOP-12782.009.patch, HADOOP-12782.branch-2.010.patch
>
>
> Typical LDAP group name resolution works well in common scenarios. 
> However, we have seen cases where a user is mapped to many groups (in an 
> extreme case, a user is mapped to more than 100 groups). The way it's 
> implemented now makes this case very slow when resolving groups from 
> ActiveDirectory.
> The current LDAP group resolution implementation sends two queries to an 
> ActiveDirectory server. The first query returns a user object, which contains 
> the DN (distinguished name). The second query looks for groups where the user 
> DN is a member. If a user is mapped to many groups, the second query returns 
> all group objects associated with the user, and is thus very slow.
> After studying a user object in ActiveDirectory, I found a user object 
> actually contains a "memberOf" field, which holds the DNs of all group 
> objects the user belongs to. Assuming that an organization has no recursive 
> group relations (that is, no case where user A is a member of group G1, and 
> group G1 is in turn a member of group G2), we can use this property to avoid 
> the second query, which can potentially run very slowly.
> I propose that we add a configuration option to enable this feature only for 
> users who want to reduce group resolution time and who do not have recursive 
> groups, so that existing behavior will not be broken.
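
A minimal JNDI sketch of the single-query idea described above. The connection
setup, base DN, and search filter are illustrative only; the real logic lives
in Hadoop's LDAP group mapping code.

{code:java}
import javax.naming.NamingEnumeration;
import javax.naming.directory.Attribute;
import javax.naming.directory.DirContext;
import javax.naming.directory.SearchControls;
import javax.naming.directory.SearchResult;

public class MemberOfSketch {
  // One query: fetch the user entry and read its memberOf attribute,
  // instead of issuing a second query for all groups containing the user DN.
  static void printGroups(DirContext ctx, String baseDn, String user)
      throws Exception {
    SearchControls controls = new SearchControls();
    controls.setSearchScope(SearchControls.SUBTREE_SCOPE);
    controls.setReturningAttributes(new String[] {"memberOf"});
    NamingEnumeration<SearchResult> results = ctx.search(
        baseDn, "(sAMAccountName={0})", new Object[] {user}, controls);
    if (results.hasMore()) {
      Attribute memberOf = results.next().getAttributes().get("memberOf");
      if (memberOf != null) {
        for (int i = 0; i < memberOf.size(); i++) {
          System.out.println(memberOf.get(i)); // each value is a group DN
        }
      }
    }
  }
}
{code}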



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13200) Seeking a better approach allowing to customize and configure erasure coders

2016-05-25 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15300997#comment-15300997
 ] 

Kai Zheng commented on HADOOP-13200:


Hi [~cmccabe] and [~andrew.wang],

Thanks a lot for your time and the nice off-line talk! It was great to meet you 
face to face.

As the refactoring work in HADOOP-13010 involves significant change, the work 
(the ISA-L coder) in HADOOP-11540 needs to change substantially to match. I'm 
currently working on it. As we discussed, it would be great to include the work 
in HADOOP-13010 and HADOOP-11540 in the first cut of Hadoop 3, so I will try to 
speed up to meet the general schedule. Meanwhile, let's continue the discussion 
about how to support the customization and configuration of erasure coders; I 
guess that work could be included in a subsequent Hadoop 3 release like alpha2? 
Kindly help clarify if I have anything mistaken. Thanks!

> Seeking a better approach allowing to customize and configure erasure coders
> 
>
> Key: HADOOP-13200
> URL: https://issues.apache.org/jira/browse/HADOOP-13200
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
>
> This is a follow-on task for HADOOP-13010, as discussed over there. There may 
> be a better approach to customizing and configuring erasure coders than the 
> current raw coder factory, as [~cmccabe] suggested. Will copy the relevant 
> comments here to continue the discussion.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12925) Checks for SPARC architecture need to include 64-bit SPARC

2016-05-25 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12925:
--
   Resolution: Fixed
Fix Version/s: 3.0.0-alpha1
   Status: Resolved  (was: Patch Available)

+1 committed

thanks!

> Checks for SPARC architecture need to include 64-bit SPARC
> --
>
> Key: HADOOP-12925
> URL: https://issues.apache.org/jira/browse/HADOOP-12925
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: conf
>Affects Versions: 2.7.2
> Environment: 64-bit SPARC
>Reporter: Alan Burlison
>Assignee: Alan Burlison
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-12925.001.patch
>
>
> FastByteComparisons.java and NativeCrc32.java check for the SPARC platform by 
> comparing the os.arch property against "sparc". That doesn't detect 64-bit 
> SPARC ("sparcv9"); the test should be "startsWith", not "equals".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13200) Seeking a better approach allowing to customize and configure erasure coders

2016-05-25 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15300964#comment-15300964
 ] 

Kai Zheng commented on HADOOP-13200:


Copied partly from [here | 
https://issues.apache.org/jira/browse/HADOOP-13010?focusedCommentId=15298587=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15298587]
 by [~cmccabe]:
{quote}
I think these are really configuration questions, not questions about how the 
code should be structured. What does the user actually need to configure? If 
the user just configures a coder implementation, does that fully determine the 
codec which is being used? If so, we should have only one configuration knob-- 
coder. If a coder could be used for multiple codecs, then we need to have at 
least two knobs that the user can configure-- one for codec, and another for 
coder. Once we know what the configuration knobs are, we probably only need one 
or two functions to create the objects we need based on a Configuration object, 
not a whole mess of factory objects.
{quote}
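
A minimal sketch of what such a Configuration-driven creation function could
look like. The key pattern is borrowed from the comment quoted further below;
the helper names are hypothetical, for discussion only.

{code:java}
import org.apache.hadoop.conf.Configuration;

public class CoderConfigSketch {
  // Hypothetical per-codec knob, following the abbreviated pattern from the
  // discussion: o.a.h.io.erasurecode.codec.(codec-name).rawcoder
  static String coderKey(String codecName) {
    return "o.a.h.io.erasurecode.codec." + codecName + ".rawcoder";
  }

  // One function creating what we need from a Configuration object,
  // instead of a whole mess of factory objects.
  static Class<?> loadCoderClass(Configuration conf, String codecName)
      throws ClassNotFoundException {
    String className = conf.get(coderKey(codecName));
    return conf.getClassByName(className);
  }
}
{code}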

> Seeking a better approach allowing to customize and configure erasure coders
> 
>
> Key: HADOOP-13200
> URL: https://issues.apache.org/jira/browse/HADOOP-13200
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
>
> This is a follow-on task for HADOOP-13010, as discussed over there. There may 
> be a better approach to customizing and configuring erasure coders than the 
> current raw coder factory, as [~cmccabe] suggested. Will copy the relevant 
> comments here to continue the discussion.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13200) Seeking a better approach allowing to customize and configure erasure coders

2016-05-25 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15300958#comment-15300958
 ] 

Kai Zheng commented on HADOOP-13200:


Copied from [here | 
https://issues.apache.org/jira/browse/HADOOP-13010?focusedCommentId=15289544=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15289544]
 by [~drankye]:
{quote}
Hi Colin,

Thanks for the comments. About the factories, I need to clarify the real 
problem in detail, and hope this works, since the f2f discussion didn't go 
into details due to time constraints.

We may have the following codecs in the 1st level:
rs-legacy, rs-default (both belonging to RS)
xor,
hh or hitchhiker,
lrc,
...

Each codec may use one or more raw coders, and each such coder may have 
different implementations. For example, for the rs-default codec, we have two 
coder implementations (the pure Java one and the ISA-L one). Users may add 
their own coder implementation for a codec, perhaps for better performance.

So that's why I would have a configuration key like this:
o.a.h.io.erasurecode.codec.(codec-name).rawcoder: (whatever value to be used to 
create or load the coder).

Currently we configure a factory to create the encoder and decoder for a coder 
implementation. I agree there could be a better option here; while discussing 
this in detail with Andrew yesterday in the SF office, I wondered whether we 
could achieve the same effect, avoiding the factories, by using the Java 
service loader.

First, we can add codec-name and coder-name to the raw coder, so each coder 
will have a codec-name and coder-name when it's created.

Then we have the built-in coders of fixed codec-name and coder-name. Customized 
coders will be loaded via service loader.

Eventually we will have all the raw erasure coders loaded and created; then we 
can set up mappings between codec-name and coder-name, and between coder-name 
and the coder class or instance.

Does this sound good to you? If it works, then we might do this in a follow-on 
task?

Thanks again!
{quote}
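
A rough ServiceLoader sketch of the idea quoted above. The SPI interface and
the registry here are hypothetical illustrations, not existing classes.

{code:java}
import java.util.HashMap;
import java.util.Map;
import java.util.ServiceLoader;

public class CoderRegistrySketch {
  // Hypothetical SPI: each raw coder advertises its codec-name and coder-name.
  public interface NamedRawCoderFactory {
    String codecName();  // e.g. "rs-default"
    String coderName();  // e.g. a user-chosen implementation name
  }

  // Built-in coders have fixed names; customized coders are discovered from
  // META-INF/services entries via the Java service loader.
  static Map<String, NamedRawCoderFactory> loadCoders() {
    Map<String, NamedRawCoderFactory> byCoderName = new HashMap<>();
    for (NamedRawCoderFactory f
        : ServiceLoader.load(NamedRawCoderFactory.class)) {
      // Map coder-name to its factory; a codec-name to coder-name mapping
      // can be layered on top of this.
      byCoderName.put(f.coderName(), f);
    }
    return byCoderName;
  }
}
{code}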

> Seeking a better approach allowing to customize and configure erasure coders
> 
>
> Key: HADOOP-13200
> URL: https://issues.apache.org/jira/browse/HADOOP-13200
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
>
> This is a follow-on task for HADOOP-13010, as discussed over there. There may 
> be a better approach to customizing and configuring erasure coders than the 
> current raw coder factory, as [~cmccabe] suggested. Will copy the relevant 
> comments here to continue the discussion.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12847) hadoop daemonlog should support https and SPNEGO for Kerberized cluster

2016-05-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15300845#comment-15300845
 ] 

Hadoop QA commented on HADOOP-12847:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 
8s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 31s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
55s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 51s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 51s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
23s {color} | {color:green} hadoop-common-project/hadoop-common: The patch 
generated 0 new + 9 unchanged - 13 fixed = 9 total (was 22) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 54s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
38s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 41s {color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
21s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 43m 16s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.net.TestDNS |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12806201/HADOOP-12847.009.patch
 |
| JIRA Issue | HADOOP-12847 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 8f4dd4032486 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 77d5ce9 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9583/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HADOOP-Build/9583/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9583/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9583/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> hadoop daemonlog should support https and SPNEGO for Kerberized cluster
> ---
>
> 

[jira] [Commented] (HADOOP-12537) s3a: Add flag for session ID to allow Amazon STS temporary credentials

2016-05-25 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15300872#comment-15300872
 ] 

Sean Mackrory commented on HADOOP-12537:


Okay, I figured it out: I had forgotten about contract-test-options.xml, and I 
still had a different bucket configured there, so it wasn't consistently using 
a bucket in the region I thought. The test has now passed for me in all 
regions, and it would appear you can use whatever STS endpoint you want; 
Frankfurt and Seoul are just a bit pickier about bucket location.

> s3a: Add flag for session ID to allow Amazon STS temporary credentials
> --
>
> Key: HADOOP-12537
> URL: https://issues.apache.org/jira/browse/HADOOP-12537
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Affects Versions: 2.7.1
>Reporter: Sean Mackrory
>Priority: Minor
> Attachments: HADOOP-12537.001.patch, HADOOP-12537.002.patch, 
> HADOOP-12537.003.patch, HADOOP-12537.004.patch, HADOOP-12537.diff, 
> HADOOP-12537.diff
>
>
> Amazon STS allows you to issue temporary access key id / secret key pairs for 
> a user or role. However, using these credentials also requires specifying a 
> session ID. There is currently no such configuration property, nor the 
> required code to pass it through to the API (at least not that I can find), 
> in any of the S3 connectors.
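
For background, a sketch of how such temporary credentials are issued with the
AWS SDK for Java. The SDK calls are standard; how s3a would consume the three
resulting values is exactly the missing piece this issue describes.

{code:java}
import com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClient;
import com.amazonaws.services.securitytoken.model.Credentials;
import com.amazonaws.services.securitytoken.model.GetSessionTokenRequest;

public class StsSketch {
  public static void main(String[] args) {
    // Issue a temporary access key id / secret key pair plus session token.
    AWSSecurityTokenServiceClient sts = new AWSSecurityTokenServiceClient();
    Credentials creds = sts.getSessionToken(
        new GetSessionTokenRequest().withDurationSeconds(3600))
        .getCredentials();
    // All three values are needed by the client; the session token is the
    // piece the S3 connectors currently have no property for.
    System.out.println(creds.getAccessKeyId());
    System.out.println(creds.getSecretAccessKey());
    System.out.println(creds.getSessionToken());
  }
}
{code}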



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12847) hadoop daemonlog should support https and SPNEGO for Kerberized cluster

2016-05-25 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15300871#comment-15300871
 ] 

Wei-Chiu Chuang commented on HADOOP-12847:
--

The failed unit test is unrelated.

> hadoop daemonlog should support https and SPNEGO for Kerberized cluster
> ---
>
> Key: HADOOP-12847
> URL: https://issues.apache.org/jira/browse/HADOOP-12847
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-12847.001.patch, HADOOP-12847.002.patch, 
> HADOOP-12847.003.patch, HADOOP-12847.004.patch, HADOOP-12847.005.patch, 
> HADOOP-12847.006.patch, HADOOP-12847.008.patch, HADOOP-12847.009.patch
>
>
> {{hadoop daemonlog}} is a simple yet useful tool for debugging.
> However, it does not support https, nor does it support a Kerberized Hadoop 
> cluster.
> Using {{AuthenticatedURL}}, it will be able to support SPNEGO negotiation 
> with a Kerberized NameNode web UI. It will also fall back to simple 
> authentication if the cluster is not Kerberized.
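
A minimal sketch of the {{AuthenticatedURL}} usage described above. The host,
port, and log name are illustrative; {{/logLevel}} is the endpoint the
daemonlog tool talks to.

{code:java}
import java.net.HttpURLConnection;
import java.net.URL;
import org.apache.hadoop.security.authentication.client.AuthenticatedURL;

public class DaemonlogSpnegoSketch {
  public static void main(String[] args) throws Exception {
    // AuthenticatedURL performs SPNEGO negotiation when the server is
    // Kerberized and falls back to simple auth otherwise.
    URL url = new URL(
        "https://namenode.example.com:9871/logLevel?log=org.example.Foo");
    AuthenticatedURL.Token token = new AuthenticatedURL.Token();
    HttpURLConnection conn = new AuthenticatedURL().openConnection(url, token);
    System.out.println("HTTP " + conn.getResponseCode());
  }
}
{code}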



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13137) TraceAdmin should support Kerberized cluster

2016-05-25 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-13137:
-
Attachment: HADOOP-13137.005.patch

Thanks [~steve_l], I've made a new patch (v05) based on your comments:
# Changed the argument order in assertEquals.
# Removed {{final MiniDFSCluster clusterRef}}.

> TraceAdmin should support Kerberized cluster
> 
>
> Key: HADOOP-13137
> URL: https://issues.apache.org/jira/browse/HADOOP-13137
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tracing
>Affects Versions: 2.6.0, 3.0.0-alpha1
> Environment: CDH5.5.1 cluster with Kerberos
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>  Labels: Kerberos
> Attachments: HADOOP-13137.001.patch, HADOOP-13137.002.patch, 
> HADOOP-13137.003.patch, HADOOP-13137.004.patch, HADOOP-13137.005.patch
>
>
> When I ran the {{hadoop trace}} command against a Kerberized NameNode, it 
> failed with the following error:
> [hdfs@weichiu-encryption-1 root]$ hadoop trace -list -host 
> weichiu-encryption-1.vpc.cloudera.com:8022
> 16/05/12 00:02:13 WARN ipc.Client: Exception encountered while connecting to 
> the server : 
> java.lang.IllegalArgumentException: Failed to specify server's Kerberos 
> principal name
> 16/05/12 00:02:13 WARN security.UserGroupInformation: 
> PriviledgedActionException as:h...@vpc.cloudera.com (auth:KERBEROS) 
> cause:java.io.IOException: java.lang.IllegalArgumentException: Failed to 
> specify server's Kerberos principal name
> Exception in thread "main" java.io.IOException: Failed on local exception: 
> java.io.IOException: java.lang.IllegalArgumentException: Failed to specify 
> server's Kerberos principal name; Host Details : local host is: 
> "weichiu-encryption-1.vpc.cloudera.com/172.26.8.185"; destination host is: 
> "weichiu-encryption-1.vpc.cloudera.com":8022;
>   at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:772)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1470)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1403)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
>   at com.sun.proxy.$Proxy11.listSpanReceivers(Unknown Source)
>   at 
> org.apache.hadoop.tracing.TraceAdminProtocolTranslatorPB.listSpanReceivers(TraceAdminProtocolTranslatorPB.java:58)
>   at 
> org.apache.hadoop.tracing.TraceAdmin.listSpanReceivers(TraceAdmin.java:68)
>   at org.apache.hadoop.tracing.TraceAdmin.run(TraceAdmin.java:177)
>   at org.apache.hadoop.tracing.TraceAdmin.main(TraceAdmin.java:195)
> Caused by: java.io.IOException: java.lang.IllegalArgumentException: Failed to 
> specify server's Kerberos principal name
>   at org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:682)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
>   at 
> org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:645)
>   at 
> org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:733)
>   at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:370)
>   at org.apache.hadoop.ipc.Client.getConnection(Client.java:1519)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1442)
>   ... 7 more
> Caused by: java.lang.IllegalArgumentException: Failed to specify server's 
> Kerberos principal name
>   at 
> org.apache.hadoop.security.SaslRpcClient.getServerPrincipal(SaslRpcClient.java:322)
>   at 
> org.apache.hadoop.security.SaslRpcClient.createSaslClient(SaslRpcClient.java:231)
>   at 
> org.apache.hadoop.security.SaslRpcClient.selectSaslClient(SaslRpcClient.java:159)
>   at 
> org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:396)
>   at 
> org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:555)
>   at org.apache.hadoop.ipc.Client$Connection.access$1800(Client.java:370)
>   at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:725)
>   at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:721)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
>   at 
> org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:720)
>   ... 10 more
> It is failing because {{TraceAdmin}} does not set up the property 
> {{CommonConfigurationKeys.HADOOP_SECURITY_SERVICE_USER_NAME_KEY}}
> Fixing it may require some restructuring, as the NameNode principal 
> 
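
A minimal sketch of setting the property named above; the principal value is
illustrative.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CommonConfigurationKeys;

public class TraceAdminPrincipalSketch {
  static Configuration withServerPrincipal(Configuration conf) {
    // Without this property the SASL client cannot determine the server's
    // Kerberos principal, producing the failure shown above. The principal
    // value here is a placeholder.
    conf.set(CommonConfigurationKeys.HADOOP_SECURITY_SERVICE_USER_NAME_KEY,
        "hdfs/_HOST@VPC.CLOUDERA.COM");
    return conf;
  }
}
{code}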

[jira] [Commented] (HADOOP-12579) Deprecate and remove WriteableRPCEngine

2016-05-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15300839#comment-15300839
 ] 

Hudson commented on HADOOP-12579:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #9860 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9860/])
HADOOP-12579. Deprecate and remove WriteableRPCEngine. Contributed by 
(kai.zheng: rev a6c79f92d503c664f2d109355b719124f29a30e5)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPCWaitForProxy.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ProtoUtil.java
* hadoop-common-project/hadoop-common/src/test/proto/test.proto
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestDoAsEffectiveUser.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/security/TestClientProtocolWithDelegationToken.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RPC.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/WritableRpcEngine.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/RPCCallBenchmark.java
* hadoop-common-project/hadoop-common/src/main/proto/RpcHeader.proto
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRpcBase.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPCCallBenchmark.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/server/HSAdminServer.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestSaslRPC.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/ProtobufRpcEngine.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPCCompatibility.java
* hadoop-common-project/hadoop-common/src/test/proto/test_rpc_service.proto
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUserGroupInformation.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestMultipleProtocolServer.java


> Deprecate and remove WriteableRPCEngine
> ---
>
> Key: HADOOP-12579
> URL: https://issues.apache.org/jira/browse/HADOOP-12579
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Assignee: Kai Zheng
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-12579-v1.patch, HADOOP-12579-v10.patch, 
> HADOOP-12579-v11.patch, HADOOP-12579-v3.patch, HADOOP-12579-v4.patch, 
> HADOOP-12579-v5.patch, HADOOP-12579-v6.patch, HADOOP-12579-v7.patch, 
> HADOOP-12579-v8.patch, HADOOP-12579-v9.patch
>
>
> The {{WriteableRPCEngine}} depends on Java's serialization mechanisms for RPC 
> requests. Without proper checks, it has been shown that this can lead to 
> security vulnerabilities such as remote code execution (e.g., COLLECTIONS-580, 
> HADOOP-12577).
> The implementation has now migrated from {{WriteableRPCEngine}} to 
> {{ProtobufRPCEngine}}. This JIRA proposes to deprecate {{WriteableRPCEngine}} 
> in branch-2 and to remove it in trunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12579) Deprecate and remove WriteableRPCEngine

2016-05-25 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-12579:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha1
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks Haohui for filing the issue and for the review. 
Thanks Colin, Chris and Steve for the discussion.

> Deprecate and remove WriteableRPCEngine
> ---
>
> Key: HADOOP-12579
> URL: https://issues.apache.org/jira/browse/HADOOP-12579
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Assignee: Kai Zheng
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-12579-v1.patch, HADOOP-12579-v10.patch, 
> HADOOP-12579-v11.patch, HADOOP-12579-v3.patch, HADOOP-12579-v4.patch, 
> HADOOP-12579-v5.patch, HADOOP-12579-v6.patch, HADOOP-12579-v7.patch, 
> HADOOP-12579-v8.patch, HADOOP-12579-v9.patch
>
>
> The {{WriteableRPCEngine}} depends on Java's serialization mechanisms for RPC 
> requests. Without proper checks, it has been shown that this can lead to 
> security vulnerabilities such as remote code execution (e.g., COLLECTIONS-580, 
> HADOOP-12577).
> The implementation has now migrated from {{WriteableRPCEngine}} to 
> {{ProtobufRPCEngine}}. This JIRA proposes to deprecate {{WriteableRPCEngine}} 
> in branch-2 and to remove it in trunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12488) DomainSocket: Solaris does not support timeouts on AF_UNIX sockets

2016-05-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15300805#comment-15300805
 ] 

Hadoop QA commented on HADOOP-12488:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
50s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 27s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 5s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 6s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 6s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 27s 
{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 1m 41s 
{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 1m 41s {color} | 
{color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 1m 41s {color} 
| {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 30s 
{color} | {color:red} root: The patch generated 3 new + 145 unchanged - 4 fixed 
= 148 total (was 149) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 27s 
{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
21s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 3 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 29s 
{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 22s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 9m 47s {color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 36s {color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 19s 
{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 46m 25s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.metrics2.impl.TestGangliaMetrics |
|   | hadoop.security.ssl.TestReloadingX509TrustManager |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12793572/HADOOP-12488.002.patch
 |
| JIRA Issue | HADOOP-12488 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  javadoc  
mvninstall  findbugs  checkstyle  |
| uname | Linux 92ba6ca53ce8 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 77d5ce9 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9582/artifact/patchprocess/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| compile | 

[jira] [Updated] (HADOOP-12847) hadoop daemonlog should support https and SPNEGO for Kerberized cluster

2016-05-25 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12847:
-
Attachment: HADOOP-12847.009.patch

v09: 
# Changed the protocol option to "[-protocol (http|https)]" instead of 
"[-http|-https]".
# Changed the argument parsing method.
# Updated tests and documentation accordingly.

> hadoop daemonlog should support https and SPNEGO for Kerberized cluster
> ---
>
> Key: HADOOP-12847
> URL: https://issues.apache.org/jira/browse/HADOOP-12847
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-12847.001.patch, HADOOP-12847.002.patch, 
> HADOOP-12847.003.patch, HADOOP-12847.004.patch, HADOOP-12847.005.patch, 
> HADOOP-12847.006.patch, HADOOP-12847.008.patch, HADOOP-12847.009.patch
>
>
> {{hadoop daemonlog}} is a simple yet useful tool for debugging.
> However, it does not support https, nor does it support a Kerberized Hadoop 
> cluster.
> Using {{AuthenticatedURL}}, it will be able to support SPNEGO negotiation 
> with a Kerberized NameNode web UI. It will also fall back to simple 
> authentication if the cluster is not Kerberized.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-10048) LocalDirAllocator should avoid holding locks while accessing the filesystem

2016-05-25 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15300718#comment-15300718
 ] 

Jason Lowe commented on HADOOP-10048:
-

bq. Given the ctx could be a local context, I think we want to update it to the 
currentContext, which can be accessed immediately - something like: 
currentContext.get().dirNumLastAccessed = dirNum. Isn't it?

No, that could trigger an array bounds exception if we update it to a value 
that is past the number of directories in the other, unrelated context.  Also 
we don't need to worry about this particular race.  When the new context is set 
it will compute a new random index for the next directory, so the index will 
still be random even if a call using the old context missed updating the new 
context.

bq. Shouldn't we check whether the dir exists first, then mkdir if not?

I was just preserving the existing behavior since it's unrelated to this 
change.  In practice the raw local fs mkdirs already does the exists check 
anyway.  Also switching them could lead to a weird error later if the local 
path ended up being an existing file.  localFS.exists will return true and we 
won't try the mkdirs, but mkdirs throws a useful error message explaining it's 
trying to create a directory where a file already exists.  Since this is not 
related to this change and could degrade the error diagnostics in some corner 
cases, I'm tempted to leave it as-is.  If we feel it's important to fix it then 
we can tackle it in a followup JIRA where it does the full file stat first, 
checks the corner cases, then calls mkdirs if necessary.
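
For illustration, the two orderings discussed above as a sketch; the helper
names are mine, not the actual LocalDirAllocator code.

{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class MkdirsOrderingSketch {
  // exists-first: if 'dir' is actually an existing *file*, exists() returns
  // true, mkdirs() is skipped, and the real problem only surfaces later with
  // a less helpful error.
  static void existsThenMkdirs(FileSystem localFS, Path dir)
      throws IOException {
    if (!localFS.exists(dir)) {
      localFS.mkdirs(dir);
    }
  }

  // mkdirs-first (the behavior preserved by the patch): the raw local mkdirs
  // does its own exists check and, per the discussion above, fails with a
  // clear message if 'dir' is an existing file.
  static void mkdirsDirectly(FileSystem localFS, Path dir) throws IOException {
    if (!localFS.mkdirs(dir)) {
      throw new IOException("could not create " + dir);
    }
  }
}
{code}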


> LocalDirAllocator should avoid holding locks while accessing the filesystem
> ---
>
> Key: HADOOP-10048
> URL: https://issues.apache.org/jira/browse/HADOOP-10048
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.3.0
>Reporter: Jason Lowe
>Assignee: Jason Lowe
> Attachments: HADOOP-10048.003.patch, HADOOP-10048.004.patch, 
> HADOOP-10048.005.patch, HADOOP-10048.patch, HADOOP-10048.trunk.patch
>
>
> As noted in MAPREDUCE-5584 and HADOOP-7016, LocalDirAllocator can be a 
> bottleneck for multithreaded setups like the ShuffleHandler.  We should 
> consider moving to a lockless design or minimizing the critical sections to a 
> very small amount of time that does not involve I/O operations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-12488) DomainSocket: Solaris does not support timeouts on AF_UNIX sockets

2016-05-25 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15300713#comment-15300713
 ] 

Allen Wittenauer edited comment on HADOOP-12488 at 5/25/16 7:39 PM:


hadoop-common-project/hadoop-common/src/check_unix_sock_timeouts.c really does 
need an ASF license header.  Also, is it possible to move this into 
src/main/native?


was (Author: aw):
hadoop-common-project/hadoop-common/src/check_unix_sock_timeouts.c really does 
need an ASF license header.

> DomainSocket: Solaris does not support timeouts on AF_UNIX sockets
> --
>
> Key: HADOOP-12488
> URL: https://issues.apache.org/jira/browse/HADOOP-12488
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: net
>Affects Versions: 2.7.1
> Environment: Solaris
>Reporter: Alan Burlison
>Assignee: Alan Burlison
> Attachments: HADOOP-12488.001.patch, HADOOP-12488.002.patch
>
>
> From the hadoop-common-dev mailing list:
> http://mail-archives.apache.org/mod_mbox/hadoop-common-dev/201509.mbox/%3c560b99f6.6010...@oracle.com%3E
> http://mail-archives.apache.org/mod_mbox/hadoop-common-dev/201510.mbox/%3c560ea6bf.2070...@oracle.com%3E
> {quote}
> Now that the Hadoop native code builds on Solaris I've been chipping 
> away at all the test failures. About 50% of the failures involve 
> DomainSocket, either directly or indirectly. That seems to be mainly 
> because the tests use DomainSocket to do single-node testing, whereas in 
> production it seems that DomainSocket is less commonly used 
> (https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-hdfs/ShortCircuitLocalReads.html).
> The particular problem on Solaris is that socket read/write timeouts 
> (the SO_SNDTIMEO and SO_RCVTIMEO socket options) are not supported for 
> UNIX domain (PF_UNIX) sockets. Those options are however supported for 
> PF_INET sockets. That's because the socket implementation on Solaris is 
> split roughly into two parts, for inet sockets and for STREAMS sockets, 
> and the STREAMS implementation lacks support for SO_SNDTIMEO and 
> SO_RCVTIMEO. As an aside, performance of sockets that use loopback or 
> the host's own IP is slightly better than that of UNIX domain sockets on 
> Solaris.
> I'm investigating getting timeouts supported for PF_UNIX sockets added 
> to Solaris, but in the meantime I'm also looking how this might be 
> worked around in Hadoop. One way would be to implement timeouts by 
> wrapping all the read/write/send/recv etc calls in DomainSocket.c with 
> either poll() or select().
> The basic idea is to add two new fields to DomainSocket.c to hold the 
> read/write timeouts. On platforms that support SO_SNDTIMEO and 
> SO_RCVTIMEO these would be unused as setsockopt() would be used to set 
> the socket timeouts. On platforms such as Solaris the JNI code would use 
> the values to implement the timeouts appropriately.
> To prevent the code in DomainSocket.c becoming a #ifdef hairball, the 
> current socket IO function calls such as accept(), send(), read() etc 
> would be replaced with macros such as HD_ACCEPT. On platforms that 
> provide timeouts these would just expand to the normal socket functions; 
> on platforms that don't support timeouts they would expand to wrappers 
> that implement timeouts for them.
> The only caveats are that all code that does anything to a PF_UNIX 
> socket would *always* have to do so via DomainSocket. As far as I can 
> tell that's not an issue, but it would have to be borne in mind if any 
> changes were made in this area.
> Before I set about doing this, does the approach seem reasonable?
> {quote}
> {quote}
> Unfortunately it's not as simple as I'd hoped. For some reason I don't 
> really understand, nearly all the JNI methods are declared as static and 
> therefore don't get a "this" pointer and as a consequence all the class 
> data members that are needed by the JNI code have to be passed in as 
> parameters. That also means it's not possible to store the timeouts in 
> the DomainSocket fields from within the JNI code. Most of the JNI 
> methods should be instance methods rather than static ones, but making 
> that change would require some significant surgery to DomainSocket.
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11127) Improve versioning and compatibility support in native library for downstream hadoop-common users.

2016-05-25 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15300702#comment-15300702
 ] 

Allen Wittenauer commented on HADOOP-11127:
---

Ping.

It would be good to get this into 3.x.

> Improve versioning and compatibility support in native library for downstream 
> hadoop-common users.
> --
>
> Key: HADOOP-11127
> URL: https://issues.apache.org/jira/browse/HADOOP-11127
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: native
>Reporter: Chris Nauroth
>Assignee: Alan Burlison
> Attachments: HADOOP-11064.003.patch, proposal.01.txt
>
>
> There is no compatibility policy enforced on the JNI function signatures 
> implemented in the native library.  This library typically is deployed to all 
> nodes in a cluster, built from a specific source code version.  However, 
> downstream applications that want to run in that cluster might choose to 
> bundle a hadoop-common jar at a different version.  Since there is no 
> compatibility policy, this can cause link errors at runtime when the native 
> function signatures expected by hadoop-common.jar do not exist in 
> libhadoop.so/hadoop.dll.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12488) DomainSocket: Solaris does not support timeouts on AF_UNIX sockets

2016-05-25 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15300713#comment-15300713
 ] 

Allen Wittenauer commented on HADOOP-12488:
---

hadoop-common-project/hadoop-common/src/check_unix_sock_timeouts.c really does 
need an ASF license header.

> DomainSocket: Solaris does not support timeouts on AF_UNIX sockets
> --
>
> Key: HADOOP-12488
> URL: https://issues.apache.org/jira/browse/HADOOP-12488
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: net
>Affects Versions: 2.7.1
> Environment: Solaris
>Reporter: Alan Burlison
>Assignee: Alan Burlison
> Attachments: HADOOP-12488.001.patch, HADOOP-12488.002.patch
>
>
> From the hadoop-common-dev mailing list:
> http://mail-archives.apache.org/mod_mbox/hadoop-common-dev/201509.mbox/%3c560b99f6.6010...@oracle.com%3E
> http://mail-archives.apache.org/mod_mbox/hadoop-common-dev/201510.mbox/%3c560ea6bf.2070...@oracle.com%3E
> {quote}
> Now that the Hadoop native code builds on Solaris I've been chipping 
> away at all the test failures. About 50% of the failures involve 
> DomainSocket, either directly or indirectly. That seems to be mainly 
> because the tests use DomainSocket to do single-node testing, whereas in 
> production it seems that DomainSocket is less commonly used 
> (https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-hdfs/ShortCircuitLocalReads.html).
> The particular problem on Solaris is that socket read/write timeouts 
> (the SO_SNDTIMEO and SO_RCVTIMEO socket options) are not supported for 
> UNIX domain (PF_UNIX) sockets. Those options are however supported for 
> PF_INET sockets. That's because the socket implementation on Solaris is 
> split roughly into two parts, for inet sockets and for STREAMS sockets, 
> and the STREAMS implementation lacks support for SO_SNDTIMEO and 
> SO_RCVTIMEO. As an aside, performance of sockets that use loopback or 
> the host's own IP is slightly better than that of UNIX domain sockets on 
> Solaris.
> I'm investigating getting timeouts supported for PF_UNIX sockets added 
> to Solaris, but in the meantime I'm also looking how this might be 
> worked around in Hadoop. One way would be to implement timeouts by 
> wrapping all the read/write/send/recv etc calls in DomainSocket.c with 
> either poll() or select().
> The basic idea is to add two new fields to DomainSocket.c to hold the 
> read/write timeouts. On platforms that support SO_SNDTIMEO and 
> SO_RCVTIMEO these would be unused as setsockopt() would be used to set 
> the socket timeouts. On platforms such as Solaris the JNI code would use 
> the values to implement the timeouts appropriately.
> To prevent the code in DomainSocket.c becoming a #ifdef hairball, the 
> current socket IO function calls such as accept(), send(), read() etc 
> would be replaced with macros such as HD_ACCEPT. On platforms that 
> provide timeouts these would just expand to the normal socket functions; 
> on platforms that don't support timeouts they would expand to wrappers 
> that implement timeouts for them.
> The only caveats are that all code that does anything to a PF_UNIX 
> socket would *always* have to do so via DomainSocket. As far as I can 
> tell that's not an issue, but it would have to be borne in mind if any 
> changes were made in this area.
> Before I set about doing this, does the approach seem reasonable?
> {quote}
> {quote}
> Unfortunately it's not as simple as I'd hoped. For some reason I don't 
> really understand, nearly all the JNI methods are declared as static and 
> therefore don't get a "this" pointer and as a consequence all the class 
> data members that are needed by the JNI code have to be passed in as 
> parameters. That also means it's not possible to store the timeouts in 
> the DomainSocket fields from within the JNI code. Most of the JNI 
> methods should be instance methods rather than static ones, but making 
> that change would require some significant surgery to DomainSocket.
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11820) aw jira testing, ignore

2016-05-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15300612#comment-15300612
 ] 

Hadoop QA commented on HADOOP-11820:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 00s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
00s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} machack {color} | {color:blue} 0m 00s 
{color} | {color:blue} Applied YARN-5121 so that OS X works {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 08s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
51s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 35s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 14m 13s {color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 39m 32s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.TestSymlinkLocalFSFileContext |
|   | hadoop.fs.TestSymlinkLocalFSFileSystem |
|   | hadoop.net.unix.TestDomainSocket |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12806168/socket.patch |
| JIRA Issue | HADOOP-11820 |
| Optional Tests |  compile  javac  mvninstall  unit  |
| uname | Darwin Gavins-Mac-mini.local 13.2.0 Darwin Kernel Version 13.2.0: Thu 
Apr 17 23:03:13 PDT 2014; root:xnu-2422.100.13~1/RELEASE_X86_64 x86_64 |
| Build tool | maven |
| Personality | 
/Users/jenkins/jenkins-home/workspace/Precommit-HADOOP-OSX/patchprocess/apache-yetus-8bf2ab7/precommit/personality/hadoop.sh
 |
| git revision | trunk / 77d5ce9 |
| Default Java | 1.8.0_74 |
| unit | 
https://builds.apache.org/job/Precommit-HADOOP-OSX/25/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
| unit test logs |  
https://builds.apache.org/job/Precommit-HADOOP-OSX/25/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/Precommit-HADOOP-OSX/25/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/Precommit-HADOOP-OSX/25/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> aw jira testing, ignore
> ---
>
> Key: HADOOP-11820
> URL: https://issues.apache.org/jira/browse/HADOOP-11820
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
> Attachments: YARN-5132-v1.patch, socket.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11820) aw jira testing, ignore

2016-05-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15300613#comment-15300613
 ] 

Hadoop QA commented on HADOOP-11820:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
4s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 36s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
26s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
38s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 21s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 21s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
34s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 40s 
{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
21s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 37m 39s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12806168/socket.patch |
| JIRA Issue | HADOOP-11820 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 827e14d8dd9a 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 77d5ce9 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9581/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9581/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> aw jira testing, ignore
> ---
>
> Key: HADOOP-11820
> URL: https://issues.apache.org/jira/browse/HADOOP-11820
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
> Attachments: YARN-5132-v1.patch, socket.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org

[jira] [Commented] (HADOOP-10048) LocalDirAllocator should avoid holding locks while accessing the filesystem

2016-05-25 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15300600#comment-15300600
 ] 

Junping Du commented on HADOOP-10048:
-

Thanks [~jlowe] for updating the patch!
The 005 patch looks good overall. Just two minor issues:
{noformat}
+  ctx.dirNumLastAccessed = dirNum;
{noformat}
Given that ctx could be a local context, I think we want to update the 
currentContext, which can be accessed immediately - something like: 
currentContext.get().dirNumLastAccessed = dirNum. Isn't it?

{noformat}
+if(ctx.localFS.mkdirs(tmpDir)|| ctx.localFS.exists(tmpDir)) {
{noformat}
Shouldn't we check whether the dir exists first, then mkdir only if it doesn't?
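
A minimal sketch of that ordering, reusing the ctx and tmpDir names from the 
patch:

{noformat}
// Hedged sketch of the suggested ordering, not the committed patch:
// probe for the directory first, and only fall back to mkdirs() when
// it is absent, so the common case skips the mkdirs() call.
if (ctx.localFS.exists(tmpDir) || ctx.localFS.mkdirs(tmpDir)) {
  // directory is usable
}
{noformat}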


> LocalDirAllocator should avoid holding locks while accessing the filesystem
> ---
>
> Key: HADOOP-10048
> URL: https://issues.apache.org/jira/browse/HADOOP-10048
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.3.0
>Reporter: Jason Lowe
>Assignee: Jason Lowe
> Attachments: HADOOP-10048.003.patch, HADOOP-10048.004.patch, 
> HADOOP-10048.005.patch, HADOOP-10048.patch, HADOOP-10048.trunk.patch
>
>
> As noted in MAPREDUCE-5584 and HADOOP-7016, LocalDirAllocator can be a 
> bottleneck for multithreaded setups like the ShuffleHandler.  We should 
> consider moving to a lockless design or minimizing the critical sections to a 
> very small amount of time that does not involve I/O operations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13198) Add support for OWASP's dependency-check

2016-05-25 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15300555#comment-15300555
 ] 

Larry McCay commented on HADOOP-13198:
--

agreed - [~aw].

> Add support for OWASP's dependency-check
> 
>
> Key: HADOOP-13198
> URL: https://issues.apache.org/jira/browse/HADOOP-13198
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, security
>Affects Versions: 2.6.4
>Reporter: Mike Yoder
>Assignee: Mike Yoder
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-13198.001.patch
>
>
> OWASP's Dependency-Check is a utility that identifies project
> dependencies and checks if there are any known, publicly disclosed 
> vulnerabilities.
> See https://www.owasp.org/index.php/OWASP_Dependency_Check
> This is very useful to stay on top of known vulnerabilities in third party 
> jars. Since it's a Maven plugin, it's pretty easy to drop in.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12893) Verify LICENSE.txt and NOTICE.txt

2016-05-25 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15300531#comment-15300531
 ] 

Xiao Chen commented on HADOOP-12893:


Thanks Andrew, understood, will post a new patch today.
Yeah the 'bundled' column was added by Akira by looking up all *.jar files in 
the build output. I'll make my next revision include only those with a 'Y' in 
the column.

> Verify LICENSE.txt and NOTICE.txt
> -
>
> Key: HADOOP-12893
> URL: https://issues.apache.org/jira/browse/HADOOP-12893
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Xiao Chen
>Priority: Blocker
> Attachments: HADOOP-12893.002.patch, HADOOP-12893.003.patch, 
> HADOOP-12893.004.patch, HADOOP-12893.005.patch, HADOOP-12893.01.patch
>
>
> We have many bundled dependencies in both the source and the binary artifacts 
> that are not in LICENSE.txt and NOTICE.txt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (HADOOP-11820) aw jira testing, ignore

2016-05-25 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11820:
--
Comment: was deleted

(was: | (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 00s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 00s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} machack {color} | {color:blue} 0m 01s 
{color} | {color:blue} Applied YARN-5121 so that OS X works {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
10s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 27s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 19s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 19s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 48m 56s {color} 
| {color:red} hadoop-yarn in the patch failed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 64m 42s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12806058/YARN-5132-v1.patch |
| JIRA Issue | HADOOP-11820 |
| Optional Tests |  compile  javac  mvninstall  unit  |
| uname | Darwin Gavins-Mac-mini.local 13.2.0 Darwin Kernel Version 13.2.0: Thu 
Apr 17 23:03:13 PDT 2014; root:xnu-2422.100.13~1/RELEASE_X86_64 x86_64 |
| Build tool | maven |
| Personality | 
/Users/jenkins/jenkins-home/workspace/Precommit-HADOOP-OSX/patchprocess/apache-yetus-bde9590/precommit/personality/hadoop.sh
 |
| git revision | trunk / 28bd63e |
| Default Java | 1.8.0_74 |
| unit | 
https://builds.apache.org/job/Precommit-HADOOP-OSX/21/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn.txt
 |
|  Test Results | 
https://builds.apache.org/job/Precommit-HADOOP-OSX/21/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn U: 
hadoop-yarn-project/hadoop-yarn |
| Console output | 
https://builds.apache.org/job/Precommit-HADOOP-OSX/21/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.

)

> aw jira testing, ignore
> ---
>
> Key: HADOOP-11820
> URL: https://issues.apache.org/jira/browse/HADOOP-11820
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
> Attachments: YARN-5132-v1.patch, socket.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13160) Suppress checkstyle JavadocPackage check for test source

2016-05-25 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15300473#comment-15300473
 ] 

John Zhuge commented on HADOOP-13160:
-

Thanks [~ste...@apache.org]!

> Suppress checkstyle JavadocPackage check for test source
> 
>
> Key: HADOOP-13160
> URL: https://issues.apache.org/jira/browse/HADOOP-13160
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.2
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Fix For: 2.9.0
>
> Attachments: HADOOP-13160.001.patch, HADOOP-13160.002.patch, 
> HADOOP-13160.003.patch
>
>
> Suppress "Missing package-info.java" checkstyle error for test source files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-11820) aw jira testing, ignore

2016-05-25 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11820:
--
Attachment: socket.patch

> aw jira testing, ignore
> ---
>
> Key: HADOOP-11820
> URL: https://issues.apache.org/jira/browse/HADOOP-11820
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
> Attachments: YARN-5132-v1.patch, socket.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13198) Add support for OWASP's dependency-check

2016-05-25 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15300455#comment-15300455
 ] 

Allen Wittenauer commented on HADOOP-13198:
---

The inability to block out false positives, lack of CVE caching, and HTML 
output make this functionality less than ideal for any sort of automated job. :(

> Add support for OWASP's dependency-check
> 
>
> Key: HADOOP-13198
> URL: https://issues.apache.org/jira/browse/HADOOP-13198
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, security
>Affects Versions: 2.6.4
>Reporter: Mike Yoder
>Assignee: Mike Yoder
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-13198.001.patch
>
>
> OWASP's Dependency-Check is a utility that identifies project
> dependencies and checks if there are any known, publicly disclosed 
> vulnerabilities.
> See https://www.owasp.org/index.php/OWASP_Dependency_Check
> This is very useful to stay on top of known vulnerabilities in third party 
> jars. Since it's a Maven plugin, it's pretty easy to drop in.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12893) Verify LICENSE.txt and NOTICE.txt

2016-05-25 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15300451#comment-15300451
 ] 

Andrew Wang commented on HADOOP-12893:
--

AFAIK we have all the source dependencies covered by the existing L&N 
information. "Source dependencies" means things included in the source tarball, 
which is still a bundle. It's not exactly the same thing as the source code. 

I wasn't aware of the "bundled" column; I assume it refers to whether something 
is actually included in the binary tarball? I'd prefer a different name, since 
both the src and bin tarballs could be considered "bundles". Hopefully whoever 
added this column (Akira?) can comment; else we can validate by building the 
release artifacts and filtering based on the contents.

> Verify LICENSE.txt and NOTICE.txt
> -
>
> Key: HADOOP-12893
> URL: https://issues.apache.org/jira/browse/HADOOP-12893
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Xiao Chen
>Priority: Blocker
> Attachments: HADOOP-12893.002.patch, HADOOP-12893.003.patch, 
> HADOOP-12893.004.patch, HADOOP-12893.005.patch, HADOOP-12893.01.patch
>
>
> We have many bundled dependencies in both the source and the binary artifacts 
> that are not in LICENSE.txt and NOTICE.txt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13198) Add support for OWASP's dependency-check

2016-05-25 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13198:
-
Attachment: (was: hadoop-all-dependency-check-report.html)

> Add support for OWASP's dependency-check
> 
>
> Key: HADOOP-13198
> URL: https://issues.apache.org/jira/browse/HADOOP-13198
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, security
>Affects Versions: 2.6.4
>Reporter: Mike Yoder
>Assignee: Mike Yoder
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-13198.001.patch
>
>
> OWASP's Dependency-Check is a utility that identifies project
> dependencies and checks if there are any known, publicly disclosed 
> vulnerabilities.
> See https://www.owasp.org/index.php/OWASP_Dependency_Check
> This is very useful to stay on top of known vulnerabilities in third party 
> jars. Since it's a Maven plugin, it's pretty easy to drop in.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-10048) LocalDirAllocator should avoid holding locks while accessing the filesystem

2016-05-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15300418#comment-15300418
 ] 

Hadoop QA commented on HADOOP-10048:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
41s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 8s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
23s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
22s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
39s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 54s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 54s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
23s {color} | {color:green} hadoop-common-project/hadoop-common: The patch 
generated 0 new + 17 unchanged - 6 fixed = 17 total (was 23) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
36s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 28s 
{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
20s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 39m 12s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12806151/HADOOP-10048.005.patch
 |
| JIRA Issue | HADOOP-10048 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux c7929f7fc633 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 9a31e5d |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9580/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9580/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> LocalDirAllocator should avoid holding locks while accessing the filesystem
> ---
>
> Key: HADOOP-10048
> URL: https://issues.apache.org/jira/browse/HADOOP-10048
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.3.0
>Reporter: Jason Lowe
> 

[jira] [Commented] (HADOOP-13050) Upgrade to AWS SDK 10.10+ for Java 8u60+

2016-05-25 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15300393#comment-15300393
 ] 

Steve Loughran commented on HADOOP-13050:
-

Note that I'm not seeing this fail on the version of java 8u9x I've got 
installed locally. It appears that what matters is having Joda Time > 2.8.0; 
that is what makes the difference.

> Upgrade to AWS SDK 10.10+ for Java 8u60+
> 
>
> Key: HADOOP-13050
> URL: https://issues.apache.org/jira/browse/HADOOP-13050
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, fs/s3
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
> Attachments: HADOOP-13050-001.patch
>
>
> HADOOP-13044 highlights that AWS SDK 10.6, shipping in Hadoop 2.7+, doesn't 
> work on OpenJDK >= 8u60, because a change in the JDK broke the version of 
> Joda Time that AWS uses.
> Fix: update the SDK. Though that implies updating httpcomponents: 
> HADOOP-12767.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13203) S3a: Consider reducing the number of connection aborts by setting correct length in s3 request

2016-05-25 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13203:
---
Assignee: Rajesh Balamohan

> S3a: Consider reducing the number of connection aborts by setting correct 
> length in s3 request
> --
>
> Key: HADOOP-13203
> URL: https://issues.apache.org/jira/browse/HADOOP-13203
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: HADOOP-13203-branch-2-001.patch
>
>
> Currently file's "contentLength" is set as the "requestedStreamLen", when 
> invoking S3AInputStream::reopen().  As a part of lazySeek(), sometimes the 
> stream had to be closed and reopened. But lots of times the stream was closed 
> with abort() causing the internal http connection to be unusable. This incurs 
> lots of connection establishment cost in some jobs.  It would be good to set 
> the correct value for the stream length to avoid connection aborts. 
> I will post the patch once the aws tests pass on my machine.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-10048) LocalDirAllocator should avoid holding locks while accessing the filesystem

2016-05-25 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated HADOOP-10048:

Attachment: HADOOP-10048.005.patch

Yep, I agree that it would be better to be consistent given they passed an 
explicit conf, and I agree that returning the context to use from confChanged 
is a straightforward fix for that.  I updated the patch accordingly.

> LocalDirAllocator should avoid holding locks while accessing the filesystem
> ---
>
> Key: HADOOP-10048
> URL: https://issues.apache.org/jira/browse/HADOOP-10048
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.3.0
>Reporter: Jason Lowe
>Assignee: Jason Lowe
> Attachments: HADOOP-10048.003.patch, HADOOP-10048.004.patch, 
> HADOOP-10048.005.patch, HADOOP-10048.patch, HADOOP-10048.trunk.patch
>
>
> As noted in MAPREDUCE-5584 and HADOOP-7016, LocalDirAllocator can be a 
> bottleneck for multithreaded setups like the ShuffleHandler.  We should 
> consider moving to a lockless design or minimizing the critical sections to a 
> very small amount of time that does not involve I/O operations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12537) s3a: Add flag for session ID to allow Amazon STS temporary credentials

2016-05-25 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15300294#comment-15300294
 ] 

Steve Loughran commented on HADOOP-12537:
-

Frankfurt is AWSv4 signatures only: 
http://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-authenticating-requests.html
  

This is why it has a bit of a reputation:

* 
http://stackoverflow.com/questions/33828588/spark-cannot-read-files-stored-on-aws-s3-in-frankfurt-region-ireland-region-wor
* 
https://community.cloudera.com/t5/Storage-Random-Access-HDFS/cloudera-does-not-support-access-to-s3-within-eu-frankfurt-Aws/td-p/32369


> s3a: Add flag for session ID to allow Amazon STS temporary credentials
> --
>
> Key: HADOOP-12537
> URL: https://issues.apache.org/jira/browse/HADOOP-12537
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Affects Versions: 2.7.1
>Reporter: Sean Mackrory
>Priority: Minor
> Attachments: HADOOP-12537.001.patch, HADOOP-12537.002.patch, 
> HADOOP-12537.003.patch, HADOOP-12537.004.patch, HADOOP-12537.diff, 
> HADOOP-12537.diff
>
>
> Amazon STS allows you to issue temporary access key id / secret key pairs for 
> a user / role. However, using these credentials also requires specifying 
> a session ID. There is currently no such configuration property or the 
> required code to pass it through to the API (at least not that I can find) in 
> any of the S3 connectors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12291) Add support for nested groups in LdapGroupsMapping

2016-05-25 Thread Esther Kundin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Esther Kundin updated HADOOP-12291:
---
Status: Patch Available  (was: In Progress)

> Add support for nested groups in LdapGroupsMapping
> --
>
> Key: HADOOP-12291
> URL: https://issues.apache.org/jira/browse/HADOOP-12291
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Gautam Gopalakrishnan
>Assignee: Esther Kundin
>  Labels: features, patch
> Fix For: 2.8.0
>
> Attachments: HADOOP-12291.001.patch, HADOOP-12291.002.patch, 
> HADOOP-12291.003.patch, HADOOP-12291.004.patch, HADOOP-12291.005.patch, 
> HADOOP-12291.006.patch
>
>
> When using {{LdapGroupsMapping}} with Hadoop, nested groups are not 
> supported. So for example if user {{jdoe}} is part of group A which is a 
> member of group B, the group mapping currently returns only group A.
> Currently this facility is available with {{ShellBasedUnixGroupsMapping}} and 
> SSSD (or similar tools), but it would be good to have this feature as part of 
> {{LdapGroupsMapping}} directly.
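
A hedged sketch of how nested resolution can work in principle (not the 
attached patches): a breadth-first walk over group membership up to a depth 
limit, assuming java.util imports and a hypothetical lookupParentGroups() 
helper standing in for one LDAP query per level.

{noformat}
// Hedged sketch of nested group resolution; lookupParentGroups() is a
// hypothetical helper, one LDAP member query per nesting level.
Set<String> resolveNestedGroups(Set<String> directGroups, int maxLevels) {
  Set<String> all = new LinkedHashSet<>(directGroups);
  Set<String> frontier = new LinkedHashSet<>(directGroups);
  for (int level = 0; level < maxLevels && !frontier.isEmpty(); level++) {
    Set<String> next = new LinkedHashSet<>();
    for (String group : frontier) {
      for (String parent : lookupParentGroups(group)) {  // hypothetical
        if (all.add(parent)) {    // add() returns false for seen groups
          next.add(parent);
        }
      }
    }
    frontier = next;              // only newly found groups recurse further
  }
  return all;
}
{noformat}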



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-13092) RawLocalFileSystem is not case sensitive

2016-05-25 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor resolved HADOOP-13092.
---
Resolution: Not A Problem

> RawLocalFileSystem is not case sensitive
> 
>
> Key: HADOOP-13092
> URL: https://issues.apache.org/jira/browse/HADOOP-13092
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 0.23.0
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>
> RawLocalFileSystem is not case sensitive but I am not sure if it should be.
> This class relies on the underlying OS filesystem, so if it runs on a 
> non-case-sensitive OS the class will be insensitive as well.
> On Mac, run the following commands:
> # echo asdf > lower.txt
> # cat loWer.txt
> Do we need to make the RawLocalFileSystem class case sensitive? Please help.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13160) Suppress checkstyle JavadocPackage check for test source

2016-05-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15299802#comment-15299802
 ] 

Hudson commented on HADOOP-13160:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #9856 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9856/])
HADOOP-13160. Suppress checkstyle JavadocPackage check for test source. 
(stevel: rev dcbb7009b6f94e655724f6a0320723e1279ebc79)
* dev-support/checkstyle/suppressions.xml
* pom.xml


> Suppress checkstyle JavadocPackage check for test source
> 
>
> Key: HADOOP-13160
> URL: https://issues.apache.org/jira/browse/HADOOP-13160
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.2
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Fix For: 2.9.0
>
> Attachments: HADOOP-13160.001.patch, HADOOP-13160.002.patch, 
> HADOOP-13160.003.patch
>
>
> Suppress "Missing package-info.java" checkstyle error for test source files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13203) S3a: Consider reducing the number of connection aborts by setting correct length in s3 request

2016-05-25 Thread Rajesh Balamohan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajesh Balamohan updated HADOOP-13203:
--
Attachment: HADOOP-13203-branch-2-001.patch

Yes [~steve_l]. In workloads like Hive there are lots of random seeks, and 
lots of times the internal connection had to be aborted. It was a lot cheaper 
to reuse the connection with this patch. The amount of data to ask for in the 
request can be determined by "Math.max(targetPos + readahead, (targetPos + 
length))".

From the unit tests perspective for aws, the following issues were seen.

Test timeout failures:
- TestS3ADeleteManyFiles.testBulkRenameAndDelete
- org.apache.hadoop.fs.contract.s3a.TestS3AContractDistCp.largeFilesToRemote, 
largeFilesFromRemote
- org.apache.hadoop.fs.s3a.scale.TestS3ADeleteManyFiles.testBulkRenameAndDelete


Other failures:
- org.apache.hadoop.fs.contract.s3a.TestS3AContractRootDir (Root directory 
operation rejected) - this is already tracked in another jira.

- 
org.apache.hadoop.fs.s3a.scale.TestS3AInputStreamPerformance.testReadAheadDefault/testReadBigBlocksBigReadahead
 (earlier these expected a single open, but now there can be multiple, since 
requestedStreamLen would no longer be the file's length. At most we would be 
able to save a single readahead call; for the rest, the stream has to be 
opened multiple times. But this is OK compared with the connection 
re-establishments in real workloads, where a completely random set of ranges 
can be requested, e.g. Hive.). I have not updated the patch to fix this 
failure. Based on inputs, I can revise the patch.
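
To make the sizing rule concrete, a hedged sketch; the Math.min() clamp to 
contentLength is an assumption on my part, not necessarily what the attached 
patch does:

{noformat}
// Hedged sketch of the sizing rule quoted above; clamping to
// contentLength is an assumption, not necessarily what the patch does.
long requestedStreamLen = Math.min(contentLength,
    Math.max(targetPos + readahead, targetPos + length));
reopen(targetPos, requestedStreamLen);  // hypothetical ranged-GET helper
{noformat}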

> S3a: Consider reducing the number of connection aborts by setting correct 
> length in s3 request
> --
>
> Key: HADOOP-13203
> URL: https://issues.apache.org/jira/browse/HADOOP-13203
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Rajesh Balamohan
>Priority: Minor
> Attachments: HADOOP-13203-branch-2-001.patch
>
>
> Currently file's "contentLength" is set as the "requestedStreamLen", when 
> invoking S3AInputStream::reopen().  As a part of lazySeek(), sometimes the 
> stream had to be closed and reopened. But lots of times the stream was closed 
> with abort() causing the internal http connection to be unusable. This incurs 
> lots of connection establishment cost in some jobs.  It would be good to set 
> the correct value for the stream length to avoid connection aborts. 
> I will post the patch once the aws tests pass on my machine.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13160) Suppress checkstyle JavadocPackage check for test source

2016-05-25 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15299768#comment-15299768
 ] 

Steve Loughran commented on HADOOP-13160:
-

+1, committed to branch-2 and trunk

> Suppress checkstyle JavadocPackage check for test source
> 
>
> Key: HADOOP-13160
> URL: https://issues.apache.org/jira/browse/HADOOP-13160
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.2
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Fix For: 2.9.0
>
> Attachments: HADOOP-13160.001.patch, HADOOP-13160.002.patch, 
> HADOOP-13160.003.patch
>
>
> Suppress "Missing package-info.java" checkstyle error for test source files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13160) Suppress checkstyle JavadocPackage check for test source

2016-05-25 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13160:

   Resolution: Fixed
Fix Version/s: 2.9.0
   Status: Resolved  (was: Patch Available)

> Suppress checkstyle JavadocPackage check for test source
> 
>
> Key: HADOOP-13160
> URL: https://issues.apache.org/jira/browse/HADOOP-13160
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.2
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Fix For: 2.9.0
>
> Attachments: HADOOP-13160.001.patch, HADOOP-13160.002.patch, 
> HADOOP-13160.003.patch
>
>
> Suppress "Missing package-info.java" checkstyle error for test source files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13137) TraceAdmin should support Kerberized cluster

2016-05-25 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15299765#comment-15299765
 ] 

Steve Loughran commented on HADOOP-13137:
-

Getting close. In {{assertEquals()}} the expected value should come first, 
e.g. {{assertEquals(0, ret)}}.

> TraceAdmin should support Kerberized cluster
> 
>
> Key: HADOOP-13137
> URL: https://issues.apache.org/jira/browse/HADOOP-13137
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tracing
>Affects Versions: 2.6.0, 3.0.0-alpha1
> Environment: CDH5.5.1 cluster with Kerberos
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>  Labels: Kerberos
> Attachments: HADOOP-13137.001.patch, HADOOP-13137.002.patch, 
> HADOOP-13137.003.patch, HADOOP-13137.004.patch
>
>
> When I run {{hadoop trace}} command for a Kerberized NameNode, it failed with 
> the following error:
> [hdfs@weichiu-encryption-1 root]$ hadoop trace -list  -host 
> weichiu-encryption-1.vpc.cloudera.com:802216/05/12 00:02:13 WARN ipc.Client: 
> Exception encountered while connecting to the server : 
> java.lang.IllegalArgumentException: Failed to specify server's Kerberos 
> principal name
> 16/05/12 00:02:13 WARN security.UserGroupInformation: 
> PriviledgedActionException as:h...@vpc.cloudera.com (auth:KERBEROS) 
> cause:java.io.IOException: java.lang.IllegalArgumentException: Failed to 
> specify server's Kerberos principal name
> Exception in thread "main" java.io.IOException: Failed on local exception: 
> java.io.IOException: java.lang.IllegalArgumentException: Failed to specify 
> server's Kerberos principal name; Host Details : local host is: 
> "weichiu-encryption-1.vpc.cloudera.com/172.26.8.185"; destination host is: 
> "weichiu-encryption-1.vpc.cloudera.com":8022;
>   at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:772)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1470)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1403)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
>   at com.sun.proxy.$Proxy11.listSpanReceivers(Unknown Source)
>   at 
> org.apache.hadoop.tracing.TraceAdminProtocolTranslatorPB.listSpanReceivers(TraceAdminProtocolTranslatorPB.java:58)
>   at 
> org.apache.hadoop.tracing.TraceAdmin.listSpanReceivers(TraceAdmin.java:68)
>   at org.apache.hadoop.tracing.TraceAdmin.run(TraceAdmin.java:177)
>   at org.apache.hadoop.tracing.TraceAdmin.main(TraceAdmin.java:195)
> Caused by: java.io.IOException: java.lang.IllegalArgumentException: Failed to 
> specify server's Kerberos principal name
>   at org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:682)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
>   at 
> org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:645)
>   at 
> org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:733)
>   at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:370)
>   at org.apache.hadoop.ipc.Client.getConnection(Client.java:1519)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1442)
>   ... 7 more
> Caused by: java.lang.IllegalArgumentException: Failed to specify server's 
> Kerberos principal name
>   at 
> org.apache.hadoop.security.SaslRpcClient.getServerPrincipal(SaslRpcClient.java:322)
>   at 
> org.apache.hadoop.security.SaslRpcClient.createSaslClient(SaslRpcClient.java:231)
>   at 
> org.apache.hadoop.security.SaslRpcClient.selectSaslClient(SaslRpcClient.java:159)
>   at 
> org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:396)
>   at 
> org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:555)
>   at org.apache.hadoop.ipc.Client$Connection.access$1800(Client.java:370)
>   at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:725)
>   at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:721)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
>   at 
> org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:720)
>   ... 10 more
> It is failing because {{TraceAdmin}} does not set up the property 
> {{CommonConfigurationKeys.HADOOP_SECURITY_SERVICE_USER_NAME_KEY}}
> Fixing it may require some restructuring, as the NameNode principal 
> {{dfs.namenode.kerberos.principal}} is a HDFS property, but TraceAdmin is in 
> hadoop-common. 
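
For illustration, a hedged sketch of one possible direction (not the committed 
fix): populate the hadoop-common key from a caller-supplied value, so 
TraceAdmin never needs the HDFS-only property at compile time. The 
serverPrincipal variable is hypothetical, e.g. taken from a new command-line 
flag.

{noformat}
// Hedged sketch, not the committed fix: set the hadoop-common key from a
// caller-supplied principal before creating the RPC proxy, avoiding any
// compile-time reference to dfs.namenode.kerberos.principal.
Configuration conf = new Configuration();
conf.set(CommonConfigurationKeys.HADOOP_SECURITY_SERVICE_USER_NAME_KEY,
    serverPrincipal);  // hypothetical variable, e.g. from a -principal flag
{noformat}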

[jira] [Commented] (HADOOP-13203) S3a: Consider reducing the number of connection aborts by setting correct length in s3 request

2016-05-25 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15299721#comment-15299721
 ] 

Steve Loughran commented on HADOOP-13203:
-

So you are proposing some shorter block size for reads, on the basis that it 
allows follow-on GETs to use the same SSL connection?

How do you know how much to ask for? Or: how do you handle the end of the 
connection and so start reading the next block? Presumably the cost of that 
will be lower (reused connection and all), but the stream reading will need to 
recognise premature EOFs and react.
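
For what it's worth, a hedged sketch of the kind of EOF handling that implies; 
pos, contentLength, wrappedStream and reopen() here are stand-ins, not 
necessarily the real S3AInputStream members:

{noformat}
// Hedged sketch of premature-EOF handling; pos, contentLength,
// wrappedStream and reopen() are stand-ins for whatever the stream keeps.
int n = wrappedStream.read(buf, off, len);
if (n < 0 && pos < contentLength) {  // range exhausted, file not finished
  reopen(pos, len);                  // new ranged GET from current offset
  n = wrappedStream.read(buf, off, len);
}
return n;
{noformat}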

> S3a: Consider reducing the number of connection aborts by setting correct 
> length in s3 request
> --
>
> Key: HADOOP-13203
> URL: https://issues.apache.org/jira/browse/HADOOP-13203
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Rajesh Balamohan
>Priority: Minor
>
> Currently file's "contentLength" is set as the "requestedStreamLen", when 
> invoking S3AInputStream::reopen().  As a part of lazySeek(), sometimes the 
> stream had to be closed and reopened. But lots of times the stream was closed 
> with abort() causing the internal http connection to be unusable. This incurs 
> lots of connection establishment cost in some jobs.  It would be good to set 
> the correct value for the stream length to avoid connection aborts. 
> I will post the patch once the aws tests pass on my machine.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12579) Deprecate and remove WriteableRPCEngine

2016-05-25 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15299682#comment-15299682
 ] 

Kai Zheng commented on HADOOP-12579:


[~wheat9] has given this a +1. The updates since then are a rebase and minor 
checkstyle fixes removing unused imports.

So if there are no concerns, I will commit it to trunk tomorrow.

> Deprecate and remove WriteableRPCEngine
> ---
>
> Key: HADOOP-12579
> URL: https://issues.apache.org/jira/browse/HADOOP-12579
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Assignee: Kai Zheng
> Attachments: HADOOP-12579-v1.patch, HADOOP-12579-v10.patch, 
> HADOOP-12579-v11.patch, HADOOP-12579-v3.patch, HADOOP-12579-v4.patch, 
> HADOOP-12579-v5.patch, HADOOP-12579-v6.patch, HADOOP-12579-v7.patch, 
> HADOOP-12579-v8.patch, HADOOP-12579-v9.patch
>
>
> The {{WriteableRPCEngine}} depends on Java's serialization mechanisms for RPC 
> requests. Without proper checks, it has been shown that it can lead to security 
> vulnerabilities such as remote code execution (e.g., COLLECTIONS-580, 
> HADOOP-12577).
> The current implementation has migrated from {{WriteableRPCEngine}} to 
> {{ProtobufRPCEngine}} now. This jira proposes to deprecate 
> {{WriteableRPCEngine}} in branch-2 and to remove it in trunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12579) Deprecate and remove WriteableRPCEngine

2016-05-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15299679#comment-15299679
 ] 

Hadoop QA commented on HADOOP-12579:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 12 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
4s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 11s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
30s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 3s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
35s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
18s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 8s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
39s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 4s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 6m 4s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 4s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 29s 
{color} | {color:red} root: The patch generated 3 new + 845 unchanged - 71 
fixed = 848 total (was 916) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 26s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
36s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 11s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 35s 
{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 55m 46s 
{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 6m 5s 
{color} | {color:green} hadoop-mapreduce-client-hs in the patch passed. {color} 
|
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 111m 23s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12806077/HADOOP-12579-v11.patch
 |
| JIRA Issue | HADOOP-12579 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux 81b933118329 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 28bd63e |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9579/artifact/patchprocess/diff-checkstyle-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9579/testReport/ |
| modules | C: 

[jira] [Updated] (HADOOP-13203) S3a: Consider reducing the number of connection aborts by setting correct length in s3 request

2016-05-25 Thread Rajesh Balamohan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajesh Balamohan updated HADOOP-13203:
--
Issue Type: Sub-task  (was: Bug)
Parent: HADOOP-11694

> S3a: Consider reducing the number of connection aborts by setting correct 
> length in s3 request
> --
>
> Key: HADOOP-13203
> URL: https://issues.apache.org/jira/browse/HADOOP-13203
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Rajesh Balamohan
>Priority: Minor
>
> Currently file's "contentLength" is set as the "requestedStreamLen", when 
> invoking S3AInputStream::reopen().  As a part of lazySeek(), sometimes the 
> stream had to be closed and reopened. But lots of times the stream was closed 
> with abort() causing the internal http connection to be unusable. This incurs 
> lots of connection establishment cost in some jobs.  It would be good to set 
> the correct value for the stream length to avoid connection aborts. 
> I will post the patch once the aws tests pass on my machine.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13203) S3a: Consider reducing the number of connection aborts by setting correct length in s3 request

2016-05-25 Thread Rajesh Balamohan (JIRA)
Rajesh Balamohan created HADOOP-13203:
-

 Summary: S3a: Consider reducing the number of connection aborts by 
setting correct length in s3 request
 Key: HADOOP-13203
 URL: https://issues.apache.org/jira/browse/HADOOP-13203
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Reporter: Rajesh Balamohan
Priority: Minor


Currently file's "contentLength" is set as the "requestedStreamLen", when 
invoking S3AInputStream::reopen().  As a part of lazySeek(), sometimes the 
stream had to be closed and reopened. But lots of times the stream was closed 
with abort() causing the internal http connection to be unusable. This incurs 
lots of connection establishment cost in some jobs.  It would be good to set 
the correct value for the stream length to avoid connection aborts. 

I will post the patch once the aws tests pass on my machine.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13137) TraceAdmin should support Kerberized cluster

2016-05-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15299591#comment-15299591
 ] 

Hadoop QA commented on HADOOP-13137:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 35s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 
50s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
27s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
41s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 37s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
37s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
10s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 41s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
29s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 29s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
40s {color} | {color:green} root: The patch generated 0 new + 13 unchanged - 2 
fixed = 13 total (was 15) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
35s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
34s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 54s 
{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 69m 47s {color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 140m 3s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.metrics2.sink.TestRollingFileSystemSinkWithHdfs |
|   | hadoop.hdfs.server.datanode.TestDataNodeUUID |
|   | hadoop.hdfs.TestAsyncDFSRename |
|   | hadoop.hdfs.shortcircuit.TestShortCircuitCache |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12806062/HADOOP-13137.004.patch
 |
| JIRA Issue | HADOOP-13137 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux d327efa2f553 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 28bd63e |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9577/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit test logs |  

[jira] [Commented] (HADOOP-13160) Suppress checkstyle JavadocPackage check for test source

2016-05-25 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15299556#comment-15299556
 ] 

John Zhuge commented on HADOOP-13160:
-

[~ste...@apache.org] Could you please commit?

> Suppress checkstyle JavadocPackage check for test source
> 
>
> Key: HADOOP-13160
> URL: https://issues.apache.org/jira/browse/HADOOP-13160
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.2
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HADOOP-13160.001.patch, HADOOP-13160.002.patch, 
> HADOOP-13160.003.patch
>
>
> Suppress "Missing package-info.java" checkstyle error for test source files.
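For reference, a suppressions entry of the kind this change involves might 
look like the following; this is a sketch using the standard checkstyle 
suppressions format, not the actual patch content:

{code:xml}
<?xml version="1.0"?>
<!DOCTYPE suppressions PUBLIC
    "-//Puppy Crawl//DTD Suppressions 1.1//EN"
    "http://www.puppycrawl.com/dtds/suppressions_1_1.dtd">
<suppressions>
  <!-- Do not require package-info.java under test source trees. -->
  <suppress checks="JavadocPackage" files="[\\/]src[\\/]test[\\/]"/>
</suppressions>
{code}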






[jira] [Updated] (HADOOP-12579) Deprecate and remove WriteableRPCEngine

2016-05-25 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-12579:
---
Attachment: HADOOP-12579-v11.patch

Fixed some checkstyle issues by removing unused imports.

> Deprecate and remove WriteableRPCEngine
> ---
>
> Key: HADOOP-12579
> URL: https://issues.apache.org/jira/browse/HADOOP-12579
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Assignee: Kai Zheng
> Attachments: HADOOP-12579-v1.patch, HADOOP-12579-v10.patch, 
> HADOOP-12579-v11.patch, HADOOP-12579-v3.patch, HADOOP-12579-v4.patch, 
> HADOOP-12579-v5.patch, HADOOP-12579-v6.patch, HADOOP-12579-v7.patch, 
> HADOOP-12579-v8.patch, HADOOP-12579-v9.patch
>
>
> The {{WriteableRPCEngine}} depends on Java's serialization mechanism for RPC 
> requests. Without proper checks, it has been shown that this can lead to 
> security vulnerabilities such as remote code execution (e.g., 
> COLLECTIONS-580, HADOOP-12577).
> The current implementation has already migrated from {{WriteableRPCEngine}} 
> to {{ProtobufRPCEngine}}. This jira proposes to deprecate 
> {{WriteableRPCEngine}} in branch-2 and to remove it in trunk.
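For context, a minimal sketch, using the org.apache.hadoop.ipc API, of how a 
protocol gets pinned to the protobuf-based engine; MyProtocolPB below is a 
hypothetical protocol interface standing in for the real PB protocols. Once 
every protocol is bound this way, nothing selects the Writable-based engine 
and it can be deprecated and removed:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.ipc.ProtobufRpcEngine;
import org.apache.hadoop.ipc.RPC;

class RpcEngineSelectionSketch {
  /**
   * Bind a protocol to ProtobufRpcEngine so that all requests for it are
   * serialized with protobuf rather than Java/Writable serialization. This
   * is done per protocol class before creating a proxy or an RPC server.
   */
  static void bindProtobufEngine(Configuration conf, Class<?> myProtocolPb) {
    RPC.setProtocolEngine(conf, myProtocolPb, ProtobufRpcEngine.class);
  }
}
{code}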






[jira] [Assigned] (HADOOP-12579) Deprecate and remove WriteableRPCEngine

2016-05-25 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng reassigned HADOOP-12579:
--

Assignee: Kai Zheng

> Deprecate and remove WriteableRPCEngine
> ---
>
> Key: HADOOP-12579
> URL: https://issues.apache.org/jira/browse/HADOOP-12579
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Assignee: Kai Zheng
> Attachments: HADOOP-12579-v1.patch, HADOOP-12579-v10.patch, 
> HADOOP-12579-v3.patch, HADOOP-12579-v4.patch, HADOOP-12579-v5.patch, 
> HADOOP-12579-v6.patch, HADOOP-12579-v7.patch, HADOOP-12579-v8.patch, 
> HADOOP-12579-v9.patch
>
>
> The {{WriteableRPCEngine}} depends on Java's serialization mechanism for RPC 
> requests. Without proper checks, it has been shown that this can lead to 
> security vulnerabilities such as remote code execution (e.g., 
> COLLECTIONS-580, HADOOP-12577).
> The current implementation has already migrated from {{WriteableRPCEngine}} 
> to {{ProtobufRPCEngine}}. This jira proposes to deprecate 
> {{WriteableRPCEngine}} in branch-2 and to remove it in trunk.






[jira] [Commented] (HADOOP-12893) Verify LICENSE.txt and NOTICE.txt

2016-05-25 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15299522#comment-15299522
 ] 

Xiao Chen commented on HADOOP-12893:


Thanks [~andrew.wang] for reviewing.
bq. I thought we had all the source distribution items covered already, so 
these new additions would only apply to the binary distribution.
Maybe I understood the LICENSE requirements wrong. I read [this comment from 
you 
above|https://issues.apache.org/jira/browse/HADOOP-12893?focusedCommentId=15283260=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15283260]
 and thought we wanted to list the dependencies and state whether each is in 
the bundle, in the source code, or both.
Should we just list anything that has 'bundled?'==Y and skip the others? That 
would be simpler; I just want to make sure before I make the change.

bq. There's also a few copies of the GPL and LGPL still in LICENSES. This 
content was supposed to be pulled from the Licenses tab on the spreadsheet, 
apparently not?
Currently the script only groups entries from the dependencies tab and lists 
the license names. After that I followed the links, copied each license text, 
and wrapped it at 80 characters. I may have forgotten to remove the GPL part 
from CDDL+GPL... :(
I guess one more piece of automation would be to paste the license text into 
the {{License text}} column and generate the file from there.

So LGPL will be gone once we get rid of jdiff, and GPL should be gone since 
those entries are all CDDL+GPL w/ CPE; we should be fine using CDDL.
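In case it helps, a tiny sketch of the 'bundled?'==Y filtering step over a 
TSV export of the spreadsheet; the column positions used here (0 = dependency 
name, 3 = bundled?, 4 = license) are hypothetical and would need to match the 
real sheet layout:

{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

class BundledDependencyFilter {
  // Print "name -> license" for every row whose bundled? column is Y,
  // skipping the header row; rows with bundled? != Y are left out entirely.
  public static void main(String[] args) throws IOException {
    List<String> rows = Files.readAllLines(Paths.get(args[0]));
    for (String row : rows.subList(1, rows.size())) {
      String[] cols = row.split("\t", -1);
      if (cols.length > 4 && "Y".equalsIgnoreCase(cols[3].trim())) {
        System.out.println(cols[0] + " -> " + cols[4]);
      }
    }
  }
}
{code}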

> Verify LICENSE.txt and NOTICE.txt
> -
>
> Key: HADOOP-12893
> URL: https://issues.apache.org/jira/browse/HADOOP-12893
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Xiao Chen
>Priority: Blocker
> Attachments: HADOOP-12893.002.patch, HADOOP-12893.003.patch, 
> HADOOP-12893.004.patch, HADOOP-12893.005.patch, HADOOP-12893.01.patch
>
>
> We have many bundled dependencies in both the source and the binary artifacts 
> that are not in LICENSE.txt and NOTICE.txt.






[jira] [Commented] (HADOOP-11820) aw jira testing, ignore

2016-05-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15299521#comment-15299521
 ] 

Hadoop QA commented on HADOOP-11820:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 00s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 00s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} machack {color} | {color:blue} 0m 01s 
{color} | {color:blue} Applied YARN-5121 so that OS X works {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
10s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 27s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 19s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 19s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 48m 56s {color} 
| {color:red} hadoop-yarn in the patch failed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 64m 42s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12806058/YARN-5132-v1.patch |
| JIRA Issue | HADOOP-11820 |
| Optional Tests |  compile  javac  mvninstall  unit  |
| uname | Darwin Gavins-Mac-mini.local 13.2.0 Darwin Kernel Version 13.2.0: Thu 
Apr 17 23:03:13 PDT 2014; root:xnu-2422.100.13~1/RELEASE_X86_64 x86_64 |
| Build tool | maven |
| Personality | 
/Users/jenkins/jenkins-home/workspace/Precommit-HADOOP-OSX/patchprocess/apache-yetus-bde9590/precommit/personality/hadoop.sh
 |
| git revision | trunk / 28bd63e |
| Default Java | 1.8.0_74 |
| unit | 
https://builds.apache.org/job/Precommit-HADOOP-OSX/21/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn.txt
 |
|  Test Results | 
https://builds.apache.org/job/Precommit-HADOOP-OSX/21/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn U: 
hadoop-yarn-project/hadoop-yarn |
| Console output | 
https://builds.apache.org/job/Precommit-HADOOP-OSX/21/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> aw jira testing, ignore
> ---
>
> Key: HADOOP-11820
> URL: https://issues.apache.org/jira/browse/HADOOP-11820
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
> Attachments: YARN-5132-v1.patch
>
>



