[jira] [Comment Edited] (HADOOP-12893) Verify LICENSE.txt and NOTICE.txt

2016-06-06 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317914#comment-15317914
 ] 

Xiao Chen edited comment on HADOOP-12893 at 6/7/16 5:58 AM:


Patch 10 addresses what Allen suggested:
- Use the existing hadoop-build-tools to avoid adding a new project
- Add the snippet to make maven validate happy
- Side effect 1: this also fixes the same problem in hadoop-build-tools, since the two have now become the same problem
- Side effect 2: I'd bet my 2 cents that hadoop-project-dist and hadoop-project will pass now.

Sorry for not being able to work on this on Friday; please review and feel free to comment. Thanks again.


was (Author: xiaochen):
Patch 10 addresses what Allen suggested:
- Use the existing hadoop-build-tools to avoid adding a new project
- Add the snippet to make maven validate happy
- Side effect 1: fixes the same problem in build-tools, since now they really 
become the same problem
- Side effect 2: I bet my 2 cents on hadoop-project-dist and hadoop-project 
would pass now.

> Verify LICENSE.txt and NOTICE.txt
> -
>
> Key: HADOOP-12893
> URL: https://issues.apache.org/jira/browse/HADOOP-12893
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Xiao Chen
>Priority: Blocker
> Attachments: HADOOP-12893.002.patch, HADOOP-12893.003.patch, 
> HADOOP-12893.004.patch, HADOOP-12893.005.patch, HADOOP-12893.006.patch, 
> HADOOP-12893.007.patch, HADOOP-12893.008.patch, HADOOP-12893.009.patch, 
> HADOOP-12893.01.patch, HADOOP-12893.10.patch
>
>
> We have many bundled dependencies in both the source and the binary artifacts 
> that are not in LICENSE.txt and NOTICE.txt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12893) Verify LICENSE.txt and NOTICE.txt

2016-06-06 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-12893:
---
Attachment: HADOOP-12893.10.patch

Patch 10 addresses what Allen suggested:
- Use the existing hadoop-build-tools to avoid adding a new project
- Add the snippet to make maven validate happy
- Side effect 1: this also fixes the same problem in hadoop-build-tools, since the two have now become the same problem
- Side effect 2: I'd bet my 2 cents that hadoop-project-dist and hadoop-project will pass now.

> Verify LICENSE.txt and NOTICE.txt
> -
>
> Key: HADOOP-12893
> URL: https://issues.apache.org/jira/browse/HADOOP-12893
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Xiao Chen
>Priority: Blocker
> Attachments: HADOOP-12893.002.patch, HADOOP-12893.003.patch, 
> HADOOP-12893.004.patch, HADOOP-12893.005.patch, HADOOP-12893.006.patch, 
> HADOOP-12893.007.patch, HADOOP-12893.008.patch, HADOOP-12893.009.patch, 
> HADOOP-12893.01.patch, HADOOP-12893.10.patch
>
>
> We have many bundled dependencies in both the source and the binary artifacts 
> that are not in LICENSE.txt and NOTICE.txt.






[jira] [Commented] (HADOOP-12910) Add new FileSystem API to support asynchronous method calls

2016-06-06 Thread Jitendra Nath Pandey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317891#comment-15317891
 ] 

Jitendra Nath Pandey commented on HADOOP-12910:
---

+1 to use CompletableFuture in trunk. I see that a CompletableFutureWithCallback needs to be defined only so that we can have a FutureWithCallback in branch-2. We will need to evaluate how complicated that gets, but if it's too hard, I would suggest not backporting the user-facing API to branch-2 at all.
  Since CompletableFuture implements Future, there is still no incompatibility with the current code in branch-2. It is true that branch-2 will not have all the cool features of CompletableFuture, but some early experimenters may be willing to live with that limitation, and they will not run into incompatibility once they upgrade to trunk.
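(Editorial sketch.) Because CompletableFuture implements Future, a branch-2-style call site that assumes only Future compiles unchanged against a trunk-style method returning CompletableFuture. A minimal illustration, with a hypothetical rename() stand-in:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;

public class FutureCompat {
    // Hypothetical trunk-style async method returning CompletableFuture.
    public static CompletableFuture<Boolean> rename() {
        return CompletableFuture.completedFuture(true);
    }

    public static void main(String[] args)
            throws InterruptedException, ExecutionException {
        // branch-2-style caller: it assumes only java.util.concurrent.Future,
        // so the same code works whether the return type is Future<Boolean>
        // or the richer CompletableFuture<Boolean>.
        Future<Boolean> f = rename();
        System.out.println(f.get());  // prints: true
    }
}
```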

> Add new FileSystem API to support asynchronous method calls
> ---
>
> Key: HADOOP-12910
> URL: https://issues.apache.org/jira/browse/HADOOP-12910
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
> Attachments: HADOOP-12910-HDFS-9924.000.patch, 
> HADOOP-12910-HDFS-9924.001.patch, HADOOP-12910-HDFS-9924.002.patch
>
>
> Add a new API, namely FutureFileSystem (or AsynchronousFileSystem, if it is a 
> better name).  All the APIs in FutureFileSystem are the same as FileSystem 
> except that the return type is wrapped by Future, e.g.
> {code}
>   //FileSystem
>   public boolean rename(Path src, Path dst) throws IOException;
>   //FutureFileSystem
>   public Future<Boolean> rename(Path src, Path dst) throws IOException;
> {code}
> Note that FutureFileSystem does not extend FileSystem.






[jira] [Updated] (HADOOP-13184) Add "Apache" to Hadoop project logo

2016-06-06 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HADOOP-13184:
---
Assignee: Abhishek

> Add "Apache" to Hadoop project logo
> ---
>
> Key: HADOOP-13184
> URL: https://issues.apache.org/jira/browse/HADOOP-13184
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Chris Douglas
>Assignee: Abhishek
>
> Many ASF projects include "Apache" in their logo. We should add it to Hadoop.






[jira] [Commented] (HADOOP-12910) Add new FileSystem API to support asynchronous method calls

2016-06-06 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317848#comment-15317848
 ] 

stack commented on HADOOP-12910:


bq. For branch-2, There are two possible ways to use Deferred (or 
ListenableFuture) :

IMO, it is user abuse if there is one way to consume the HDFS API asynchronously in hadoop2 and another in hadoop3. Users and downstreamers have better things to do with their time than build a lattice of probing, brittle reflection and alternate code paths to match a wandering HDFS API.

bq. Any comments?

IMO, there is nothing undesirable about your choices #1 or #2 above. If you don't want a dependency, do #2. Regarding #2, it doesn't matter that the Hadoop Deferred would no longer match 'other projects'; it doesn't have to. What we are after is a proven API with provenance, and clean documentation on how to use it and what its contract is.

As to your FutureWithCallback, where does it come from? Have you built any event-driven apps with it? At first blush, it is lacking in vocabulary, at least when put against Deferred or CompletableFuture. Thanks.

> Add new FileSystem API to support asynchronous method calls
> ---
>
> Key: HADOOP-12910
> URL: https://issues.apache.org/jira/browse/HADOOP-12910
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
> Attachments: HADOOP-12910-HDFS-9924.000.patch, 
> HADOOP-12910-HDFS-9924.001.patch, HADOOP-12910-HDFS-9924.002.patch
>
>
> Add a new API, namely FutureFileSystem (or AsynchronousFileSystem, if it is a 
> better name).  All the APIs in FutureFileSystem are the same as FileSystem 
> except that the return type is wrapped by Future, e.g.
> {code}
>   //FileSystem
>   public boolean rename(Path src, Path dst) throws IOException;
>   //FutureFileSystem
>   public Future<Boolean> rename(Path src, Path dst) throws IOException;
> {code}
> Note that FutureFileSystem does not extend FileSystem.






[jira] [Commented] (HADOOP-12910) Add new FileSystem API to support asynchronous method calls

2016-06-06 Thread Benoit Sigoure (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317847#comment-15317847
 ] 

Benoit Sigoure commented on HADOOP-12910:
-

One of the key things that {{Deferred}} and [Finagle's 
{{Future}}|https://twitter.github.io/finagle/guide/Futures.html#sequential-composition]
 both enable is composition.  This allows you to chain multiple asynchronous 
operations in a type-safe fashion.

For example, when making an RPC call to HBase, a client might first need to do 
a lookup in ZooKeeper, which asynchronously returns a byte array, then parse a 
RegionInfo out of the byte array and make an RPC call to an HBase server, which 
returns some response of type {{FooResponse}}.  In this semi-hypothetical 
pipeline, {{Deferred}} and {{Future}} allow you to build a string of 
asynchronous operations, in which the (successful) output of one operation is 
the input of the next operation.  I don't see how the proposed 
{{FutureWithCallback}} interface can cater to such use cases, which will lead 
to asynchronous APIs that are difficult to compose.

In the trunk proposal you at least get that via {{CompletionStage}}'s {{thenApply}}; still, I hope that not too many APIs will be created with {{FutureWithCallback}}, as they'll probably stick around for a while and be hard to get rid of.
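(Editorial sketch.) The sequential composition described above can be written with Java 8's CompletableFuture; the ZooKeeper lookup and HBase RPC below are simulated, and all names are hypothetical stand-ins:

```java
import java.util.concurrent.CompletableFuture;

public class ComposeDemo {
    // Stand-in for an async ZooKeeper lookup that returns a byte array.
    public static CompletableFuture<byte[]> zkLookup() {
        return CompletableFuture.supplyAsync(() -> "region-1".getBytes());
    }

    public static void main(String[] args) {
        String response = zkLookup()
            // Parse the bytes (a String here plays the role of RegionInfo).
            .thenApply(String::new)
            // Issue the follow-up async RPC using the parsed value; the
            // successful output of one stage is the input of the next.
            .thenCompose(region -> CompletableFuture
                .supplyAsync(() -> "FooResponse for " + region))
            .join();
        System.out.println(response);  // prints: FooResponse for region-1
    }
}
```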

> Add new FileSystem API to support asynchronous method calls
> ---
>
> Key: HADOOP-12910
> URL: https://issues.apache.org/jira/browse/HADOOP-12910
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
> Attachments: HADOOP-12910-HDFS-9924.000.patch, 
> HADOOP-12910-HDFS-9924.001.patch, HADOOP-12910-HDFS-9924.002.patch
>
>
> Add a new API, namely FutureFileSystem (or AsynchronousFileSystem, if it is a 
> better name).  All the APIs in FutureFileSystem are the same as FileSystem 
> except that the return type is wrapped by Future, e.g.
> {code}
>   //FileSystem
>   public boolean rename(Path src, Path dst) throws IOException;
>   //FutureFileSystem
>   public Future<Boolean> rename(Path src, Path dst) throws IOException;
> {code}
> Note that FutureFileSystem does not extend FileSystem.






[jira] [Updated] (HADOOP-13236) truncate will fail when we use viewfilesystem

2016-06-06 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HADOOP-13236:
--
Summary: truncate will fail when we use viewfilesystem  (was: truncate will 
fail when we use viewFS.)

> truncate will fail when we use viewfilesystem
> -
>
> Key: HADOOP-13236
> URL: https://issues.apache.org/jira/browse/HADOOP-13236
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>
> truncate will fail when we use viewFS.
> {code}
>   @Override
>   public boolean truncate(final Path f, final long newLength)
>       throws IOException {
>     InodeTree.ResolveResult<FileSystem> res =
>         fsState.resolve(getUriPath(f), true);
>     return res.targetFileSystem.truncate(f, newLength);
>   }
> {code}
> *The resolved remaining path should be passed down:*
> {{return res.targetFileSystem.truncate(f, newLength);}} *should be*
> {{return res.targetFileSystem.truncate(res.remainingPath, newLength);}}
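(Editorial sketch.) The bug pattern — resolving the mount point but then delegating with the original view path instead of the remaining path — can be illustrated outside Hadoop; everything below is a hypothetical stand-in, not ViewFileSystem code:

```java
public class MountResolveDemo {
    // Hypothetical stand-in for InodeTree.resolve(): maps a view path such
    // as /view/data/file to a target file system root plus the remaining
    // path under the mount point.
    public static String[] resolve(String viewPath) {
        String mount = "/view/data";
        return new String[] { "hdfs://ns1", viewPath.substring(mount.length()) };
    }

    public static String truncateTarget(String viewPath) {
        String[] res = resolve(viewPath);
        // The fix in HADOOP-13236 in miniature: delegate with the
        // *remaining* path (res[1]), not the original view path.
        return res[0] + res[1];
    }

    public static void main(String[] args) {
        System.out.println(truncateTarget("/view/data/file"));  // prints: hdfs://ns1/file
    }
}
```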






[jira] [Commented] (HADOOP-13227) AsyncCallHandler should use a event driven architecture to handle async calls

2016-06-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317781#comment-15317781
 ] 

Hadoop QA commented on HADOOP-13227:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
31s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
36s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
58s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
45s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
29s{color} | {color:red} root: The patch generated 1 new + 213 unchanged - 1 
fixed = 214 total (was 214) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
14s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 55s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 77m 28s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}128m 23s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.ssl.TestReloadingX509TrustManager |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.server.balancer.TestBalancer |
|   | hadoop.hdfs.server.namenode.TestNamenodeCapacityReport |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12808526/c13227_20160607.patch 
|
| JIRA Issue | HADOOP-13227 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux eae3da16e471 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 6de9213 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9673/artifact/patchprocess/diff-checkstyle-root.txt
 |
| unit | 

[jira] [Comment Edited] (HADOOP-13184) Add "Apache" to Hadoop project logo

2016-06-06 Thread Abhishek (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317736#comment-15317736
 ] 

Abhishek edited comment on HADOOP-13184 at 6/7/16 3:13 AM:
---

Options for the new logo with APACHE included:

*Option 1*

!https://ske0hq-bn1305.files.1drv.com/y3mfuiXQm9OGG3-dQGy2hTzYQPu1Xdv0C7wAA2rAA0uoEHS08BQxlReQoL_sLPyy_1JNi04LFfrpEGzyhLUpNJXXfY8mkSDezepfzY2rfo_aw3sEYesjbb_DTdJP3xCmSZ4X66UrJM85HPIa4AoTxjl_Nbjzx4V6HQFVv64rmqPSYc?width=1000=267=none!

\\

*Option 2*

!https://skevhq-bn1305.files.1drv.com/y3m3jGYvgeu2uIHc5C98M3dGA5-pUo_ZgNDCANBWJYEQqZeYdyGFwV1UWOIFrpD56FnUNAJkUJOywicSIG_nBdG6v1RvI3BGGkEBlnLcbH5Kz7QoU5j7gI6vghNkDD3HSTSaNDK2PVMqivI005IRdrqTJfduaImaVy4ZyTn_CaJMNY?width=1000=291=none!

(empty line)

*Option 3*

!https://skeuhq-bn1305.files.1drv.com/y3mwENxSi1zzJ6g0hX9wyZu-7wFj77cz6NWYKuhvFyn67Uo7boeqbqw4YPCP8DW05h8lQAEt4XDyC9c_yNspOkwuPnMqFeK_chXzjZBGVPAD7t1UP5iw7TtGmmMn70H1W7hjR-kByyJHvuA3Y4Gjbm6ZzQv8peMLxvggE6dUSVMIZc?width=1000=267=none!

(empty line)

*Option 4*

!https://ske1hq-bn1305.files.1drv.com/y3msqsFXMBqxWYj8kk_-_ShZb1spGcfIzuYD5ShOT4oQB-EMVE2_18GrQPS8rc8K4Gh4Zo6dP76dGkSfvEj7vNoAOU3FAe3HNcpJxl5MZPA8hOx-1aLus_UtzE62bviKkmp9-NcHY_eVWf4EmKtJ5aMcy5jiDk7tBUXRl27YUSvy8Q?width=1000=298=none!

(empty line)

*Option 5*

!https://r6eshq-bn1305.files.1drv.com/y3myRrrctLCNgHqbkjN85nDVzZUwKsGnN3mFMVlJf3uKnMFtMEzrkR4mI8A2bOZfiF3tPXrYkw5DOcYtfbnbwolbjwGusgc3kjovtmiCR8yYElqj6H3uLzeFSNxSgcAA0mAQLkGJOTH4fR89xCWGUSrRaw9vDToaWIGGaY662nE0MA?width=1000=291=none!

(empty line)



was (Author: kspk):
Options for the new logo with APACHE included:

*Option 1*

!https://ske0hq-bn1305.files.1drv.com/y3mfuiXQm9OGG3-dQGy2hTzYQPu1Xdv0C7wAA2rAA0uoEHS08BQxlReQoL_sLPyy_1JNi04LFfrpEGzyhLUpNJXXfY8mkSDezepfzY2rfo_aw3sEYesjbb_DTdJP3xCmSZ4X66UrJM85HPIa4AoTxjl_Nbjzx4V6HQFVv64rmqPSYc?width=1000=267=none!

()

*Option 2*

!https://skevhq-bn1305.files.1drv.com/y3m3jGYvgeu2uIHc5C98M3dGA5-pUo_ZgNDCANBWJYEQqZeYdyGFwV1UWOIFrpD56FnUNAJkUJOywicSIG_nBdG6v1RvI3BGGkEBlnLcbH5Kz7QoU5j7gI6vghNkDD3HSTSaNDK2PVMqivI005IRdrqTJfduaImaVy4ZyTn_CaJMNY?width=1000=291=none!

(empty line)

*Option 3*

!https://skeuhq-bn1305.files.1drv.com/y3mwENxSi1zzJ6g0hX9wyZu-7wFj77cz6NWYKuhvFyn67Uo7boeqbqw4YPCP8DW05h8lQAEt4XDyC9c_yNspOkwuPnMqFeK_chXzjZBGVPAD7t1UP5iw7TtGmmMn70H1W7hjR-kByyJHvuA3Y4Gjbm6ZzQv8peMLxvggE6dUSVMIZc?width=1000=267=none!

(empty line)

*Option 4*

!https://ske1hq-bn1305.files.1drv.com/y3msqsFXMBqxWYj8kk_-_ShZb1spGcfIzuYD5ShOT4oQB-EMVE2_18GrQPS8rc8K4Gh4Zo6dP76dGkSfvEj7vNoAOU3FAe3HNcpJxl5MZPA8hOx-1aLus_UtzE62bviKkmp9-NcHY_eVWf4EmKtJ5aMcy5jiDk7tBUXRl27YUSvy8Q?width=1000=298=none!

(empty line)

*Option 5*

!https://r6eshq-bn1305.files.1drv.com/y3myRrrctLCNgHqbkjN85nDVzZUwKsGnN3mFMVlJf3uKnMFtMEzrkR4mI8A2bOZfiF3tPXrYkw5DOcYtfbnbwolbjwGusgc3kjovtmiCR8yYElqj6H3uLzeFSNxSgcAA0mAQLkGJOTH4fR89xCWGUSrRaw9vDToaWIGGaY662nE0MA?width=1000=291=none!

(empty line)


> Add "Apache" to Hadoop project logo
> ---
>
> Key: HADOOP-13184
> URL: https://issues.apache.org/jira/browse/HADOOP-13184
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Chris Douglas
>
> Many ASF projects include "Apache" in their logo. We should add it to Hadoop.






[jira] [Comment Edited] (HADOOP-13184) Add "Apache" to Hadoop project logo

2016-06-06 Thread Abhishek (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317736#comment-15317736
 ] 

Abhishek edited comment on HADOOP-13184 at 6/7/16 3:14 AM:
---

Options for the new logo with APACHE included:

*Option 1*

!https://ske0hq-bn1305.files.1drv.com/y3mfuiXQm9OGG3-dQGy2hTzYQPu1Xdv0C7wAA2rAA0uoEHS08BQxlReQoL_sLPyy_1JNi04LFfrpEGzyhLUpNJXXfY8mkSDezepfzY2rfo_aw3sEYesjbb_DTdJP3xCmSZ4X66UrJM85HPIa4AoTxjl_Nbjzx4V6HQFVv64rmqPSYc?width=1000=267=none!

\\
\\

*Option 2*

!https://skevhq-bn1305.files.1drv.com/y3m3jGYvgeu2uIHc5C98M3dGA5-pUo_ZgNDCANBWJYEQqZeYdyGFwV1UWOIFrpD56FnUNAJkUJOywicSIG_nBdG6v1RvI3BGGkEBlnLcbH5Kz7QoU5j7gI6vghNkDD3HSTSaNDK2PVMqivI005IRdrqTJfduaImaVy4ZyTn_CaJMNY?width=1000=291=none!

\\
\\

*Option 3*

!https://skeuhq-bn1305.files.1drv.com/y3mwENxSi1zzJ6g0hX9wyZu-7wFj77cz6NWYKuhvFyn67Uo7boeqbqw4YPCP8DW05h8lQAEt4XDyC9c_yNspOkwuPnMqFeK_chXzjZBGVPAD7t1UP5iw7TtGmmMn70H1W7hjR-kByyJHvuA3Y4Gjbm6ZzQv8peMLxvggE6dUSVMIZc?width=1000=267=none!

\\
\\

*Option 4*

!https://ske1hq-bn1305.files.1drv.com/y3msqsFXMBqxWYj8kk_-_ShZb1spGcfIzuYD5ShOT4oQB-EMVE2_18GrQPS8rc8K4Gh4Zo6dP76dGkSfvEj7vNoAOU3FAe3HNcpJxl5MZPA8hOx-1aLus_UtzE62bviKkmp9-NcHY_eVWf4EmKtJ5aMcy5jiDk7tBUXRl27YUSvy8Q?width=1000=298=none!

\\
\\

*Option 5*

!https://r6eshq-bn1305.files.1drv.com/y3myRrrctLCNgHqbkjN85nDVzZUwKsGnN3mFMVlJf3uKnMFtMEzrkR4mI8A2bOZfiF3tPXrYkw5DOcYtfbnbwolbjwGusgc3kjovtmiCR8yYElqj6H3uLzeFSNxSgcAA0mAQLkGJOTH4fR89xCWGUSrRaw9vDToaWIGGaY662nE0MA?width=1000=291=none!

\\



was (Author: kspk):
Options for the new logo with APACHE included:

*Option 1*

!https://ske0hq-bn1305.files.1drv.com/y3mfuiXQm9OGG3-dQGy2hTzYQPu1Xdv0C7wAA2rAA0uoEHS08BQxlReQoL_sLPyy_1JNi04LFfrpEGzyhLUpNJXXfY8mkSDezepfzY2rfo_aw3sEYesjbb_DTdJP3xCmSZ4X66UrJM85HPIa4AoTxjl_Nbjzx4V6HQFVv64rmqPSYc?width=1000=267=none!

\\

*Option 2*

!https://skevhq-bn1305.files.1drv.com/y3m3jGYvgeu2uIHc5C98M3dGA5-pUo_ZgNDCANBWJYEQqZeYdyGFwV1UWOIFrpD56FnUNAJkUJOywicSIG_nBdG6v1RvI3BGGkEBlnLcbH5Kz7QoU5j7gI6vghNkDD3HSTSaNDK2PVMqivI005IRdrqTJfduaImaVy4ZyTn_CaJMNY?width=1000=291=none!

(empty line)

*Option 3*

!https://skeuhq-bn1305.files.1drv.com/y3mwENxSi1zzJ6g0hX9wyZu-7wFj77cz6NWYKuhvFyn67Uo7boeqbqw4YPCP8DW05h8lQAEt4XDyC9c_yNspOkwuPnMqFeK_chXzjZBGVPAD7t1UP5iw7TtGmmMn70H1W7hjR-kByyJHvuA3Y4Gjbm6ZzQv8peMLxvggE6dUSVMIZc?width=1000=267=none!

(empty line)

*Option 4*

!https://ske1hq-bn1305.files.1drv.com/y3msqsFXMBqxWYj8kk_-_ShZb1spGcfIzuYD5ShOT4oQB-EMVE2_18GrQPS8rc8K4Gh4Zo6dP76dGkSfvEj7vNoAOU3FAe3HNcpJxl5MZPA8hOx-1aLus_UtzE62bviKkmp9-NcHY_eVWf4EmKtJ5aMcy5jiDk7tBUXRl27YUSvy8Q?width=1000=298=none!

(empty line)

*Option 5*

!https://r6eshq-bn1305.files.1drv.com/y3myRrrctLCNgHqbkjN85nDVzZUwKsGnN3mFMVlJf3uKnMFtMEzrkR4mI8A2bOZfiF3tPXrYkw5DOcYtfbnbwolbjwGusgc3kjovtmiCR8yYElqj6H3uLzeFSNxSgcAA0mAQLkGJOTH4fR89xCWGUSrRaw9vDToaWIGGaY662nE0MA?width=1000=291=none!

(empty line)


> Add "Apache" to Hadoop project logo
> ---
>
> Key: HADOOP-13184
> URL: https://issues.apache.org/jira/browse/HADOOP-13184
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Chris Douglas
>
> Many ASF projects include "Apache" in their logo. We should add it to Hadoop.






[jira] [Commented] (HADOOP-12910) Add new FileSystem API to support asynchronous method calls

2016-06-06 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317739#comment-15317739
 ] 

Tsz Wo Nicholas Sze commented on HADOOP-12910:
--

It seems we all agree that we should use CompletableFuture (a Java 8 class which implements both Future and CompletionStage) with callback support as a return type in trunk.  Since we are talking about API, we should actually talk about interfaces instead of classes.  Therefore, we should return a sub-interface of Future and CompletionStage.

For branch-2, there are two possible ways to use Deferred (or ListenableFuture):
# Use it directly (i.e. import the external jar and use com.stumbleupon.async.Deferred in the code).  Then we have an external dependency.
# Copy & paste Deferred into Hadoop, say org.apache.hadoop.util.concurrent.Deferred.  Then we get the Deferred functionality, but our Deferred is incompatible with the com.stumbleupon.async.Deferred used in other projects.  Also, it may be harder to support CompletableFuture in trunk since we would have to support both Deferred and CompletableFuture.

Both choices seem undesirable. Therefore I suggest creating our own interface to support callbacks for branch-2, as below.
{code}
  public interface Callback<V> {
    void processReturnValue(V returnValue);

    void handleException(Exception exception);
  }

  // branch-2 return type
  public interface FutureWithCallback<V> extends Future<V> {
    void addCallback(Callback<V> callback);
  }
{code}
For trunk, we have
{code}
  // trunk return type
  public interface CompletableFutureWithCallback<V>
      extends FutureWithCallback<V>, CompletionStage<V> {
  }
{code}
Any comments?
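(Editorial sketch.) One way the proposed branch-2 callback interface could be backed by CompletableFuture on trunk — Java 8 is used for brevity, and this is an illustration, not the actual patch:

```java
import java.util.concurrent.CompletableFuture;

public class CallbackDemo {
    public interface Callback<V> {
        void processReturnValue(V returnValue);
        void handleException(Exception exception);
    }

    // Sketch: a CompletableFuture that also offers the proposed
    // addCallback() method, bridging the callback and future styles.
    public static class SimpleFutureWithCallback<V> extends CompletableFuture<V> {
        public void addCallback(Callback<V> callback) {
            whenComplete((value, error) -> {
                if (error != null) {
                    callback.handleException(new Exception(error));
                } else {
                    callback.processReturnValue(value);
                }
            });
        }
    }

    public static void main(String[] args) {
        SimpleFutureWithCallback<Boolean> f = new SimpleFutureWithCallback<>();
        f.addCallback(new Callback<Boolean>() {
            public void processReturnValue(Boolean v) {
                System.out.println("renamed: " + v);
            }
            public void handleException(Exception e) {
                e.printStackTrace();
            }
        });
        f.complete(true);  // prints: renamed: true
    }
}
```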

> Add new FileSystem API to support asynchronous method calls
> ---
>
> Key: HADOOP-12910
> URL: https://issues.apache.org/jira/browse/HADOOP-12910
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
> Attachments: HADOOP-12910-HDFS-9924.000.patch, 
> HADOOP-12910-HDFS-9924.001.patch, HADOOP-12910-HDFS-9924.002.patch
>
>
> Add a new API, namely FutureFileSystem (or AsynchronousFileSystem, if it is a 
> better name).  All the APIs in FutureFileSystem are the same as FileSystem 
> except that the return type is wrapped by Future, e.g.
> {code}
>   //FileSystem
>   public boolean rename(Path src, Path dst) throws IOException;
>   //FutureFileSystem
>   public Future<Boolean> rename(Path src, Path dst) throws IOException;
> {code}
> Note that FutureFileSystem does not extend FileSystem.






[jira] [Comment Edited] (HADOOP-13184) Add "Apache" to Hadoop project logo

2016-06-06 Thread Abhishek (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317736#comment-15317736
 ] 

Abhishek edited comment on HADOOP-13184 at 6/7/16 3:12 AM:
---

Options for the new logo with APACHE included:

*Option 1*

!https://ske0hq-bn1305.files.1drv.com/y3mfuiXQm9OGG3-dQGy2hTzYQPu1Xdv0C7wAA2rAA0uoEHS08BQxlReQoL_sLPyy_1JNi04LFfrpEGzyhLUpNJXXfY8mkSDezepfzY2rfo_aw3sEYesjbb_DTdJP3xCmSZ4X66UrJM85HPIa4AoTxjl_Nbjzx4V6HQFVv64rmqPSYc?width=1000=267=none!

(empty line)

*Option 2*

!https://skevhq-bn1305.files.1drv.com/y3m3jGYvgeu2uIHc5C98M3dGA5-pUo_ZgNDCANBWJYEQqZeYdyGFwV1UWOIFrpD56FnUNAJkUJOywicSIG_nBdG6v1RvI3BGGkEBlnLcbH5Kz7QoU5j7gI6vghNkDD3HSTSaNDK2PVMqivI005IRdrqTJfduaImaVy4ZyTn_CaJMNY?width=1000=291=none!

(empty line)

*Option 3*

!https://skeuhq-bn1305.files.1drv.com/y3mwENxSi1zzJ6g0hX9wyZu-7wFj77cz6NWYKuhvFyn67Uo7boeqbqw4YPCP8DW05h8lQAEt4XDyC9c_yNspOkwuPnMqFeK_chXzjZBGVPAD7t1UP5iw7TtGmmMn70H1W7hjR-kByyJHvuA3Y4Gjbm6ZzQv8peMLxvggE6dUSVMIZc?width=1000=267=none!

(empty line)

*Option 4*

!https://ske1hq-bn1305.files.1drv.com/y3msqsFXMBqxWYj8kk_-_ShZb1spGcfIzuYD5ShOT4oQB-EMVE2_18GrQPS8rc8K4Gh4Zo6dP76dGkSfvEj7vNoAOU3FAe3HNcpJxl5MZPA8hOx-1aLus_UtzE62bviKkmp9-NcHY_eVWf4EmKtJ5aMcy5jiDk7tBUXRl27YUSvy8Q?width=1000=298=none!

(empty line)

*Option 5*

!https://r6eshq-bn1305.files.1drv.com/y3myRrrctLCNgHqbkjN85nDVzZUwKsGnN3mFMVlJf3uKnMFtMEzrkR4mI8A2bOZfiF3tPXrYkw5DOcYtfbnbwolbjwGusgc3kjovtmiCR8yYElqj6H3uLzeFSNxSgcAA0mAQLkGJOTH4fR89xCWGUSrRaw9vDToaWIGGaY662nE0MA?width=1000=291=none!

(empty line)



was (Author: kspk):
Options for the new logo with APACHE included:

*Option 1*

!https://ske0hq-bn1305.files.1drv.com/y3mfuiXQm9OGG3-dQGy2hTzYQPu1Xdv0C7wAA2rAA0uoEHS08BQxlReQoL_sLPyy_1JNi04LFfrpEGzyhLUpNJXXfY8mkSDezepfzY2rfo_aw3sEYesjbb_DTdJP3xCmSZ4X66UrJM85HPIa4AoTxjl_Nbjzx4V6HQFVv64rmqPSYc?width=1000=267=none!


*Option 2*

!https://skevhq-bn1305.files.1drv.com/y3m3jGYvgeu2uIHc5C98M3dGA5-pUo_ZgNDCANBWJYEQqZeYdyGFwV1UWOIFrpD56FnUNAJkUJOywicSIG_nBdG6v1RvI3BGGkEBlnLcbH5Kz7QoU5j7gI6vghNkDD3HSTSaNDK2PVMqivI005IRdrqTJfduaImaVy4ZyTn_CaJMNY?width=1000=291=none!


*Option 3*

!https://skeuhq-bn1305.files.1drv.com/y3mwENxSi1zzJ6g0hX9wyZu-7wFj77cz6NWYKuhvFyn67Uo7boeqbqw4YPCP8DW05h8lQAEt4XDyC9c_yNspOkwuPnMqFeK_chXzjZBGVPAD7t1UP5iw7TtGmmMn70H1W7hjR-kByyJHvuA3Y4Gjbm6ZzQv8peMLxvggE6dUSVMIZc?width=1000=267=none!


*Option 4*

!https://ske1hq-bn1305.files.1drv.com/y3msqsFXMBqxWYj8kk_-_ShZb1spGcfIzuYD5ShOT4oQB-EMVE2_18GrQPS8rc8K4Gh4Zo6dP76dGkSfvEj7vNoAOU3FAe3HNcpJxl5MZPA8hOx-1aLus_UtzE62bviKkmp9-NcHY_eVWf4EmKtJ5aMcy5jiDk7tBUXRl27YUSvy8Q?width=1000=298=none!


*Option 5*

!https://r6eshq-bn1305.files.1drv.com/y3myRrrctLCNgHqbkjN85nDVzZUwKsGnN3mFMVlJf3uKnMFtMEzrkR4mI8A2bOZfiF3tPXrYkw5DOcYtfbnbwolbjwGusgc3kjovtmiCR8yYElqj6H3uLzeFSNxSgcAA0mAQLkGJOTH4fR89xCWGUSrRaw9vDToaWIGGaY662nE0MA?width=1000=291=none!




> Add "Apache" to Hadoop project logo
> ---
>
> Key: HADOOP-13184
> URL: https://issues.apache.org/jira/browse/HADOOP-13184
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Chris Douglas
>
> Many ASF projects include "Apache" in their logo. We should add it to Hadoop.






[jira] [Comment Edited] (HADOOP-13184) Add "Apache" to Hadoop project logo

2016-06-06 Thread Abhishek (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317736#comment-15317736
 ] 

Abhishek edited comment on HADOOP-13184 at 6/7/16 3:13 AM:
---

Options for the new logo with APACHE included:

*Option 1*

!https://ske0hq-bn1305.files.1drv.com/y3mfuiXQm9OGG3-dQGy2hTzYQPu1Xdv0C7wAA2rAA0uoEHS08BQxlReQoL_sLPyy_1JNi04LFfrpEGzyhLUpNJXXfY8mkSDezepfzY2rfo_aw3sEYesjbb_DTdJP3xCmSZ4X66UrJM85HPIa4AoTxjl_Nbjzx4V6HQFVv64rmqPSYc?width=1000=267=none!

()

*Option 2*

!https://skevhq-bn1305.files.1drv.com/y3m3jGYvgeu2uIHc5C98M3dGA5-pUo_ZgNDCANBWJYEQqZeYdyGFwV1UWOIFrpD56FnUNAJkUJOywicSIG_nBdG6v1RvI3BGGkEBlnLcbH5Kz7QoU5j7gI6vghNkDD3HSTSaNDK2PVMqivI005IRdrqTJfduaImaVy4ZyTn_CaJMNY?width=1000=291=none!

(empty line)

*Option 3*

!https://skeuhq-bn1305.files.1drv.com/y3mwENxSi1zzJ6g0hX9wyZu-7wFj77cz6NWYKuhvFyn67Uo7boeqbqw4YPCP8DW05h8lQAEt4XDyC9c_yNspOkwuPnMqFeK_chXzjZBGVPAD7t1UP5iw7TtGmmMn70H1W7hjR-kByyJHvuA3Y4Gjbm6ZzQv8peMLxvggE6dUSVMIZc?width=1000=267=none!

(empty line)

*Option 4*

!https://ske1hq-bn1305.files.1drv.com/y3msqsFXMBqxWYj8kk_-_ShZb1spGcfIzuYD5ShOT4oQB-EMVE2_18GrQPS8rc8K4Gh4Zo6dP76dGkSfvEj7vNoAOU3FAe3HNcpJxl5MZPA8hOx-1aLus_UtzE62bviKkmp9-NcHY_eVWf4EmKtJ5aMcy5jiDk7tBUXRl27YUSvy8Q?width=1000=298=none!

(empty line)

*Option 5*

!https://r6eshq-bn1305.files.1drv.com/y3myRrrctLCNgHqbkjN85nDVzZUwKsGnN3mFMVlJf3uKnMFtMEzrkR4mI8A2bOZfiF3tPXrYkw5DOcYtfbnbwolbjwGusgc3kjovtmiCR8yYElqj6H3uLzeFSNxSgcAA0mAQLkGJOTH4fR89xCWGUSrRaw9vDToaWIGGaY662nE0MA?width=1000=291=none!

(empty line)



was (Author: kspk):
Options for the new logo with APACHE included:

*Option 1*

!https://ske0hq-bn1305.files.1drv.com/y3mfuiXQm9OGG3-dQGy2hTzYQPu1Xdv0C7wAA2rAA0uoEHS08BQxlReQoL_sLPyy_1JNi04LFfrpEGzyhLUpNJXXfY8mkSDezepfzY2rfo_aw3sEYesjbb_DTdJP3xCmSZ4X66UrJM85HPIa4AoTxjl_Nbjzx4V6HQFVv64rmqPSYc?width=1000=267=none!

(empty line)

*Option 2*

!https://skevhq-bn1305.files.1drv.com/y3m3jGYvgeu2uIHc5C98M3dGA5-pUo_ZgNDCANBWJYEQqZeYdyGFwV1UWOIFrpD56FnUNAJkUJOywicSIG_nBdG6v1RvI3BGGkEBlnLcbH5Kz7QoU5j7gI6vghNkDD3HSTSaNDK2PVMqivI005IRdrqTJfduaImaVy4ZyTn_CaJMNY?width=1000=291=none!

(empty line)

*Option 3*

!https://skeuhq-bn1305.files.1drv.com/y3mwENxSi1zzJ6g0hX9wyZu-7wFj77cz6NWYKuhvFyn67Uo7boeqbqw4YPCP8DW05h8lQAEt4XDyC9c_yNspOkwuPnMqFeK_chXzjZBGVPAD7t1UP5iw7TtGmmMn70H1W7hjR-kByyJHvuA3Y4Gjbm6ZzQv8peMLxvggE6dUSVMIZc?width=1000=267=none!

(empty line)

*Option 4*

!https://ske1hq-bn1305.files.1drv.com/y3msqsFXMBqxWYj8kk_-_ShZb1spGcfIzuYD5ShOT4oQB-EMVE2_18GrQPS8rc8K4Gh4Zo6dP76dGkSfvEj7vNoAOU3FAe3HNcpJxl5MZPA8hOx-1aLus_UtzE62bviKkmp9-NcHY_eVWf4EmKtJ5aMcy5jiDk7tBUXRl27YUSvy8Q?width=1000=298=none!

(empty line)

*Option 5*

!https://r6eshq-bn1305.files.1drv.com/y3myRrrctLCNgHqbkjN85nDVzZUwKsGnN3mFMVlJf3uKnMFtMEzrkR4mI8A2bOZfiF3tPXrYkw5DOcYtfbnbwolbjwGusgc3kjovtmiCR8yYElqj6H3uLzeFSNxSgcAA0mAQLkGJOTH4fR89xCWGUSrRaw9vDToaWIGGaY662nE0MA?width=1000=291=none!

(empty line)


> Add "Apache" to Hadoop project logo
> ---
>
> Key: HADOOP-13184
> URL: https://issues.apache.org/jira/browse/HADOOP-13184
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Chris Douglas
>
> Many ASF projects include "Apache" in their logo. We should add it to Hadoop.






[jira] [Comment Edited] (HADOOP-13184) Add "Apache" to Hadoop project logo

2016-06-06 Thread Abhishek (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317736#comment-15317736
 ] 

Abhishek edited comment on HADOOP-13184 at 6/7/16 3:11 AM:
---

Options for the new logo with APACHE included:

*Option 1*

!https://ske0hq-bn1305.files.1drv.com/y3mfuiXQm9OGG3-dQGy2hTzYQPu1Xdv0C7wAA2rAA0uoEHS08BQxlReQoL_sLPyy_1JNi04LFfrpEGzyhLUpNJXXfY8mkSDezepfzY2rfo_aw3sEYesjbb_DTdJP3xCmSZ4X66UrJM85HPIa4AoTxjl_Nbjzx4V6HQFVv64rmqPSYc?width=1000=267=none!


*Option 2*

!https://skevhq-bn1305.files.1drv.com/y3m3jGYvgeu2uIHc5C98M3dGA5-pUo_ZgNDCANBWJYEQqZeYdyGFwV1UWOIFrpD56FnUNAJkUJOywicSIG_nBdG6v1RvI3BGGkEBlnLcbH5Kz7QoU5j7gI6vghNkDD3HSTSaNDK2PVMqivI005IRdrqTJfduaImaVy4ZyTn_CaJMNY?width=1000=291=none!


*Option 3*

!https://skeuhq-bn1305.files.1drv.com/y3mwENxSi1zzJ6g0hX9wyZu-7wFj77cz6NWYKuhvFyn67Uo7boeqbqw4YPCP8DW05h8lQAEt4XDyC9c_yNspOkwuPnMqFeK_chXzjZBGVPAD7t1UP5iw7TtGmmMn70H1W7hjR-kByyJHvuA3Y4Gjbm6ZzQv8peMLxvggE6dUSVMIZc?width=1000=267=none!


*Option 4*

!https://ske1hq-bn1305.files.1drv.com/y3msqsFXMBqxWYj8kk_-_ShZb1spGcfIzuYD5ShOT4oQB-EMVE2_18GrQPS8rc8K4Gh4Zo6dP76dGkSfvEj7vNoAOU3FAe3HNcpJxl5MZPA8hOx-1aLus_UtzE62bviKkmp9-NcHY_eVWf4EmKtJ5aMcy5jiDk7tBUXRl27YUSvy8Q?width=1000=298=none!


*Option 5*

!https://r6eshq-bn1305.files.1drv.com/y3myRrrctLCNgHqbkjN85nDVzZUwKsGnN3mFMVlJf3uKnMFtMEzrkR4mI8A2bOZfiF3tPXrYkw5DOcYtfbnbwolbjwGusgc3kjovtmiCR8yYElqj6H3uLzeFSNxSgcAA0mAQLkGJOTH4fR89xCWGUSrRaw9vDToaWIGGaY662nE0MA?width=1000=291=none!





was (Author: kspk):
Options for the new logo with APACHE included:

*strong*Option 1*strong*

!https://ske0hq-bn1305.files.1drv.com/y3mfuiXQm9OGG3-dQGy2hTzYQPu1Xdv0C7wAA2rAA0uoEHS08BQxlReQoL_sLPyy_1JNi04LFfrpEGzyhLUpNJXXfY8mkSDezepfzY2rfo_aw3sEYesjbb_DTdJP3xCmSZ4X66UrJM85HPIa4AoTxjl_Nbjzx4V6HQFVv64rmqPSYc?width=1000=267=none!


*strong*Option 2*strong*

!https://skevhq-bn1305.files.1drv.com/y3m3jGYvgeu2uIHc5C98M3dGA5-pUo_ZgNDCANBWJYEQqZeYdyGFwV1UWOIFrpD56FnUNAJkUJOywicSIG_nBdG6v1RvI3BGGkEBlnLcbH5Kz7QoU5j7gI6vghNkDD3HSTSaNDK2PVMqivI005IRdrqTJfduaImaVy4ZyTn_CaJMNY?width=1000=291=none!


*strong*Option 3*strong*

!https://skeuhq-bn1305.files.1drv.com/y3mwENxSi1zzJ6g0hX9wyZu-7wFj77cz6NWYKuhvFyn67Uo7boeqbqw4YPCP8DW05h8lQAEt4XDyC9c_yNspOkwuPnMqFeK_chXzjZBGVPAD7t1UP5iw7TtGmmMn70H1W7hjR-kByyJHvuA3Y4Gjbm6ZzQv8peMLxvggE6dUSVMIZc?width=1000=267=none!


*strong*Option 4*strong*

!https://ske1hq-bn1305.files.1drv.com/y3msqsFXMBqxWYj8kk_-_ShZb1spGcfIzuYD5ShOT4oQB-EMVE2_18GrQPS8rc8K4Gh4Zo6dP76dGkSfvEj7vNoAOU3FAe3HNcpJxl5MZPA8hOx-1aLus_UtzE62bviKkmp9-NcHY_eVWf4EmKtJ5aMcy5jiDk7tBUXRl27YUSvy8Q?width=1000=298=none!


*strong*Option 5*strong*

!https://r6eshq-bn1305.files.1drv.com/y3myRrrctLCNgHqbkjN85nDVzZUwKsGnN3mFMVlJf3uKnMFtMEzrkR4mI8A2bOZfiF3tPXrYkw5DOcYtfbnbwolbjwGusgc3kjovtmiCR8yYElqj6H3uLzeFSNxSgcAA0mAQLkGJOTH4fR89xCWGUSrRaw9vDToaWIGGaY662nE0MA?width=1000=291=none!




> Add "Apache" to Hadoop project logo
> ---
>
> Key: HADOOP-13184
> URL: https://issues.apache.org/jira/browse/HADOOP-13184
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Chris Douglas
>
> Many ASF projects include "Apache" in their logo. We should add it to Hadoop.






[jira] [Commented] (HADOOP-13184) Add "Apache" to Hadoop project logo

2016-06-06 Thread Abhishek (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317736#comment-15317736
 ] 

Abhishek commented on HADOOP-13184:
---

Options for the new logo with APACHE included:

*strong*Option 1*strong*

!https://ske0hq-bn1305.files.1drv.com/y3mfuiXQm9OGG3-dQGy2hTzYQPu1Xdv0C7wAA2rAA0uoEHS08BQxlReQoL_sLPyy_1JNi04LFfrpEGzyhLUpNJXXfY8mkSDezepfzY2rfo_aw3sEYesjbb_DTdJP3xCmSZ4X66UrJM85HPIa4AoTxjl_Nbjzx4V6HQFVv64rmqPSYc?width=1000=267=none!


*strong*Option 2*strong*

!https://skevhq-bn1305.files.1drv.com/y3m3jGYvgeu2uIHc5C98M3dGA5-pUo_ZgNDCANBWJYEQqZeYdyGFwV1UWOIFrpD56FnUNAJkUJOywicSIG_nBdG6v1RvI3BGGkEBlnLcbH5Kz7QoU5j7gI6vghNkDD3HSTSaNDK2PVMqivI005IRdrqTJfduaImaVy4ZyTn_CaJMNY?width=1000=291=none!


*strong*Option 3*strong*

!https://skeuhq-bn1305.files.1drv.com/y3mwENxSi1zzJ6g0hX9wyZu-7wFj77cz6NWYKuhvFyn67Uo7boeqbqw4YPCP8DW05h8lQAEt4XDyC9c_yNspOkwuPnMqFeK_chXzjZBGVPAD7t1UP5iw7TtGmmMn70H1W7hjR-kByyJHvuA3Y4Gjbm6ZzQv8peMLxvggE6dUSVMIZc?width=1000=267=none!


*strong*Option 4*strong*

!https://ske1hq-bn1305.files.1drv.com/y3msqsFXMBqxWYj8kk_-_ShZb1spGcfIzuYD5ShOT4oQB-EMVE2_18GrQPS8rc8K4Gh4Zo6dP76dGkSfvEj7vNoAOU3FAe3HNcpJxl5MZPA8hOx-1aLus_UtzE62bviKkmp9-NcHY_eVWf4EmKtJ5aMcy5jiDk7tBUXRl27YUSvy8Q?width=1000=298=none!


*strong*Option 5*strong*

!https://r6eshq-bn1305.files.1drv.com/y3myRrrctLCNgHqbkjN85nDVzZUwKsGnN3mFMVlJf3uKnMFtMEzrkR4mI8A2bOZfiF3tPXrYkw5DOcYtfbnbwolbjwGusgc3kjovtmiCR8yYElqj6H3uLzeFSNxSgcAA0mAQLkGJOTH4fR89xCWGUSrRaw9vDToaWIGGaY662nE0MA?width=1000=291=none!




> Add "Apache" to Hadoop project logo
> ---
>
> Key: HADOOP-13184
> URL: https://issues.apache.org/jira/browse/HADOOP-13184
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Chris Douglas
>
> Many ASF projects include "Apache" in their logo. We should add it to Hadoop.






[jira] [Commented] (HADOOP-13079) Add dfs -ls -q to print ? instead of non-printable characters

2016-06-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317731#comment-15317731
 ] 

Hadoop QA commented on HADOOP-13079:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 27s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 4 new or modified test files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 47s{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 38m 12s{color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.metrics2.impl.TestGangliaMetrics |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12808538/HADOOP-13079.004.patch |
| JIRA Issue | HADOOP-13079 |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml |
| uname | Linux 5f385b64f5d4 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 6de9213 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/9674/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/9674/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/9674/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org |


This message was automatically generated.



> Add dfs -ls -q to print ? instead of non-printable characters
> -
>
> Key: HADOOP-13079
> URL: 

[jira] [Commented] (HADOOP-13240) TestAclCommands.testSetfaclValidations fail

2016-06-06 Thread linbao111 (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317704#comment-15317704
 ] 

linbao111 commented on HADOOP-13240:


I ran the test only on my Hadoop 2.4.1, and I am not sure whether it will fail on trunk or the 2.7 
version.

> TestAclCommands.testSetfaclValidations fail
> ---
>
> Key: HADOOP-13240
> URL: https://issues.apache.org/jira/browse/HADOOP-13240
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.1
> Environment: hadoop 2.4.1,as6.5
>Reporter: linbao111
>Assignee: John Zhuge
>Priority: Minor
>
> mvn test -Djava.net.preferIPv4Stack=true -Dlog4j.rootLogger=DEBUG,console 
> -Dtest=TestAclCommands#testSetfaclValidations failed with following message:
> ---
> Test set: org.apache.hadoop.fs.shell.TestAclCommands
> ---
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.599 sec <<< 
> FAILURE! - in org.apache.hadoop.fs.shell.TestAclCommands
> testSetfaclValidations(org.apache.hadoop.fs.shell.TestAclCommands)  Time 
> elapsed: 0.534 sec  <<< FAILURE!
> java.lang.AssertionError: setfacl should fail ACL spec missing
> at org.junit.Assert.fail(Assert.java:93)
> at org.junit.Assert.assertTrue(Assert.java:43)
> at org.junit.Assert.assertFalse(Assert.java:68)
> at 
> org.apache.hadoop.fs.shell.TestAclCommands.testSetfaclValidations(TestAclCommands.java:81)
> I notice from HADOOP-10277 that 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntry.java
>  changed.
> Should 
> hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java
>  be changed to:
> diff --git 
> a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java
>  
> b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java
> index b14cd37..463bfcd
> --- 
> a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java
> +++ 
> b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java
> @@ -80,7 +80,7 @@ public void testSetfaclValidations() throws Exception {
>  "/path" }));
>  assertFalse("setfacl should fail ACL spec missing",
>  0 == runCommand(new String[] { "-setfacl", "-m",
> -"", "/path" }));
> +":", "/path" }));
>}
>  
>@Test
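For illustration only, here is a minimal, self-contained sketch — NOT Hadoop's actual AclEntry.parseAclSpec(), and the entry grammar is deliberately simplified — of why the two test arguments behave differently: an empty spec can parse to zero entries (so the command need not fail), while ":" is a malformed entry and is rejected.

```java
import java.util.ArrayList;
import java.util.List;

public class AclSpecSketch {
    // Hypothetical, simplified ACL-spec parser for illustration only.
    static List<String> parseAclSpec(String spec) {
        List<String> entries = new ArrayList<>();
        if (spec.isEmpty()) {
            return entries;                 // empty spec: no entries, no error
        }
        for (String entry : spec.split(",")) {
            // A valid entry needs at least a type part and a name/permission part.
            if (entry.split(":").length < 2) {
                throw new IllegalArgumentException("Invalid ACL entry: " + entry);
            }
            entries.add(entry);
        }
        return entries;
    }

    public static void main(String[] args) {
        System.out.println("\"\" -> " + parseAclSpec("").size() + " entries");
        try {
            parseAclSpec(":");
        } catch (IllegalArgumentException e) {
            System.out.println("\":\" -> " + e.getMessage());
        }
    }
}
```

Under this simplified grammar, passing "" silently yields an empty entry list, which is why the test needs ":" to provoke the failure it asserts.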






[jira] [Updated] (HADOOP-13240) TestAclCommands.testSetfaclValidations fail

2016-06-06 Thread linbao111 (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

linbao111 updated HADOOP-13240:
---
Affects Version/s: (was: 2.7.2)

> TestAclCommands.testSetfaclValidations fail
> ---
>
> Key: HADOOP-13240
> URL: https://issues.apache.org/jira/browse/HADOOP-13240
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.1
> Environment: hadoop 2.4.1,as6.5
>Reporter: linbao111
>Assignee: John Zhuge
>Priority: Minor
>
> mvn test -Djava.net.preferIPv4Stack=true -Dlog4j.rootLogger=DEBUG,console 
> -Dtest=TestAclCommands#testSetfaclValidations failed with following message:
> ---
> Test set: org.apache.hadoop.fs.shell.TestAclCommands
> ---
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.599 sec <<< 
> FAILURE! - in org.apache.hadoop.fs.shell.TestAclCommands
> testSetfaclValidations(org.apache.hadoop.fs.shell.TestAclCommands)  Time 
> elapsed: 0.534 sec  <<< FAILURE!
> java.lang.AssertionError: setfacl should fail ACL spec missing
> at org.junit.Assert.fail(Assert.java:93)
> at org.junit.Assert.assertTrue(Assert.java:43)
> at org.junit.Assert.assertFalse(Assert.java:68)
> at 
> org.apache.hadoop.fs.shell.TestAclCommands.testSetfaclValidations(TestAclCommands.java:81)
> I notice from HADOOP-10277 that 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntry.java
>  changed.
> Should 
> hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java
>  be changed to:
> diff --git 
> a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java
>  
> b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java
> index b14cd37..463bfcd
> --- 
> a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java
> +++ 
> b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java
> @@ -80,7 +80,7 @@ public void testSetfaclValidations() throws Exception {
>  "/path" }));
>  assertFalse("setfacl should fail ACL spec missing",
>  0 == runCommand(new String[] { "-setfacl", "-m",
> -"", "/path" }));
> +":", "/path" }));
>}
>  
>@Test






[jira] [Updated] (HADOOP-13240) TestAclCommands.testSetfaclValidations fail

2016-06-06 Thread linbao111 (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

linbao111 updated HADOOP-13240:
---
Description: 
mvn test -Djava.net.preferIPv4Stack=true -Dlog4j.rootLogger=DEBUG,console 
-Dtest=TestAclCommands#testSetfaclValidations failed with following message:
---
Test set: org.apache.hadoop.fs.shell.TestAclCommands
---
Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.599 sec <<< 
FAILURE! - in org.apache.hadoop.fs.shell.TestAclCommands
testSetfaclValidations(org.apache.hadoop.fs.shell.TestAclCommands)  Time 
elapsed: 0.534 sec  <<< FAILURE!
java.lang.AssertionError: setfacl should fail ACL spec missing
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertFalse(Assert.java:68)
at 
org.apache.hadoop.fs.shell.TestAclCommands.testSetfaclValidations(TestAclCommands.java:81)

I notice from HADOOP-10277 that 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntry.java 
changed.

Should 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java 
be changed to:
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java
 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java
index b14cd37..463bfcd
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java
@@ -80,7 +80,7 @@ public void testSetfaclValidations() throws Exception {
 "/path" }));
 assertFalse("setfacl should fail ACL spec missing",
 0 == runCommand(new String[] { "-setfacl", "-m",
-"", "/path" }));
+":", "/path" }));
   }
 
   @Test

  was:
mvn test -Djava.net.preferIPv4Stack=true -Dlog4j.rootLogger=DEBUG,console 
-Dtest=TestAclCommands#testSetfaclValidations failed with following message:
---
Test set: org.apache.hadoop.fs.shell.TestAclCommands
---
Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.599 sec <<< 
FAILURE! - in org.apache.hadoop.fs.shell.TestAclCommands
testSetfaclValidations(org.apache.hadoop.fs.shell.TestAclCommands)  Time 
elapsed: 0.534 sec  <<< FAILURE!
java.lang.AssertionError: setfacl should fail ACL spec missing
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertFalse(Assert.java:68)
at 
org.apache.hadoop.fs.shell.TestAclCommands.testSetfaclValidations(TestAclCommands.java:81)

i notice from 
HADOOP-10277,hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntry.java
 code changed

should 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.javabe
 changed to:
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java
 b/hadoop-common-project/hadoop-common/src/test/java/org/
index b14cd37..463bfcd
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java
@@ -80,7 +80,7 @@ public void testSetfaclValidations() throws Exception {
 "/path" }));
 assertFalse("setfacl should fail ACL spec missing",
 0 == runCommand(new String[] { "-setfacl", "-m",
-"", "/path" }));
+":", "/path" }));
   }
 
   @Test


> TestAclCommands.testSetfaclValidations fail
> ---
>
> Key: HADOOP-13240
> URL: https://issues.apache.org/jira/browse/HADOOP-13240
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.1, 2.7.2
> Environment: hadoop 2.4.1,as6.5
>Reporter: linbao111
>Assignee: John Zhuge
>Priority: Minor
>
> mvn test -Djava.net.preferIPv4Stack=true -Dlog4j.rootLogger=DEBUG,console 
> -Dtest=TestAclCommands#testSetfaclValidations failed with following message:
> ---
> Test set: org.apache.hadoop.fs.shell.TestAclCommands
> ---
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.599 sec <<< 
> FAILURE! - in 

[jira] [Updated] (HADOOP-13079) Add dfs -ls -q to print ? instead of non-printable characters

2016-06-06 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-13079:

Attachment: HADOOP-13079.004.patch

Patch 004:
* Rename "questionMark" to "hideNonPrintable"

> Add dfs -ls -q to print ? instead of non-printable characters
> -
>
> Key: HADOOP-13079
> URL: https://issues.apache.org/jira/browse/HADOOP-13079
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
> Attachments: HADOOP-13079.001.patch, HADOOP-13079.002.patch, 
> HADOOP-13079.003.patch, HADOOP-13079.004.patch
>
>
> Add option {{-q}} to "hdfs dfs -ls" to print non-printable characters as "?". 
> Non-printable characters are defined by 
> [isprint(3)|http://linux.die.net/man/3/isprint] according to the current 
> locale.
> Default to {{-q}} behavior on terminal; otherwise, print raw characters. See 
> the difference in these 2 command lines:
> * {{hadoop fs -ls /dir}}
> * {{hadoop fs -ls /dir | od -c}}
> In C, {{isatty(STDOUT_FILENO)}} is used to find out whether the output is a 
> terminal. Since Java doesn't have {{isatty}}, I will use JNI to call C 
> {{isatty()}} because the closest test {{System.console() == null}} does not 
> work in some cases.
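As a side note on the terminal-detection point in the description: the "closest test" mentioned there can be sketched as follows. This is illustrative only — as the description says, the heuristic is weaker than a real isatty(3), because System.console() is null whenever the JVM has no attached interactive console (for instance when either standard stream is redirected).

```java
public class TtyCheck {
    public static void main(String[] args) {
        // System.console() returns a Console only when the JVM is attached to
        // an interactive terminal; it is null when output is piped or
        // redirected -- a rough, pure-Java stand-in for isatty(STDOUT_FILENO).
        boolean interactive = System.console() != null;
        System.out.println(interactive
            ? "stdout looks like a terminal: default to -q behavior"
            : "stdout is piped/redirected: print raw characters");
    }
}
```

This is exactly the gap the JNI approach in the description is meant to close: isatty() can test stdout alone, while System.console() cannot.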






[jira] [Commented] (HADOOP-13079) Add dfs -ls -q to print ? instead of non-printable characters

2016-06-06 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317650#comment-15317650
 ] 

John Zhuge commented on HADOOP-13079:
-

Thanks [~andrew.wang] for the encouragement!

I can see your point about the question mark: it is only the character used to hide 
the non-printable characters. How about renaming it to "hideNonPrintable", which 
is consistent with the help message and the JIRA title?
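To make the rename concrete, here is a minimal sketch of the substitution the flag controls, assuming "printable" means the ASCII range 0x20–0x7E — a simplification of the locale-aware isprint(3) rule in the issue description. The helper name and logic are illustrative, not Hadoop's actual implementation.

```java
public class HideNonPrintable {
    // Replace every character outside the printable ASCII range with '?',
    // roughly what "ls -q" does (simplified: real isprint(3) is locale-aware).
    static String hide(String name) {
        StringBuilder sb = new StringBuilder(name.length());
        for (int i = 0; i < name.length(); i++) {
            char c = name.charAt(i);
            sb.append(c >= 0x20 && c < 0x7f ? c : '?');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(hide("file\tname\u0001")); // prints "file?name?"
        System.out.println(hide("plain.txt"));        // printable names pass through
    }
}
```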

> Add dfs -ls -q to print ? instead of non-printable characters
> -
>
> Key: HADOOP-13079
> URL: https://issues.apache.org/jira/browse/HADOOP-13079
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
> Attachments: HADOOP-13079.001.patch, HADOOP-13079.002.patch, 
> HADOOP-13079.003.patch
>
>
> Add option {{-q}} to "hdfs dfs -ls" to print non-printable characters as "?". 
> Non-printable characters are defined by 
> [isprint(3)|http://linux.die.net/man/3/isprint] according to the current 
> locale.
> Default to {{-q}} behavior on terminal; otherwise, print raw characters. See 
> the difference in these 2 command lines:
> * {{hadoop fs -ls /dir}}
> * {{hadoop fs -ls /dir | od -c}}
> In C, {{isatty(STDOUT_FILENO)}} is used to find out whether the output is a 
> terminal. Since Java doesn't have {{isatty}}, I will use JNI to call C 
> {{isatty()}} because the closest test {{System.console() == null}} does not 
> work in some cases.






[jira] [Commented] (HADOOP-13227) AsyncCallHandler should use a event driven architecture to handle async calls

2016-06-06 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317646#comment-15317646
 ] 

Tsz Wo Nicholas Sze commented on HADOOP-13227:
--

> Can we use ConcurrentLinkedQueue's iterator for the scanning here? We will 
> not get any ConcurrentModificationException ...

Honestly, I didn't know that it wouldn't throw a ConcurrentModificationException in 
this case.  I learned something here.  Thanks a lot!  Here is a new patch:

c13227_20160607.patch
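The iterator property under discussion can be demonstrated in a standalone sketch (illustrative only, unrelated to the AsyncCallHandler code itself): ConcurrentLinkedQueue's iterators are weakly consistent, so mutating the queue mid-iteration never throws, whereas a fail-fast collection such as LinkedList does.

```java
import java.util.Arrays;
import java.util.ConcurrentModificationException;
import java.util.LinkedList;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

public class WeaklyConsistentIterDemo {
    public static void main(String[] args) {
        // Weakly consistent iterator: no ConcurrentModificationException.
        Queue<Integer> clq = new ConcurrentLinkedQueue<>(Arrays.asList(1, 2, 3));
        for (Integer i : clq) {
            if (i == 1) clq.add(4);        // mutate while iterating: allowed
        }
        System.out.println("ConcurrentLinkedQueue: no CME, size=" + clq.size());

        // Fail-fast iterator: the same pattern throws.
        Queue<Integer> ll = new LinkedList<>(Arrays.asList(1, 2, 3));
        try {
            for (Integer i : ll) {
                if (i == 1) ll.add(4);
            }
        } catch (ConcurrentModificationException e) {
            System.out.println("LinkedList: ConcurrentModificationException");
        }
    }
}
```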

> AsyncCallHandler should use a event driven architecture to handle async calls
> -
>
> Key: HADOOP-13227
> URL: https://issues.apache.org/jira/browse/HADOOP-13227
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io, ipc
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
> Attachments: c13227_20160602.patch, c13227_20160606.patch, 
> c13227_20160607.patch
>
>
> This JIRA is to address [Jing's 
> comments|https://issues.apache.org/jira/browse/HADOOP-13226?focusedCommentId=15308630=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15308630]
>  in HADOOP-13226.






[jira] [Updated] (HADOOP-13227) AsyncCallHandler should use a event driven architecture to handle async calls

2016-06-06 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HADOOP-13227:
-
Attachment: c13227_20160607.patch

> AsyncCallHandler should use a event driven architecture to handle async calls
> -
>
> Key: HADOOP-13227
> URL: https://issues.apache.org/jira/browse/HADOOP-13227
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io, ipc
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
> Attachments: c13227_20160602.patch, c13227_20160606.patch, 
> c13227_20160607.patch
>
>
> This JIRA is to address [Jing's 
> comments|https://issues.apache.org/jira/browse/HADOOP-13226?focusedCommentId=15308630=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15308630]
>  in HADOOP-13226.






[jira] [Commented] (HADOOP-13189) FairCallQueue makes callQueue larger than the configured capacity.

2016-06-06 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317628#comment-15317628
 ] 

Arpit Agarwal commented on HADOOP-13189:


cc'ing some folks who may be interested in FairCallQueue, FYI: [~chrilisf], 
[~mingma], [~xyao].

> FairCallQueue makes callQueue larger than the configured capacity.
> --
>
> Key: HADOOP-13189
> URL: https://issues.apache.org/jira/browse/HADOOP-13189
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.6.0
>Reporter: Konstantin Shvachko
>Assignee: Vinitha Reddy Gankidi
> Attachments: HADOOP-13189.001.patch
>
>
> {{FairCallQueue}} divides {{callQueue}} into multiple (4 by default) 
> sub-queues, with each sub-queue corresponding to a different level of 
> priority. The constructor for {{FairCallQueue}} takes the same parameter 
> {{capacity}} as the default CallQueue implementation, and allocates all its 
> sub-queues of size {{capacity}}. With 4 levels of priority (sub-queues) by 
> default it results in the total callQueue size 4 times larger than it should 
> be based on the configuration.
> {{capacity}} should be divided by the number of sub-queues at some place.
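The capacity arithmetic described above can be sketched with plain BlockingQueues (a simplified illustration of the reported behavior, not the actual FairCallQueue constructor):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.LinkedBlockingQueue;

public class FairCallQueueSketch {
    // Total capacity of numLevels empty sub-queues, each bounded at
    // perQueueCapacity (empty queue => remainingCapacity == bound).
    static int totalCapacity(int numLevels, int perQueueCapacity) {
        List<LinkedBlockingQueue<Object>> subQueues = new ArrayList<>();
        for (int i = 0; i < numLevels; i++) {
            subQueues.add(new LinkedBlockingQueue<>(perQueueCapacity));
        }
        return subQueues.stream()
            .mapToInt(LinkedBlockingQueue::remainingCapacity).sum();
    }

    public static void main(String[] args) {
        int capacity = 100, numLevels = 4;
        // Reported behavior: each sub-queue gets the full configured capacity.
        System.out.println("as reported: " + totalCapacity(numLevels, capacity));             // 400
        // Suggested fix: divide the configured capacity among the sub-queues.
        System.out.println("divided:     " + totalCapacity(numLevels, capacity / numLevels)); // 100
    }
}
```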






[jira] [Commented] (HADOOP-13079) Add dfs -ls -q to print ? instead of non-printable characters

2016-06-06 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317617#comment-15317617
 ] 

Andrew Wang commented on HADOOP-13079:
--

Also thanks for adding all these new tests (and Allen for suggesting), it's 
great to see this level of thoroughness come with a contribution.

> Add dfs -ls -q to print ? instead of non-printable characters
> -
>
> Key: HADOOP-13079
> URL: https://issues.apache.org/jira/browse/HADOOP-13079
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
> Attachments: HADOOP-13079.001.patch, HADOOP-13079.002.patch, 
> HADOOP-13079.003.patch
>
>
> Add option {{-q}} to "hdfs dfs -ls" to print non-printable characters as "?". 
> Non-printable characters are defined by 
> [isprint(3)|http://linux.die.net/man/3/isprint] according to the current 
> locale.
> Default to {{-q}} behavior on terminal; otherwise, print raw characters. See 
> the difference in these 2 command lines:
> * {{hadoop fs -ls /dir}}
> * {{hadoop fs -ls /dir | od -c}}
> In C, {{isatty(STDOUT_FILENO)}} is used to find out whether the output is a 
> terminal. Since Java doesn't have {{isatty}}, I will use JNI to call C 
> {{isatty()}} because the closest test {{System.console() == null}} does not 
> work in some cases.






[jira] [Commented] (HADOOP-13079) Add dfs -ls -q to print ? instead of non-printable characters

2016-06-06 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317615#comment-15317615
 ] 

Andrew Wang commented on HADOOP-13079:
--

Hi John, overall the patch LGTM. One tiny nit: could we rename "questionMark", 
"isQuestionMark", etc. to something else? I feel "isQuestionMark" in particular 
is confusing, since what we're testing is whether to substitute non-printable 
characters, not whether something is a question mark.

FWIW my ls man page calls this "--hide-control-characters", which seems like a 
reasonable variable name.

> Add dfs -ls -q to print ? instead of non-printable characters
> -
>
> Key: HADOOP-13079
> URL: https://issues.apache.org/jira/browse/HADOOP-13079
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
> Attachments: HADOOP-13079.001.patch, HADOOP-13079.002.patch, 
> HADOOP-13079.003.patch
>
>
> Add option {{-q}} to "hdfs dfs -ls" to print non-printable characters as "?". 
> Non-printable characters are defined by 
> [isprint(3)|http://linux.die.net/man/3/isprint] according to the current 
> locale.
> Default to {{-q}} behavior on terminal; otherwise, print raw characters. See 
> the difference in these 2 command lines:
> * {{hadoop fs -ls /dir}}
> * {{hadoop fs -ls /dir | od -c}}
> In C, {{isatty(STDOUT_FILENO)}} is used to find out whether the output is a 
> terminal. Since Java doesn't have {{isatty}}, I will use JNI to call C 
> {{isatty()}} because the closest test {{System.console() == null}} does not 
> work in some cases.






[jira] [Commented] (HADOOP-13189) FairCallQueue makes callQueue larger than the configured capacity.

2016-06-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317611#comment-15317611
 ] 

Hadoop QA commented on HADOOP-13189:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red}  0m  
5s{color} | {color:red} Docker failed to build yetus/hadoop:2c91fd8. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12808426/HADOOP-13189.001.patch
 |
| JIRA Issue | HADOOP-13189 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9672/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> FairCallQueue makes callQueue larger than the configured capacity.
> --
>
> Key: HADOOP-13189
> URL: https://issues.apache.org/jira/browse/HADOOP-13189
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.6.0
>Reporter: Konstantin Shvachko
>Assignee: Vinitha Reddy Gankidi
> Attachments: HADOOP-13189.001.patch
>
>
> {{FairCallQueue}} divides {{callQueue}} into multiple (4 by default) 
> sub-queues, with each sub-queue corresponding to a different level of 
> priority. The constructor for {{FairCallQueue}} takes the same parameter 
> {{capacity}} as the default CallQueue implementation, and allocates each of 
> its sub-queues with that full {{capacity}}. With the default of 4 priority 
> levels (sub-queues), the total callQueue ends up 4 times larger than the 
> configured capacity.
> {{capacity}} should be divided by the number of sub-queues somewhere.
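A minimal sketch of the proposed division, with hypothetical names (not the actual FairCallQueue code); the remainder is spread over the first sub-queues so no configured slots are lost:

```java
// Illustrative sketch: split a configured capacity across N sub-queues
// so the aggregate equals the configuration. Names are hypothetical.
public class CapacitySplit {

    // Each sub-queue gets capacity/numQueues slots; the remainder is
    // handed to the first queues one slot at a time.
    static int[] divideCapacity(int capacity, int numQueues) {
        int[] caps = new int[numQueues];
        int base = capacity / numQueues;
        int extra = capacity % numQueues;
        for (int i = 0; i < numQueues; i++) {
            caps[i] = base + (i < extra ? 1 : 0);
        }
        return caps;
    }

    public static void main(String[] args) {
        int total = 0;
        for (int c : divideCapacity(1000, 4)) {
            total += c;
        }
        System.out.println(total); // prints 1000
    }
}
```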






[jira] [Updated] (HADOOP-13189) FairCallQueue makes callQueue larger than the configured capacity.

2016-06-06 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HADOOP-13189:
-
Status: Patch Available  (was: Open)

Submitting to Jenkins.
Patch looks good. Two minor improvements:
# Could you add logging of the queue size in {{CallQueueManager}}, so that we 
can see the size in the logs for any call queue implementation?
# In {{TestFairCallQueue}} we can remove all {{junit.Assert}} imports, as they 
are unused.

> FairCallQueue makes callQueue larger than the configured capacity.
> --
>
> Key: HADOOP-13189
> URL: https://issues.apache.org/jira/browse/HADOOP-13189
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.6.0
>Reporter: Konstantin Shvachko
>Assignee: Vinitha Reddy Gankidi
> Attachments: HADOOP-13189.001.patch
>
>
> {{FairCallQueue}} divides {{callQueue}} into multiple (4 by default) 
> sub-queues, with each sub-queue corresponding to a different level of 
> priority. The constructor for {{FairCallQueue}} takes the same parameter 
> {{capacity}} as the default CallQueue implementation, and allocates each of 
> its sub-queues with that full {{capacity}}. With the default of 4 priority 
> levels (sub-queues), the total callQueue ends up 4 times larger than the 
> configured capacity.
> {{capacity}} should be divided by the number of sub-queues somewhere.






[jira] [Reopened] (HADOOP-13175) Remove hadoop-ant from hadoop-tools

2016-06-06 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang reopened HADOOP-13175:
--
  Assignee: Chris Douglas

Reopening based on Jason's latest comment.

FWIW patch LGTM +1. Chris, anything else we need to do before checking this in 
for 3.0?

> Remove hadoop-ant from hadoop-tools
> ---
>
> Key: HADOOP-13175
> URL: https://issues.apache.org/jira/browse/HADOOP-13175
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Chris Douglas
>Assignee: Chris Douglas
> Attachments: HADOOP-13175.001.patch
>
>
> The hadoop-ant code is an ancient kludge that is unlikely to still have any 
> users. We can delete it from trunk as a "scream test" for 3.x.






[jira] [Commented] (HADOOP-13031) Refactor rack-aware counters from FileSystemStorageStatistics to HDFS specific StorageStatistics

2016-06-06 Thread Ming Ma (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317565#comment-15317565
 ] 

Ming Ma commented on HADOOP-13031:
--

Thanks [~liuml07]. Do you have a MAPREDUCE jira to consume any 
StorageStatistics and expose them to MR counters? MAPREDUCE-6660 was created 
specifically for network distance, but maybe it is better to solve the general 
case.

> Refactor rack-aware counters from FileSystemStorageStatistics to HDFS 
> specific StorageStatistics
> 
>
> Key: HADOOP-13031
> URL: https://issues.apache.org/jira/browse/HADOOP-13031
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.9.0
>Reporter: Mingliang Liu
>
> [HADOOP-13065] added a new interface for retrieving FS and FC Statistics. 
> This jira is to refactor the code that maintains rack-aware read metrics to 
> use the newly added StorageStatistics. Specifically,
> # Rack-aware read-bytes metrics are mostly specific to HDFS; the local file 
> system, for example, does not need them. We are considering moving them from 
> the base FileSystemStorageStatistics to a dedicated HDFS-specific 
> StorageStatistics sub-class.
> # We would have to develop an optimized thread-local mechanism for this, to 
> avoid causing a regression in HDFS stream performance.
> Optionally, it would be better to simply move this to HDFS's existing 
> per-stream {{ReadStatistics}} for now. As [HDFS-9579] states, ReadStatistics 
> metrics are only accessible via {{DFSClient}} or {{DFSInputStream}}, not 
> something that application frameworks such as MR and Tez can get to.
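One low-contention way to maintain such read counters, in the spirit of the thread-local point above, is the JDK's {{LongAdder}}, which stripes updates across per-thread cells. The class below is an illustrative sketch, not the actual Hadoop implementation:

```java
import java.util.concurrent.atomic.LongAdder;

// Hypothetical per-stream rack-aware read counters. LongAdder keeps
// internal per-thread cells, so hot read paths avoid contending on a
// single atomic counter; sum() aggregates on the infrequent read side.
public class RackReadStats {
    private final LongAdder localRackBytes = new LongAdder();
    private final LongAdder remoteRackBytes = new LongAdder();

    // Called from the read path with the number of bytes read and
    // whether the source DataNode shared the reader's rack.
    void addReadBytes(long n, boolean sameRack) {
        (sameRack ? localRackBytes : remoteRackBytes).add(n);
    }

    long getLocalRackBytes()  { return localRackBytes.sum(); }
    long getRemoteRackBytes() { return remoteRackBytes.sum(); }

    public static void main(String[] args) {
        RackReadStats stats = new RackReadStats();
        stats.addReadBytes(100, true);
        stats.addReadBytes(40, false);
        System.out.println(stats.getLocalRackBytes() + " "
            + stats.getRemoteRackBytes()); // prints 100 40
    }
}
```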






[jira] [Commented] (HADOOP-13243) TestRollingFileSystemSink.testSetInitialFlushTime() fails intermittently

2016-06-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317515#comment-15317515
 ] 

Hadoop QA commented on HADOOP-13243:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 17s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 42m 21s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.metrics2.impl.TestGangliaMetrics |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12808498/HADOOP-13243.001.patch
 |
| JIRA Issue | HADOOP-13243 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux cc584a68b859 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 6de9213 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9671/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9671/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9671/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> TestRollingFileSystemSink.testSetInitialFlushTime() fails intermittently
> 
>
> Key: HADOOP-13243
> URL: https://issues.apache.org/jira/browse/HADOOP-13243
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.9.0
>   

[jira] [Commented] (HADOOP-13243) TestRollingFileSystemSink.testSetInitialFlushTime() fails intermittently

2016-06-06 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317509#comment-15317509
 ] 

Daniel Templeton commented on HADOOP-13243:
---

The test has not failed in 180 test runs.  I'll let it go overnight to be extra 
sure.

> TestRollingFileSystemSink.testSetInitialFlushTime() fails intermittently
> 
>
> Key: HADOOP-13243
> URL: https://issues.apache.org/jira/browse/HADOOP-13243
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.9.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Minor
> Attachments: HADOOP-13243.001.patch
>
>
> Because of poor checking of boundary conditions, the test fails 1% of the 
> time:
> {noformat}
> The initial flush time was calculated incorrectly: 0
> Stacktrace
> java.lang.AssertionError: The initial flush time was calculated incorrectly: 0
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hadoop.metrics2.sink.TestRollingFileSystemSink.testSetInitialFlushTime(TestRollingFileSystemSink.java:120)
> {noformat}






[jira] [Updated] (HADOOP-13243) TestRollingFileSystemSink.testSetInitialFlushTime() fails intermittently

2016-06-06 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HADOOP-13243:
--
Assignee: Daniel Templeton

> TestRollingFileSystemSink.testSetInitialFlushTime() fails intermittently
> 
>
> Key: HADOOP-13243
> URL: https://issues.apache.org/jira/browse/HADOOP-13243
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.9.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Minor
> Attachments: HADOOP-13243.001.patch
>
>
> Because of poor checking of boundary conditions, the test fails 1% of the 
> time:
> {noformat}
> The initial flush time was calculated incorrectly: 0
> Stacktrace
> java.lang.AssertionError: The initial flush time was calculated incorrectly: 0
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hadoop.metrics2.sink.TestRollingFileSystemSink.testSetInitialFlushTime(TestRollingFileSystemSink.java:120)
> {noformat}






[jira] [Updated] (HADOOP-12893) Verify LICENSE.txt and NOTICE.txt

2016-06-06 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated HADOOP-12893:

Target Version/s: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1  (was: 2.8.0, 2.7.3, 
2.6.5)

> Verify LICENSE.txt and NOTICE.txt
> -
>
> Key: HADOOP-12893
> URL: https://issues.apache.org/jira/browse/HADOOP-12893
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Xiao Chen
>Priority: Blocker
> Attachments: HADOOP-12893.002.patch, HADOOP-12893.003.patch, 
> HADOOP-12893.004.patch, HADOOP-12893.005.patch, HADOOP-12893.006.patch, 
> HADOOP-12893.007.patch, HADOOP-12893.008.patch, HADOOP-12893.009.patch, 
> HADOOP-12893.01.patch
>
>
> We have many bundled dependencies in both the source and the binary artifacts 
> that are not in LICENSE.txt and NOTICE.txt.






[jira] [Updated] (HADOOP-13243) TestRollingFileSystemSink.testSetInitialFlushTime() fails intermittently

2016-06-06 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated HADOOP-13243:
--
Attachment: HADOOP-13243.001.patch

> TestRollingFileSystemSink.testSetInitialFlushTime() fails intermittently
> 
>
> Key: HADOOP-13243
> URL: https://issues.apache.org/jira/browse/HADOOP-13243
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.9.0
>Reporter: Daniel Templeton
>Priority: Minor
> Attachments: HADOOP-13243.001.patch
>
>
> Because of poor checking of boundary conditions, the test fails 1% of the 
> time:
> {noformat}
> The initial flush time was calculated incorrectly: 0
> Stacktrace
> java.lang.AssertionError: The initial flush time was calculated incorrectly: 0
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hadoop.metrics2.sink.TestRollingFileSystemSink.testSetInitialFlushTime(TestRollingFileSystemSink.java:120)
> {noformat}






[jira] [Updated] (HADOOP-13243) TestRollingFileSystemSink.testSetInitialFlushTime() fails intermittently

2016-06-06 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated HADOOP-13243:
--
Status: Patch Available  (was: Open)

> TestRollingFileSystemSink.testSetInitialFlushTime() fails intermittently
> 
>
> Key: HADOOP-13243
> URL: https://issues.apache.org/jira/browse/HADOOP-13243
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.9.0
>Reporter: Daniel Templeton
>Priority: Minor
> Attachments: HADOOP-13243.001.patch
>
>
> Because of poor checking of boundary conditions, the test fails 1% of the 
> time:
> {noformat}
> The initial flush time was calculated incorrectly: 0
> Stacktrace
> java.lang.AssertionError: The initial flush time was calculated incorrectly: 0
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hadoop.metrics2.sink.TestRollingFileSystemSink.testSetInitialFlushTime(TestRollingFileSystemSink.java:120)
> {noformat}






[jira] [Commented] (HADOOP-13242) Authenticate to Azure Data Lake using client ID and keys

2016-06-06 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317442#comment-15317442
 ] 

Chris Nauroth commented on HADOOP-13242:


[~ASikaria], thank you for the patch.  I tried a full test run against an Azure 
Data Lake account, using the contract tests from HADOOP-12875.  Everything 
worked great.

I just have one further request.  Would you please update the documentation in 
hadoop-tools/hadoop-azure-datalake/src/site/markdown/index.md to provide 
step-by-step instructions for testing with a client ID and keys?

> Authenticate to Azure Data Lake using client ID and keys
> 
>
> Key: HADOOP-13242
> URL: https://issues.apache.org/jira/browse/HADOOP-13242
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
> Environment: All
>Reporter: Atul Sikaria
>Assignee: Atul Sikaria
> Attachments: HDFS-10462-001.patch, HDFS-10462-002.patch
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> Current OAuth2 support (used by HADOOP-12666) allows getting a token using 
> client credentials. However, the client-credentials support does not pass the 
> "resource" parameter required by Azure AD. This work adds support for the 
> "resource" parameter when acquiring the OAuth2 token from Azure AD, so that 
> client credentials can be used to authenticate to Azure Data Lake.
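As a rough illustration of what passing the "resource" parameter involves, the sketch below builds a client-credentials token request body; the class name, helper, and field values are placeholders, not the patch's actual code:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

// Hypothetical sketch of an OAuth2 client-credentials request body
// carrying the Azure AD "resource" parameter, form-urlencoded as it
// would be POSTed to the token endpoint.
public class ClientCredsRequest {

    static String enc(String s) {
        return URLEncoder.encode(s, StandardCharsets.UTF_8);
    }

    static String tokenRequestBody(String clientId, String clientSecret,
                                   String resource) {
        return "grant_type=client_credentials"
            + "&client_id=" + enc(clientId)
            + "&client_secret=" + enc(clientSecret)
            + "&resource=" + enc(resource);   // the parameter this work adds
    }

    public static void main(String[] args) {
        System.out.println(tokenRequestBody(
            "my-app-id", "s3cret", "https://datalake.azure.net/"));
    }
}
```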






[jira] [Updated] (HADOOP-12875) [Azure Data Lake] Support for contract test and unit test cases

2016-06-06 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-12875:
---
Hadoop Flags: Reviewed

+1 for patch 004, pending a pre-commit run after commit of HADOOP-12666.  I did 
a successful full test run integrated against the Azure Data Lake back-end.  I 
plan to commit this to trunk and branch-2 later this week.

> [Azure Data Lake] Support for contract test and unit test cases
> ---
>
> Key: HADOOP-12875
> URL: https://issues.apache.org/jira/browse/HADOOP-12875
> Project: Hadoop Common
>  Issue Type: Test
>  Components: fs, fs/azure, tools
>Reporter: Vishwajeet Dusane
>Assignee: Vishwajeet Dusane
> Attachments: Hadoop-12875-001.patch, Hadoop-12875-002.patch, 
> Hadoop-12875-003.patch, Hadoop-12875-004.patch
>
>
> This JIRA describes contract test and unit test cases support for azure data 
> lake file system.






[jira] [Updated] (HADOOP-12666) Support Microsoft Azure Data Lake - as a file system in Hadoop

2016-06-06 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-12666:
---
Hadoop Flags: Reviewed

At this point, I am +1 for patch 016.  I completed a successful full test run 
integrated against the Azure Data Lake service using the contract tests from 
HADOOP-12875.  The remaining Checkstyle warning is not worthwhile to address.

[~vishwajeet.dusane], thank you for responding to the feedback here and 
promptly revising the patch when requested.  Thank you also to all of the code 
reviewers who participated.

I plan to commit this to trunk and branch-2 later this week.

> Support Microsoft Azure Data Lake - as a file system in Hadoop
> --
>
> Key: HADOOP-12666
> URL: https://issues.apache.org/jira/browse/HADOOP-12666
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, fs/azure, tools
>Reporter: Vishwajeet Dusane
>Assignee: Vishwajeet Dusane
> Attachments: Create_Read_Hadoop_Adl_Store_Semantics.pdf, 
> HADOOP-12666-002.patch, HADOOP-12666-003.patch, HADOOP-12666-004.patch, 
> HADOOP-12666-005.patch, HADOOP-12666-006.patch, HADOOP-12666-007.patch, 
> HADOOP-12666-008.patch, HADOOP-12666-009.patch, HADOOP-12666-010.patch, 
> HADOOP-12666-011.patch, HADOOP-12666-012.patch, HADOOP-12666-013.patch, 
> HADOOP-12666-014.patch, HADOOP-12666-015.patch, HADOOP-12666-016.patch, 
> HADOOP-12666-1.patch
>
>   Original Estimate: 336h
>  Time Spent: 336h
>  Remaining Estimate: 0h
>
> h2. Description
> This JIRA describes a new file system implementation for accessing Microsoft 
> Azure Data Lake Store (ADL) from within Hadoop. This would enable existing 
> Hadoop applications such as MR, Hive, HBase, etc., to use the ADL store as 
> input or output.
>  
> ADL is an ultra-high-capacity store, optimized for massive throughput, with 
> rich management and security features. More details are available at 
> https://azure.microsoft.com/en-us/services/data-lake-store/






[jira] [Updated] (HADOOP-12875) [Azure Data Lake] Support for contract test and unit test cases

2016-06-06 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-12875:
---
Status: Open  (was: Patch Available)

I am canceling this patch until its prerequisites are committed.

> [Azure Data Lake] Support for contract test and unit test cases
> ---
>
> Key: HADOOP-12875
> URL: https://issues.apache.org/jira/browse/HADOOP-12875
> Project: Hadoop Common
>  Issue Type: Test
>  Components: fs, fs/azure, tools
>Reporter: Vishwajeet Dusane
>Assignee: Vishwajeet Dusane
> Attachments: Hadoop-12875-001.patch, Hadoop-12875-002.patch, 
> Hadoop-12875-003.patch, Hadoop-12875-004.patch
>
>
> This JIRA describes contract test and unit test cases support for azure data 
> lake file system.






[jira] [Updated] (HADOOP-13242) Authenticate to Azure Data Lake using client ID and keys

2016-06-06 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13242:
---
Status: Open  (was: Patch Available)

I am canceling this patch until its prerequisites are committed.

> Authenticate to Azure Data Lake using client ID and keys
> 
>
> Key: HADOOP-13242
> URL: https://issues.apache.org/jira/browse/HADOOP-13242
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
> Environment: All
>Reporter: Atul Sikaria
>Assignee: Atul Sikaria
> Attachments: HDFS-10462-001.patch, HDFS-10462-002.patch
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> Current OAuth2 support (used by HADOOP-12666) allows getting a token using 
> client credentials. However, the client-credentials support does not pass the 
> "resource" parameter required by Azure AD. This work adds support for the 
> "resource" parameter when acquiring the OAuth2 token from Azure AD, so that 
> client credentials can be used to authenticate to Azure Data Lake.






[jira] [Commented] (HADOOP-13227) AsyncCallHandler should use a event driven architecture to handle async calls

2016-06-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317419#comment-15317419
 ] 

Hadoop QA commented on HADOOP-13227:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
29s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
36s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
7s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
44s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
24s{color} | {color:red} root: The patch generated 5 new + 213 unchanged - 1 
fixed = 218 total (was 214) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
31s{color} | {color:red} hadoop-common-project/hadoop-common generated 1 new + 
0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
0s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 16m 58s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 70m 36s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}131m 54s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-common-project/hadoop-common |
|  |  org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce() calls 
Thread.sleep() with a lock held  At RetryInvocationHandler.java:lock held  At 
RetryInvocationHandler.java:[line 107] |
| Failed junit tests | hadoop.hdfs.server.namenode.TestEditLog |
|   | hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics |
| Timed out junit tests | org.apache.hadoop.http.TestHttpServerLifecycle |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12808462/c13227_20160606.patch 
|
| JIRA Issue | HADOOP-13227 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 258f00ed56a8 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 35f255b |
| Default Java | 

[jira] [Commented] (HADOOP-12718) Incorrect error message by fs -put local dir without permission

2016-06-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317409#comment-15317409
 ] 

Hadoop QA commented on HADOOP-12718:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red}  0m  
5s{color} | {color:red} Docker failed to build yetus/hadoop:2c91fd8. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12808090/HADOOP-12718.004.patch
 |
| JIRA Issue | HADOOP-12718 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9670/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Incorrect error message by fs -put local dir without permission
> ---
>
> Key: HADOOP-12718
> URL: https://issues.apache.org/jira/browse/HADOOP-12718
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Blocker
>  Labels: supportability
> Attachments: HADOOP-12718.001.patch, HADOOP-12718.002.patch, 
> HADOOP-12718.003.patch, HADOOP-12718.004.patch, 
> TestFsShellCopyPermission-output.001.txt, 
> TestFsShellCopyPermission-output.002.txt, TestFsShellCopyPermission.001.patch
>
>
> When the user doesn't have access permission to the local directory, the 
> "hadoop fs -put" command prints a confusing error message "No such file or 
> directory".
> {noformat}
> $ whoami
> systest
> $ cd /home/systest
> $ ls -ld .
> drwx--. 4 systest systest 4096 Jan 13 14:21 .
> $ mkdir d1
> $ sudo -u hdfs hadoop fs -put d1 /tmp
> put: `d1': No such file or directory
> {noformat}
> It will be more informative if the message is:
> {noformat}
> put: d1 (Permission denied)
> {noformat}
> If the source is a local file, the error message is ok:
> {noformat}
> put: f1 (Permission denied)
> {noformat}
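
One plausible source of this kind of misleading message (an illustrative sketch, not necessarily the exact FsShell code path): {{java.io.File.listFiles()}} returns null both for a missing directory and for an unreadable one, so a caller that maps null to "No such file or directory" misreports permission failures. Checking {{exists()}} separates the two cases:

```java
import java.io.File;

public class ListingSketch {
    // Distinguish the two failure modes that File.listFiles() conflates:
    // it returns null for a missing directory AND for an unreadable one.
    static String describeFailure(File dir) {
        if (dir.listFiles() != null) {
            return "ok";
        }
        // exists() separates "not there" from "there but unreadable"
        return dir.exists() ? "Permission denied" : "No such file or directory";
    }

    public static void main(String[] args) {
        // prints "No such file or directory"
        System.out.println(describeFailure(new File("definitely-missing-dir-xyz")));
    }
}
```

The class and method names here are hypothetical; the point is only that the error text should be chosen after the existence check, not from the null return alone.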



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13240) TestAclCommands.testSetfaclValidations fail

2016-06-06 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317406#comment-15317406
 ] 

Chris Nauroth commented on HADOOP-13240:


I am not able to repro this test failure.  I tried the trunk and branch-2.7 
branches.

> TestAclCommands.testSetfaclValidations fail
> ---
>
> Key: HADOOP-13240
> URL: https://issues.apache.org/jira/browse/HADOOP-13240
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.1, 2.7.2
> Environment: hadoop 2.4.1,as6.5
>Reporter: linbao111
>Assignee: John Zhuge
>Priority: Minor
>
> mvn test -Djava.net.preferIPv4Stack=true -Dlog4j.rootLogger=DEBUG,console 
> -Dtest=TestAclCommands#testSetfaclValidations failed with following message:
> ---
> Test set: org.apache.hadoop.fs.shell.TestAclCommands
> ---
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.599 sec <<< 
> FAILURE! - in org.apache.hadoop.fs.shell.TestAclCommands
> testSetfaclValidations(org.apache.hadoop.fs.shell.TestAclCommands)  Time 
> elapsed: 0.534 sec  <<< FAILURE!
> java.lang.AssertionError: setfacl should fail ACL spec missing
> at org.junit.Assert.fail(Assert.java:93)
> at org.junit.Assert.assertTrue(Assert.java:43)
> at org.junit.Assert.assertFalse(Assert.java:68)
> at 
> org.apache.hadoop.fs.shell.TestAclCommands.testSetfaclValidations(TestAclCommands.java:81)
> I notice from HADOOP-10277 that the code in 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntry.java
>  changed. Should 
> hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java
>  be changed to:
> diff --git 
> a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java
>  b/hadoop-common-project/hadoop-common/src/test/java/org/
> index b14cd37..463bfcd
> --- 
> a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java
> +++ 
> b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java
> @@ -80,7 +80,7 @@ public void testSetfaclValidations() throws Exception {
>  "/path" }));
>  assertFalse("setfacl should fail ACL spec missing",
>  0 == runCommand(new String[] { "-setfacl", "-m",
> -"", "/path" }));
> +":", "/path" }));
>}
>  
>@Test






[jira] [Commented] (HADOOP-13243) TestRollingFileSystemSink.testSetInitialFlushTime() fails intermittently

2016-06-06 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317396#comment-15317396
 ] 

Daniel Templeton commented on HADOOP-13243:
---

I'm working on it, even though I can't assign it to myself anymore.

> TestRollingFileSystemSink.testSetInitialFlushTime() fails intermittently
> 
>
> Key: HADOOP-13243
> URL: https://issues.apache.org/jira/browse/HADOOP-13243
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.9.0
>Reporter: Daniel Templeton
>Priority: Minor
>
> Because of poor checking of boundary conditions, the test fails 1% of the 
> time:
> {noformat}
> The initial flush time was calculated incorrectly: 0
> Stacktrace
> java.lang.AssertionError: The initial flush time was calculated incorrectly: 0
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hadoop.metrics2.sink.TestRollingFileSystemSink.testSetInitialFlushTime(TestRollingFileSystemSink.java:120)
> {noformat}






[jira] [Commented] (HADOOP-12807) S3AFileSystem should read AWS credentials from environment variables

2016-06-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317397#comment-15317397
 ] 

Hudson commented on HADOOP-12807:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #9915 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9915/])
HADOOP-12807 S3AFileSystem should read AWS credentials from environment 
(stevel: rev a3f78d8fa83f07f9183f3546203a191fcf50008c)
* 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
* hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md


> S3AFileSystem should read AWS credentials from environment variables
> 
>
> Key: HADOOP-12807
> URL: https://issues.apache.org/jira/browse/HADOOP-12807
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.7.2
>Reporter: Tobin Baker
>Assignee: Tobin Baker
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-12807-1.patch, HADOOP-12807-branch-2-004.patch
>
>
> Unlike the {{DefaultAWSCredentialsProviderChain}} in the AWS SDK, the 
> {{AWSCredentialsProviderChain}} constructed by {{S3AFileSystem}} does not 
> include an {{EnvironmentVariableCredentialsProvider}} instance. This prevents 
> users from supplying AWS credentials in the environment variables 
> {{AWS_ACCESS_KEY_ID}} and {{AWS_SECRET_ACCESS_KEY}}, which is the only 
> alternative in some scenarios.
> In my scenario, I need to access S3 from within a test running in a CI 
> environment that does not support IAM roles but does allow me to supply 
> encrypted environment variables. Thus, the only secure approach I can use is 
> to supply my AWS credentials in environment variables (plaintext 
> configuration files are out of the question).
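
The fix amounts to including an {{EnvironmentVariableCredentialsProvider}} in the chain that {{S3AFileSystem}} builds. The chain pattern itself is straightforward: try each provider in order and use the first that yields credentials. A self-contained sketch of that pattern, with a hypothetical {{Credentials}} type rather than the real SDK classes, and reading from a supplied map so it can be exercised without touching the process environment:

```java
import java.util.Map;
import java.util.Optional;
import java.util.function.Function;

public class ChainSketch {
    /** Minimal stand-in for AWS credentials (hypothetical type, not the SDK's). */
    static final class Credentials {
        final String accessKey, secretKey;
        Credentials(String accessKey, String secretKey) {
            this.accessKey = accessKey;
            this.secretKey = secretKey;
        }
    }

    /** Mirrors what an environment-variable provider does, but reads from a
     *  map passed in so the sketch is testable. */
    static Optional<Credentials> fromEnv(Map<String, String> env) {
        String id = env.get("AWS_ACCESS_KEY_ID");
        String secret = env.get("AWS_SECRET_ACCESS_KEY");
        return (id != null && secret != null)
                ? Optional.of(new Credentials(id, secret))
                : Optional.empty();
    }

    /** A chain asks each provider in order and returns the first hit. */
    @SafeVarargs
    static Optional<Credentials> resolve(Map<String, String> env,
            Function<Map<String, String>, Optional<Credentials>>... providers) {
        for (Function<Map<String, String>, Optional<Credentials>> p : providers) {
            Optional<Credentials> c = p.apply(env);
            if (c.isPresent()) {
                return c;
            }
        }
        return Optional.empty();
    }
}
```

With the real AWS SDK, the chain members come from the SDK itself; the patch's effect is simply that an environment-variable provider is one of the entries tried.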






[jira] [Created] (HADOOP-13243) TestRollingFileSystemSink.testSetInitialFlushTime() fails intermittently

2016-06-06 Thread Daniel Templeton (JIRA)
Daniel Templeton created HADOOP-13243:
-

 Summary: TestRollingFileSystemSink.testSetInitialFlushTime() fails 
intermittently
 Key: HADOOP-13243
 URL: https://issues.apache.org/jira/browse/HADOOP-13243
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 2.9.0
Reporter: Daniel Templeton
Priority: Minor


Because of poor checking of boundary conditions, the test fails 1% of the time:

{noformat}
The initial flush time was calculated incorrectly: 0

Stacktrace

java.lang.AssertionError: The initial flush time was calculated incorrectly: 0
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 
org.apache.hadoop.metrics2.sink.TestRollingFileSystemSink.testSetInitialFlushTime(TestRollingFileSystemSink.java:120)
{noformat}
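
The boundary case is easy to reconstruct in isolation (this is an illustrative sketch, not the actual RollingFileSystemSink code): if the initial flush time is chosen as a random offset into the time remaining in the current roll-over interval, then near the interval boundary the remaining window shrinks to 1 ms, the random offset can come out as 0, and a strict "greater than zero" assertion fails intermittently. Clamping the delay removes the boundary case:

```java
import java.util.Random;

public class FlushTimeSketch {
    static final long INTERVAL_MS = 3_600_000L; // hourly roll-over interval

    // Hypothetical reconstruction of the flaky logic: pick a random delay
    // within the time left in the current interval. Near the boundary,
    // "remaining" shrinks toward 1 ms and nextInt(remaining) can return 0,
    // which a strict "> 0" assertion then rejects. Clamping to at least
    // 1 ms removes the boundary case.
    static long initialFlushDelay(long nowMs, Random rng) {
        long remaining = INTERVAL_MS - (nowMs % INTERVAL_MS); // 1..INTERVAL_MS
        return Math.max(1L, rng.nextInt((int) remaining));
    }
}
```

Equivalently, the test could keep the unclamped computation and assert {{>= 0}} instead of {{> 0}}.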






[jira] [Commented] (HADOOP-12807) S3AFileSystem should read AWS credentials from environment variables

2016-06-06 Thread Tobin Baker (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317370#comment-15317370
 ] 

Tobin Baker commented on HADOOP-12807:
--

Thanks so much for getting this in, and sorry for slacking off on the tests!

I don't think our CI configuration exposes us to much risk since the IAM user 
whose env vars are encrypted in our {{.travis.yml}} file has no permissions 
except read/write access to a dedicated test data S3 bucket. Also, our Travis 
account is restricted to users with write privileges on our Github repo, which 
is confined to our team.


> S3AFileSystem should read AWS credentials from environment variables
> 
>
> Key: HADOOP-12807
> URL: https://issues.apache.org/jira/browse/HADOOP-12807
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.7.2
>Reporter: Tobin Baker
>Assignee: Tobin Baker
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-12807-1.patch, HADOOP-12807-branch-2-004.patch
>
>
> Unlike the {{DefaultAWSCredentialsProviderChain}} in the AWS SDK, the 
> {{AWSCredentialsProviderChain}} constructed by {{S3AFileSystem}} does not 
> include an {{EnvironmentVariableCredentialsProvider}} instance. This prevents 
> users from supplying AWS credentials in the environment variables 
> {{AWS_ACCESS_KEY_ID}} and {{AWS_SECRET_ACCESS_KEY}}, which is the only 
> alternative in some scenarios.
> In my scenario, I need to access S3 from within a test running in a CI 
> environment that does not support IAM roles but does allow me to supply 
> encrypted environment variables. Thus, the only secure approach I can use is 
> to supply my AWS credentials in environment variables (plaintext 
> configuration files are out of the question).






[jira] [Updated] (HADOOP-12807) S3AFileSystem should read AWS credentials from environment variables

2016-06-06 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12807:

   Resolution: Fixed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Thanks: patch applied to 2.8.

Tobin, I hope you aren't letting anyone untrusted submit patches to that CI 
system? If they can print your env vars, they get your secrets.

Given that the supported env vars include transient session tokens, you may be 
able to get away with session tokens there; it may mean that the STS SDK JAR 
needs to go onto the classpath. If you do try this, let us know how you get on.

> S3AFileSystem should read AWS credentials from environment variables
> 
>
> Key: HADOOP-12807
> URL: https://issues.apache.org/jira/browse/HADOOP-12807
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.7.2
>Reporter: Tobin Baker
>Assignee: Tobin Baker
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-12807-1.patch, HADOOP-12807-branch-2-004.patch
>
>
> Unlike the {{DefaultAWSCredentialsProviderChain}} in the AWS SDK, the 
> {{AWSCredentialsProviderChain}} constructed by {{S3AFileSystem}} does not 
> include an {{EnvironmentVariableCredentialsProvider}} instance. This prevents 
> users from supplying AWS credentials in the environment variables 
> {{AWS_ACCESS_KEY_ID}} and {{AWS_SECRET_ACCESS_KEY}}, which is the only 
> alternative in some scenarios.
> In my scenario, I need to access S3 from within a test running in a CI 
> environment that does not support IAM roles but does allow me to supply 
> encrypted environment variables. Thus, the only secure approach I can use is 
> to supply my AWS credentials in environment variables (plaintext 
> configuration files are out of the question).






[jira] [Updated] (HADOOP-12807) S3AFileSystem should read AWS credentials from environment variables

2016-06-06 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12807:

Assignee: Tobin Baker

> S3AFileSystem should read AWS credentials from environment variables
> 
>
> Key: HADOOP-12807
> URL: https://issues.apache.org/jira/browse/HADOOP-12807
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.7.2
>Reporter: Tobin Baker
>Assignee: Tobin Baker
>Priority: Minor
> Attachments: HADOOP-12807-1.patch, HADOOP-12807-branch-2-004.patch
>
>
> Unlike the {{DefaultAWSCredentialsProviderChain}} in the AWS SDK, the 
> {{AWSCredentialsProviderChain}} constructed by {{S3AFileSystem}} does not 
> include an {{EnvironmentVariableCredentialsProvider}} instance. This prevents 
> users from supplying AWS credentials in the environment variables 
> {{AWS_ACCESS_KEY_ID}} and {{AWS_SECRET_ACCESS_KEY}}, which is the only 
> alternative in some scenarios.
> In my scenario, I need to access S3 from within a test running in a CI 
> environment that does not support IAM roles but does allow me to supply 
> encrypted environment variables. Thus, the only secure approach I can use is 
> to supply my AWS credentials in environment variables (plaintext 
> configuration files are out of the question).






[jira] [Commented] (HADOOP-12807) S3AFileSystem should read AWS credentials from environment variables

2016-06-06 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317306#comment-15317306
 ] 

Steve Loughran commented on HADOOP-12807:
-

Yetus is happy apart from the tests. 

There aren't tests because there isn't an easy way to set up an execution 
environment with env vars for one test suite but not for the others. I've 
verified it manually by doing a full Hadoop release and running `hadoop fs` 
commands.

+1

> S3AFileSystem should read AWS credentials from environment variables
> 
>
> Key: HADOOP-12807
> URL: https://issues.apache.org/jira/browse/HADOOP-12807
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.7.2
>Reporter: Tobin Baker
>Priority: Minor
> Attachments: HADOOP-12807-1.patch, HADOOP-12807-branch-2-004.patch
>
>
> Unlike the {{DefaultAWSCredentialsProviderChain}} in the AWS SDK, the 
> {{AWSCredentialsProviderChain}} constructed by {{S3AFileSystem}} does not 
> include an {{EnvironmentVariableCredentialsProvider}} instance. This prevents 
> users from supplying AWS credentials in the environment variables 
> {{AWS_ACCESS_KEY_ID}} and {{AWS_SECRET_ACCESS_KEY}}, which is the only 
> alternative in some scenarios.
> In my scenario, I need to access S3 from within a test running in a CI 
> environment that does not support IAM roles but does allow me to supply 
> encrypted environment variables. Thus, the only secure approach I can use is 
> to supply my AWS credentials in environment variables (plaintext 
> configuration files are out of the question).






[jira] [Commented] (HADOOP-13242) Authenticate to Azure Data Lake using client ID and keys

2016-06-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317260#comment-15317260
 ] 

Hadoop QA commented on HADOOP-13242:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  4s{color} 
| {color:red} HADOOP-13242 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12807802/HDFS-10462-002.patch |
| JIRA Issue | HADOOP-13242 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9669/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Authenticate to Azure Data Lake using client ID and keys
> 
>
> Key: HADOOP-13242
> URL: https://issues.apache.org/jira/browse/HADOOP-13242
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
> Environment: All
>Reporter: Atul Sikaria
>Assignee: Atul Sikaria
> Attachments: HDFS-10462-001.patch, HDFS-10462-002.patch
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> Current OAuth2 support (used by HADOOP-12666) supports getting a token using 
> client creds. However, the client creds support does not pass the "resource" 
> parameter required by Azure AD. This work adds support for the "resource" 
> parameter when acquiring the OAuth2 token from Azure AD, so the client 
> credentials can be used to authenticate to Azure Data Lake. 
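
For context, the client-credentials grant is a form-encoded POST to the token endpoint, and Azure AD additionally requires a {{resource}} field naming the service the token is for. A sketch of building such a request body (class, method, and values are hypothetical, not the actual HADOOP-12666 code):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

public class TokenRequestSketch {
    // Build the x-www-form-urlencoded body for an OAuth2 client-credentials
    // token request, including the Azure AD-specific "resource" parameter.
    static String tokenRequestBody(String clientId, String clientSecret, String resource) {
        Map<String, String> form = new LinkedHashMap<>();
        form.put("grant_type", "client_credentials");
        form.put("client_id", clientId);
        form.put("client_secret", clientSecret);
        // Required by Azure AD; absent from the generic OAuth2 client-creds flow.
        form.put("resource", resource);
        return form.entrySet().stream()
                .map(e -> e.getKey() + "="
                        + URLEncoder.encode(e.getValue(), StandardCharsets.UTF_8))
                .collect(Collectors.joining("&"));
    }
}
```

The body is then POSTed to the tenant's token endpoint; the missing piece in the pre-patch code was the {{resource}} entry.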






[jira] [Moved] (HADOOP-13242) Authenticate to Azure Data Lake using client ID and keys

2016-06-06 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth moved HDFS-10462 to HADOOP-13242:
---

Component/s: (was: hdfs-client)
 fs/azure
Key: HADOOP-13242  (was: HDFS-10462)
Project: Hadoop Common  (was: Hadoop HDFS)

> Authenticate to Azure Data Lake using client ID and keys
> 
>
> Key: HADOOP-13242
> URL: https://issues.apache.org/jira/browse/HADOOP-13242
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
> Environment: All
>Reporter: Atul Sikaria
>Assignee: Atul Sikaria
> Attachments: HDFS-10462-001.patch, HDFS-10462-002.patch
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> Current OAuth2 support (used by HADOOP-12666) supports getting a token using 
> client creds. However, the client creds support does not pass the "resource" 
> parameter required by Azure AD. This work adds support for the "resource" 
> parameter when acquiring the OAuth2 token from Azure AD, so the client 
> credentials can be used to authenticate to Azure Data Lake. 






[jira] [Commented] (HADOOP-13227) AsyncCallHandler should use an event-driven architecture to handle async calls

2016-06-06 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317244#comment-15317244
 ] 

Jing Zhao commented on HADOOP-13227:


Thanks a lot for updating the patch, Nicholas!

bq. We don't want to read the non-head elements from the queue since it is an 
O(n) operation, where n is the size of the queue.

Can we use ConcurrentLinkedQueue's iterator for the scanning here? We will not 
get any {{ConcurrentModificationException}}, since only the monitor thread 
removes elements from the queue; all other threads only add elements to it.
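
For reference, {{ConcurrentLinkedQueue}}'s iterator is weakly consistent: it never throws {{ConcurrentModificationException}} and tolerates concurrent appends, at most missing elements added after the scan started. A minimal self-contained sketch (names hypothetical, not the AsyncCallHandler code):

```java
import java.util.concurrent.ConcurrentLinkedQueue;

public class QueueScanSketch {
    /** Scan with the weakly consistent iterator; safe even if producers
     *  append at the tail while the scan is in progress. */
    static int safeScan(ConcurrentLinkedQueue<Integer> q) {
        int seen = 0;
        for (Integer ignored : q) {
            seen++;
        }
        return seen;
    }

    public static void main(String[] args) throws InterruptedException {
        ConcurrentLinkedQueue<Integer> calls = new ConcurrentLinkedQueue<>();
        // Producer threads only add at the tail...
        Thread producer = new Thread(() -> {
            for (int i = 0; i < 100_000; i++) {
                calls.add(i);
            }
        });
        producer.start();
        // ...while the monitor thread scans concurrently; the iterator may
        // miss late additions, but it never throws.
        int seen = safeScan(calls);
        producer.join();
        System.out.println("scanned " + seen + " of " + calls.size());
    }
}
```

Under the single-consumer/multi-producer discipline described above, removal from the head can stay with the monitor thread while the iterator handles the scan.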

> AsyncCallHandler should use an event-driven architecture to handle async calls
> -
>
> Key: HADOOP-13227
> URL: https://issues.apache.org/jira/browse/HADOOP-13227
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io, ipc
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
> Attachments: c13227_20160602.patch, c13227_20160606.patch
>
>
> This JIRA is to address [Jing's 
> comments|https://issues.apache.org/jira/browse/HADOOP-13226?focusedCommentId=15308630=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15308630]
>  in HADOOP-13226.






[jira] [Commented] (HADOOP-12807) S3AFileSystem should read AWS credentials from environment variables

2016-06-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317195#comment-15317195
 ] 

Hadoop QA commented on HADOOP-12807:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
28s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
45s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
12s{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
13s{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
20s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
40s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
14s{color} | {color:green} hadoop-aws in the patch passed with JDK v1.7.0_101. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 15m  2s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:babe025 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12808463/HADOOP-12807-branch-2-004.patch
 |
| JIRA Issue | HADOOP-12807 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 45ad7bceeacf 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | branch-2 / 074588d |
| Default Java | 1.7.0_101 |
| Multi-JDK versions |  

[jira] [Updated] (HADOOP-12807) S3AFileSystem should read AWS credentials from environment variables

2016-06-06 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12807:

Attachment: HADOOP-12807-branch-2-004.patch

Patch 004, 3 KB long. This had better apply to branch-2, or I'm in a mess.

> S3AFileSystem should read AWS credentials from environment variables
> 
>
> Key: HADOOP-12807
> URL: https://issues.apache.org/jira/browse/HADOOP-12807
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.7.2
>Reporter: Tobin Baker
>Priority: Minor
> Attachments: HADOOP-12807-1.patch, HADOOP-12807-branch-2-004.patch
>
>
> Unlike the {{DefaultAWSCredentialsProviderChain}} in the AWS SDK, the 
> {{AWSCredentialsProviderChain}} constructed by {{S3AFileSystem}} does not 
> include an {{EnvironmentVariableCredentialsProvider}} instance. This prevents 
> users from supplying AWS credentials in the environment variables 
> {{AWS_ACCESS_KEY_ID}} and {{AWS_SECRET_ACCESS_KEY}}, which is the only 
> alternative in some scenarios.
> In my scenario, I need to access S3 from within a test running in a CI 
> environment that does not support IAM roles but does allow me to supply 
> encrypted environment variables. Thus, the only secure approach I can use is 
> to supply my AWS credentials in environment variables (plaintext 
> configuration files are out of the question).






[jira] [Updated] (HADOOP-12807) S3AFileSystem should read AWS credentials from environment variables

2016-06-06 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12807:

Status: Open  (was: Patch Available)

> S3AFileSystem should read AWS credentials from environment variables
> 
>
> Key: HADOOP-12807
> URL: https://issues.apache.org/jira/browse/HADOOP-12807
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.7.2
>Reporter: Tobin Baker
>Priority: Minor
> Attachments: HADOOP-12807-1.patch, HADOOP-12807-branch-2-004.patch
>
>
> Unlike the {{DefaultAWSCredentialsProviderChain}} in the AWS SDK, the 
> {{AWSCredentialsProviderChain}} constructed by {{S3AFileSystem}} does not 
> include an {{EnvironmentVariableCredentialsProvider}} instance. This prevents 
> users from supplying AWS credentials in the environment variables 
> {{AWS_ACCESS_KEY_ID}} and {{AWS_SECRET_ACCESS_KEY}}, which is the only 
> alternative in some scenarios.
> In my scenario, I need to access S3 from within a test running in a CI 
> environment that does not support IAM roles but does allow me to supply 
> encrypted environment variables. Thus, the only secure approach I can use is 
> to supply my AWS credentials in environment variables (plaintext 
> configuration files are out of the question).
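The chain behaviour being requested can be sketched in a self-contained way (the names below are illustrative only, not the real S3A or AWS SDK classes): a credentials provider chain returns the first non-null credentials in provider order, so appending an environment-variable provider lets {{AWS_ACCESS_KEY_ID}}/{{AWS_SECRET_ACCESS_KEY}} serve as a fallback source.

```java
import java.util.Arrays;
import java.util.List;

// Illustrative sketch of the provider-chain pattern only; these are not
// the real S3AFileSystem or AWS SDK classes.
public class CredentialChainSketch {

  // Returns {accessKey, secretKey}, or null if this provider has nothing.
  public interface Provider {
    String[] getCredentials();
  }

  // First non-null credentials in provider order win.
  public static String[] resolve(List<Provider> chain) {
    for (Provider p : chain) {
      String[] c = p.getCredentials();
      if (c != null) {
        return c;
      }
    }
    throw new IllegalStateException(
        "Unable to load AWS credentials from any provider in the chain");
  }

  // Reads the standard AWS environment variables, as an
  // EnvironmentVariableCredentialsProvider would.
  public static Provider environmentProvider() {
    return () -> {
      String id = System.getenv("AWS_ACCESS_KEY_ID");
      String secret = System.getenv("AWS_SECRET_ACCESS_KEY");
      return (id != null && secret != null) ? new String[] {id, secret} : null;
    };
  }

  public static void main(String[] args) {
    Provider emptyConfig = () -> null;  // nothing configured in core-site.xml
    Provider fallback = () -> new String[] {"dummy-key", "dummy-secret"};
    String[] creds =
        resolve(Arrays.asList(emptyConfig, environmentProvider(), fallback));
    System.out.println("resolved access key: " + creds[0]);
  }
}
```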






[jira] [Updated] (HADOOP-12807) S3AFileSystem should read AWS credentials from environment variables

2016-06-06 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12807:

Status: Patch Available  (was: Open)

> S3AFileSystem should read AWS credentials from environment variables
> 
>
> Key: HADOOP-12807
> URL: https://issues.apache.org/jira/browse/HADOOP-12807
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.7.2
>Reporter: Tobin Baker
>Priority: Minor
> Attachments: HADOOP-12807-1.patch, HADOOP-12807-branch-2-004.patch
>
>
> Unlike the {{DefaultAWSCredentialsProviderChain}} in the AWS SDK, the 
> {{AWSCredentialsProviderChain}} constructed by {{S3AFileSystem}} does not 
> include an {{EnvironmentVariableCredentialsProvider}} instance. This prevents 
> users from supplying AWS credentials in the environment variables 
> {{AWS_ACCESS_KEY_ID}} and {{AWS_SECRET_ACCESS_KEY}}, which is the only 
> alternative in some scenarios.
> In my scenario, I need to access S3 from within a test running in a CI 
> environment that does not support IAM roles but does allow me to supply 
> encrypted environment variables. Thus, the only secure approach I can use is 
> to supply my AWS credentials in environment variables (plaintext 
> configuration files are out of the question).






[jira] [Updated] (HADOOP-13227) AsyncCallHandler should use a event driven architecture to handle async calls

2016-06-06 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HADOOP-13227:
-
Attachment: c13227_20160606.patch

> In RetryInfo#newRetryInfo, looks like failover, fail, and retry are mutual 
> exclusive? ...

It is correct that the end result is mutually exclusive.  However, we need to 
loop over all of the actions in order to determine which one to keep.  Indeed, 
we may combine the failover and fail actions in RetryInfo into a single action. 
Let me change it.

> ... consider directly using ConcurrentLinkedQueue which utilizes an efficient 
> non-block algorithm. 

It is a good idea.
 
> In checkCalls, do you think we can avoid the poll+offer operations for a 
> not-done-yet call?

I think it is hard to avoid.  We don't want to read the non-head elements from 
the queue, since that is an O(n) operation, where n is the size of the queue.  
Poll and offer are indeed cheap for a linked queue.  Let me know if you have an 
idea to avoid poll+offer.
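The poll+offer pattern under discussion can be sketched as follows (a hypothetical {{checkCalls}} helper, not the actual AsyncCallHandler code): each pass polls the head, drops calls that are done, and offers unfinished calls back to the tail; both operations are O(1) and non-blocking on a {{ConcurrentLinkedQueue}}.

```java
import java.util.Arrays;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.function.Predicate;

// Hypothetical sketch of the checkCalls loop discussed above; not the
// actual AsyncCallHandler implementation.
public class CheckCallsSketch {

  // One pass over the queue: completed calls are dropped from the head,
  // not-done-yet calls are offered back to the tail.  poll() and offer()
  // are O(1), non-blocking operations on ConcurrentLinkedQueue.
  public static <T> int checkCalls(Queue<T> calls, Predicate<T> isDone) {
    int completed = 0;
    final int n = calls.size();  // examine each call at most once per pass
    for (int i = 0; i < n; i++) {
      T call = calls.poll();
      if (call == null) {
        break;  // queue drained concurrently
      }
      if (isDone.test(call)) {
        completed++;             // done: drop it
      } else {
        calls.offer(call);       // not done: back to the tail
      }
    }
    return completed;
  }

  public static void main(String[] args) {
    Queue<Integer> q = new ConcurrentLinkedQueue<>(Arrays.asList(1, 2, 3, 4));
    int done = checkCalls(q, i -> i % 2 == 0);  // even "calls" are done
    System.out.println(done + " completed, " + q.size() + " pending");
    // prints: 2 completed, 2 pending
  }
}
```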

Here is a new patch:

c13227_20160606.patch

> AsyncCallHandler should use a event driven architecture to handle async calls
> -
>
> Key: HADOOP-13227
> URL: https://issues.apache.org/jira/browse/HADOOP-13227
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io, ipc
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
> Attachments: c13227_20160602.patch, c13227_20160606.patch
>
>
> This JIRA is to address [Jing's 
> comments|https://issues.apache.org/jira/browse/HADOOP-13226?focusedCommentId=15308630=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15308630]
>  in HADOOP-13226.






[jira] [Updated] (HADOOP-12807) S3AFileSystem should read AWS credentials from environment variables

2016-06-06 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12807:

Attachment: (was: HADOOP-12807-branch-2-002.patch)

> S3AFileSystem should read AWS credentials from environment variables
> 
>
> Key: HADOOP-12807
> URL: https://issues.apache.org/jira/browse/HADOOP-12807
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.7.2
>Reporter: Tobin Baker
>Priority: Minor
> Attachments: HADOOP-12807-1.patch
>
>
> Unlike the {{DefaultAWSCredentialsProviderChain}} in the AWS SDK, the 
> {{AWSCredentialsProviderChain}} constructed by {{S3AFileSystem}} does not 
> include an {{EnvironmentVariableCredentialsProvider}} instance. This prevents 
> users from supplying AWS credentials in the environment variables 
> {{AWS_ACCESS_KEY_ID}} and {{AWS_SECRET_ACCESS_KEY}}, which is the only 
> alternative in some scenarios.
> In my scenario, I need to access S3 from within a test running in a CI 
> environment that does not support IAM roles but does allow me to supply 
> encrypted environment variables. Thus, the only secure approach I can use is 
> to supply my AWS credentials in environment variables (plaintext 
> configuration files are out of the question).






[jira] [Updated] (HADOOP-12807) S3AFileSystem should read AWS credentials from environment variables

2016-06-06 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12807:

Attachment: (was: HADOOP-12807-branch-2-002.patch)

> S3AFileSystem should read AWS credentials from environment variables
> 
>
> Key: HADOOP-12807
> URL: https://issues.apache.org/jira/browse/HADOOP-12807
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.7.2
>Reporter: Tobin Baker
>Priority: Minor
> Attachments: HADOOP-12807-1.patch
>
>
> Unlike the {{DefaultAWSCredentialsProviderChain}} in the AWS SDK, the 
> {{AWSCredentialsProviderChain}} constructed by {{S3AFileSystem}} does not 
> include an {{EnvironmentVariableCredentialsProvider}} instance. This prevents 
> users from supplying AWS credentials in the environment variables 
> {{AWS_ACCESS_KEY_ID}} and {{AWS_SECRET_ACCESS_KEY}}, which is the only 
> alternative in some scenarios.
> In my scenario, I need to access S3 from within a test running in a CI 
> environment that does not support IAM roles but does allow me to supply 
> encrypted environment variables. Thus, the only secure approach I can use is 
> to supply my AWS credentials in environment variables (plaintext 
> configuration files are out of the question).






[jira] [Commented] (HADOOP-13203) S3a: Consider reducing the number of connection aborts by setting correct length in s3 request

2016-06-06 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317086#comment-15317086
 ] 

Steve Loughran commented on HADOOP-13203:
-

It looks like, as people note, the move may make forward seeking, or a mix of 
seek() + read() calls, more expensive. More specifically, it could well 
accelerate a sequence of readFully() offset calls, but not handle so well 
situations of seek(pos) + read(pos, n) + seek(pos + n + n2), the kind of 
pattern the forward skipping could handle.

Even regarding readFully() calls, it isn't going to handle well any mix of 
read() + readFully(), as the first read() will have triggered a read to the 
end of the file.

It seems to me that one could actually get something of both approaches if all 
reads specified a block length, such as 64KB. On sustained forward reads, it 
would read forward whenever the block boundary was crossed. On mixed seek/read 
operations, where the range of a read is unknown, this would significantly 
optimise random-access use, rather than only the workloads which exclusively 
use one read operation.

And here's the problem: right now we don't know which API/file use modes are 
in widespread use against S3. We don't have the data. I can see what you're 
highlighting: the current mechanism is very expensive for backwards seeks —but 
we have just optimised forward seeking *and* instrumented the code to collect 
detail on what's actually going on.

# I don't want to rush into a change which has the potential to make some 
existing codepaths worse —especially as we don't know how the FS gets used.
# I'd really like to see collected statistics on FS usage across a broad 
dataset. Anyone here is welcome to contribute to this —it should include 
statistics gathered in downstream use.

I'm very tempted to argue this should be an S3a phase III improvement: it has 
ramifications, and we should do it well. With the metrics, we are in a 
position to understand those ramifications and, if not in a rush, implement 
something which works well for a broad set of uses.

> S3a: Consider reducing the number of connection aborts by setting correct 
> length in s3 request
> --
>
> Key: HADOOP-13203
> URL: https://issues.apache.org/jira/browse/HADOOP-13203
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: HADOOP-13203-branch-2-001.patch, 
> HADOOP-13203-branch-2-002.patch, HADOOP-13203-branch-2-003.patch
>
>
> Currently file's "contentLength" is set as the "requestedStreamLen", when 
> invoking S3AInputStream::reopen().  As a part of lazySeek(), sometimes the 
> stream had to be closed and reopened. But lots of times the stream was closed 
> with abort() causing the internal http connection to be unusable. This incurs 
> lots of connection establishment cost in some jobs.  It would be good to set 
> the correct value for the stream length to avoid connection aborts. 
> I will post the patch once the AWS tests pass on my machine.
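The fix being described amounts to bounding the ranged GET by what the caller is likely to read, instead of always requesting up to {{contentLength}}; a hypothetical sketch of the arithmetic (not the actual S3AInputStream code):

```java
// Hypothetical sketch of the range computation described above; not the
// actual S3AInputStream code.
public class RequestLengthSketch {

  // Bound the ranged GET by the caller's likely need instead of the file's
  // full contentLength, so close() can cheaply drain the remainder of the
  // stream rather than abort() the HTTP connection.
  public static long requestedStreamLength(long targetPos, long readLen,
                                           long readahead, long contentLength) {
    return Math.min(contentLength, targetPos + Math.max(readLen, readahead));
  }

  public static void main(String[] args) {
    // 1 MB object, 8 KB read at offset 0, 64 KB readahead:
    System.out.println(requestedStreamLength(0, 8 * 1024, 64 * 1024, 1024 * 1024));
    // prints: 65536
  }
}
```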






[jira] [Updated] (HADOOP-13237) s3a initialization against public bucket fails if caller lacks any credentials

2016-06-06 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13237:
---
Attachment: HADOOP-13237.001.patch

Hello [~ste...@apache.org].  I got curious about this, and I think I have a 
solution, so I'm reopening and attaching a patch.  This is an incomplete patch 
just to communicate the idea, so I won't click Submit Patch yet.

I mentioned before that I think anonymous access should be opt-in only through 
explicit configuration, so users don't mistakenly set up an insecure 
deployment.  Instead of adding a new property, I now think the existing 
{{fs.s3a.aws.credentials.provider}} should be fine for this.  By setting it 
equal to {{AnonymousAWSCredentialsProvider}}, it should bypass the credentials 
chain (which insists on finding non-null credentials) and instead use anonymous 
credentials directly.

Unfortunately, there is a bug with that.  The reflection-based credential 
provider initialization logic demands that the class have a constructor that 
accepts a {{URI}} and a {{Configuration}}.  That wouldn't make sense for an 
{{AnonymousAWSCredentialsProvider}}, so I've added a fallback path to the 
initialization to support calling a default constructor.
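The fallback described above (try the two-argument constructor first, then fall back to the default constructor) can be sketched generically; this is illustrative only, not the actual initialization code:

```java
import java.lang.reflect.Constructor;
import java.net.URI;

// Illustrative sketch of the reflection fallback described above; the real
// code instantiates credential provider classes with a Hadoop Configuration,
// which is stubbed out as Object here.
public class ProviderFactorySketch {

  // Prefer a (URI, Object) constructor; fall back to the default constructor
  // for providers (like an anonymous-credentials provider) that need no
  // configuration at all.
  public static Object createProvider(Class<?> cls, URI uri, Object conf)
      throws Exception {
    try {
      Constructor<?> c = cls.getConstructor(URI.class, Object.class);
      return c.newInstance(uri, conf);
    } catch (NoSuchMethodException e) {
      return cls.getConstructor().newInstance();  // fallback path
    }
  }

  // Stand-in for a provider that only has a default constructor.
  public static class AnonymousProvider {
  }

  public static void main(String[] args) throws Exception {
    Object p = createProvider(AnonymousProvider.class,
        new URI("s3a://landsat-pds/"), null);
    System.out.println(p.getClass().getSimpleName());
    // prints: AnonymousProvider
  }
}
```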

I tested this by removing my S3A credentials from configuration and trying to 
access the public landsat-pds bucket.  I was able to repro the bug you 
reported.  Then, I applied my patch, retried, and it worked fine.

{code}
> hadoop fs -cat s3a://landsat-pds/run_info.json
cat: doesBucketExist on landsat-pds: com.amazonaws.AmazonClientException: 
Unable to load AWS credentials from any provider in the chain: Unable to load 
AWS credentials from any provider in the chain

> hadoop fs 
> -Dfs.s3a.aws.credentials.provider=org.apache.hadoop.fs.s3a.AnonymousAWSCredentialsProvider
>  -cat s3a://landsat-pds/run_info.json
{"active_run": "unknown on ip-10-144-75-61 started at 2016-06-06 
18:09:24.791372 (landsat_ingestor_exec.py)", "last_run": 4215}
{code}

Is this what you had in mind?  If so, let me know, and I'll finish off the 
remaining work for this patch:

# Add a unit test for anonymous access.
# Update documentation of fs.s3a.aws.credentials.provider in core-default.xml.
# Update hadoop-aws site documentation with more discussion of 
fs.s3a.aws.credentials.provider.
# Any other feedback from you or other code reviewers.

> s3a initialization against public bucket fails if caller lacks any credentials
> --
>
> Key: HADOOP-13237
> URL: https://issues.apache.org/jira/browse/HADOOP-13237
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Fix For: 2.8.0
>
> Attachments: HADOOP-13237.001.patch
>
>
> If an S3 bucket is public, anyone should be able to read from it.
> However, you cannot create an s3a client bonded to a public bucket unless you 
> have some credentials; the {{doesBucketExist()}} check rejects the call.






[jira] [Updated] (HADOOP-13237) s3a initialization against public bucket fails if caller lacks any credentials

2016-06-06 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13237:
---
Assignee: Chris Nauroth  (was: Steve Loughran)
Priority: Minor  (was: Major)

> s3a initialization against public bucket fails if caller lacks any credentials
> --
>
> Key: HADOOP-13237
> URL: https://issues.apache.org/jira/browse/HADOOP-13237
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Chris Nauroth
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-13237.001.patch
>
>
> If an S3 bucket is public, anyone should be able to read from it.
> However, you cannot create an s3a client bonded to a public bucket unless you 
> have some credentials; the {{doesBucketExist()}} check rejects the call.






[jira] [Reopened] (HADOOP-13237) s3a initialization against public bucket fails if caller lacks any credentials

2016-06-06 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth reopened HADOOP-13237:


> s3a initialization against public bucket fails if caller lacks any credentials
> --
>
> Key: HADOOP-13237
> URL: https://issues.apache.org/jira/browse/HADOOP-13237
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Fix For: 2.8.0
>
>
> If an S3 bucket is public, anyone should be able to read from it.
> However, you cannot create an s3a client bonded to a public bucket unless you 
> have some credentials; the {{doesBucketExist()}} check rejects the call.






[jira] [Commented] (HADOOP-12892) fix/rewrite create-release

2016-06-06 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317024#comment-15317024
 ] 

Wangda Tan commented on HADOOP-12892:
-

If -Preleasedocs is relatively easy to fix in branch-2/branch-2.8, we should 
look at it. [~aw], could you add your points regarding which JIRAs we need to 
backport to branch-2 to support -Preleasedocs? I can try to backport them.

Since I have very limited understanding of shell scripts, I don't really know 
what will break if we don't include HADOOP-12850.

Is the create-release in branch-2 broken by HADOOP-11792?

> fix/rewrite create-release
> --
>
> Key: HADOOP-12892
> URL: https://issues.apache.org/jira/browse/HADOOP-12892
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-12892.00.patch, HADOOP-12892.01.patch, 
> HADOOP-12892.02.patch, HADOOP-12892.03.patch
>
>
> create-release needs some major surgery.






[jira] [Commented] (HADOOP-12892) fix/rewrite create-release

2016-06-06 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15317020#comment-15317020
 ] 

Allen Wittenauer commented on HADOOP-12892:
---

Just a reminder that the old method of building on Jenkins violates ASF policy. 
 Also:
* I need to update the HowToRelease docs to use create-release
* -Pdocs is missing from the build, just like it was in the original 
create-release.sh file. :(

> fix/rewrite create-release
> --
>
> Key: HADOOP-12892
> URL: https://issues.apache.org/jira/browse/HADOOP-12892
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-12892.00.patch, HADOOP-12892.01.patch, 
> HADOOP-12892.02.patch, HADOOP-12892.03.patch
>
>
> create-release needs some major surgery.






[jira] [Commented] (HADOOP-12892) fix/rewrite create-release

2016-06-06 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15316982#comment-15316982
 ] 

Andrew Wang commented on HADOOP-12892:
--

So one missing feature in the branch-2/branch-2.8 create-release script is that 
it doesn't use the new -Preleasedocs to build the CHANGES.txt file. I'm not 
sure how much work it is to add this support, but that's the alternative if we 
don't backport this JIRA.

Could you provide some more context on what breaks if we don't include 
HADOOP-12850 and so on? It might be less work to tweak the backport to avoid 
these dependent JIRAs compared to adding -Preleasedocs support to the old 
script.

> fix/rewrite create-release
> --
>
> Key: HADOOP-12892
> URL: https://issues.apache.org/jira/browse/HADOOP-12892
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-12892.00.patch, HADOOP-12892.01.patch, 
> HADOOP-12892.02.patch, HADOOP-12892.03.patch
>
>
> create-release needs some major surgery.






[jira] [Assigned] (HADOOP-13240) TestAclCommands.testSetfaclValidations fail

2016-06-06 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge reassigned HADOOP-13240:
---

Assignee: John Zhuge

> TestAclCommands.testSetfaclValidations fail
> ---
>
> Key: HADOOP-13240
> URL: https://issues.apache.org/jira/browse/HADOOP-13240
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.1, 2.7.2
> Environment: hadoop 2.4.1,as6.5
>Reporter: linbao111
>Assignee: John Zhuge
>Priority: Minor
>
> mvn test -Djava.net.preferIPv4Stack=true -Dlog4j.rootLogger=DEBUG,console 
> -Dtest=TestAclCommands#testSetfaclValidations failed with the following message:
> ---
> Test set: org.apache.hadoop.fs.shell.TestAclCommands
> ---
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.599 sec <<< 
> FAILURE! - in org.apache.hadoop.fs.shell.TestAclCommands
> testSetfaclValidations(org.apache.hadoop.fs.shell.TestAclCommands)  Time 
> elapsed: 0.534 sec  <<< FAILURE!
> java.lang.AssertionError: setfacl should fail ACL spec missing
> at org.junit.Assert.fail(Assert.java:93)
> at org.junit.Assert.assertTrue(Assert.java:43)
> at org.junit.Assert.assertFalse(Assert.java:68)
> at 
> org.apache.hadoop.fs.shell.TestAclCommands.testSetfaclValidations(TestAclCommands.java:81)
> I notice from 
> HADOOP-10277 that the code in 
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntry.java
>  changed.
> Should 
> hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java
>  be changed to:
> diff --git 
> a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java
>  b/hadoop-common-project/hadoop-common/src/test/java/org/
> index b14cd37..463bfcd
> --- 
> a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java
> +++ 
> b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java
> @@ -80,7 +80,7 @@ public void testSetfaclValidations() throws Exception {
>  "/path" }));
>  assertFalse("setfacl should fail ACL spec missing",
>  0 == runCommand(new String[] { "-setfacl", "-m",
> -"", "/path" }));
> +":", "/path" }));
>}
>  
>@Test






[jira] [Commented] (HADOOP-12892) fix/rewrite create-release

2016-06-06 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15316864#comment-15316864
 ] 

Wangda Tan commented on HADOOP-12892:
-

Thanks [~andrew.wang], sorry for my late response, I missed your comment above.

I took a quick try at backporting this patch to branch-2.8; it has a couple of 
conflicts. At the least, it depends on HADOOP-12850, and HADOOP-12850 depends 
on more commits such as HADOOP-10115, which in turn are marked as incompatible 
changes.

Given the incompatible dependencies of this patch, I feel it is risky to 
backport all of them to branch-2.8. Since create-release worked in branch-2 
before branch-2.8, do you know which JIRA introduced the create-release 
failures in branch-2? Can we revert the problematic commits that lead to the 
failure?

> fix/rewrite create-release
> --
>
> Key: HADOOP-12892
> URL: https://issues.apache.org/jira/browse/HADOOP-12892
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-12892.00.patch, HADOOP-12892.01.patch, 
> HADOOP-12892.02.patch, HADOOP-12892.03.patch
>
>
> create-release needs some major surgery.






[jira] [Commented] (HADOOP-13189) FairCallQueue makes callQueue larger than the configured capacity.

2016-06-06 Thread Vinitha Reddy Gankidi (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15316800#comment-15316800
 ] 

Vinitha Reddy Gankidi commented on HADOOP-13189:


Attached a patch. Please review.

> FairCallQueue makes callQueue larger than the configured capacity.
> --
>
> Key: HADOOP-13189
> URL: https://issues.apache.org/jira/browse/HADOOP-13189
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.6.0
>Reporter: Konstantin Shvachko
>Assignee: Vinitha Reddy Gankidi
> Attachments: HADOOP-13189.001.patch
>
>
> {{FairCallQueue}} divides {{callQueue}} into multiple (4 by default) 
> sub-queues, with each sub-queue corresponding to a different level of 
> priority. The constructor for {{FairCallQueue}} takes the same parameter 
> {{capacity}} as the default CallQueue implementation, and allocates all its 
> sub-queues of size {{capacity}}. With 4 levels of priority (sub-queues) by 
> default it results in the total callQueue size 4 times larger than it should 
> be based on the configuration.
> {{capacity}} should be divided by the number of sub-queues at some place.
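The proposed division can be sketched with a hypothetical helper (not the actual patch): split {{capacity}} across the sub-queues and spread any remainder over the first queues, so the sub-queue capacities still sum to the configured total.

```java
// Hypothetical sketch of dividing the configured capacity among sub-queues;
// not the actual FairCallQueue patch.
public class CapacitySplitSketch {

  // Capacity of sub-queue i (0 = highest priority) out of numQueues.
  // The remainder is spread over the first queues so that the sub-queue
  // capacities sum exactly to totalCapacity.
  public static int subQueueCapacity(int totalCapacity, int numQueues, int i) {
    int base = totalCapacity / numQueues;
    int remainder = totalCapacity % numQueues;
    return base + (i < remainder ? 1 : 0);
  }

  public static void main(String[] args) {
    int total = 0;
    for (int i = 0; i < 4; i++) {
      total += subQueueCapacity(100, 4, i);
    }
    System.out.println("sum = " + total);
    // prints: sum = 100
  }
}
```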






[jira] [Updated] (HADOOP-13189) FairCallQueue makes callQueue larger than the configured capacity.

2016-06-06 Thread Vinitha Reddy Gankidi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinitha Reddy Gankidi updated HADOOP-13189:
---
Attachment: HADOOP-13189.001.patch

> FairCallQueue makes callQueue larger than the configured capacity.
> --
>
> Key: HADOOP-13189
> URL: https://issues.apache.org/jira/browse/HADOOP-13189
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.6.0
>Reporter: Konstantin Shvachko
>Assignee: Vinitha Reddy Gankidi
> Attachments: HADOOP-13189.001.patch
>
>
> {{FairCallQueue}} divides {{callQueue}} into multiple (4 by default) 
> sub-queues, with each sub-queue corresponding to a different level of 
> priority. The constructor for {{FairCallQueue}} takes the same parameter 
> {{capacity}} as the default CallQueue implementation, and allocates all its 
> sub-queues of size {{capacity}}. With 4 levels of priority (sub-queues) by 
> default it results in the total callQueue size 4 times larger than it should 
> be based on the configuration.
> {{capacity}} should be divided by the number of sub-queues at some place.






[jira] [Assigned] (HADOOP-13189) FairCallQueue makes callQueue larger than the configured capacity.

2016-06-06 Thread Vinitha Reddy Gankidi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinitha Reddy Gankidi reassigned HADOOP-13189:
--

Assignee: Vinitha Reddy Gankidi

> FairCallQueue makes callQueue larger than the configured capacity.
> --
>
> Key: HADOOP-13189
> URL: https://issues.apache.org/jira/browse/HADOOP-13189
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.6.0
>Reporter: Konstantin Shvachko
>Assignee: Vinitha Reddy Gankidi
> Attachments: HADOOP-13189.001.patch
>
>
> {{FairCallQueue}} divides {{callQueue}} into multiple (4 by default) 
> sub-queues, with each sub-queue corresponding to a different level of 
> priority. The constructor for {{FairCallQueue}} takes the same parameter 
> {{capacity}} as the default CallQueue implementation, and allocates all its 
> sub-queues of size {{capacity}}. With 4 levels of priority (sub-queues) by 
> default it results in the total callQueue size 4 times larger than it should 
> be based on the configuration.
> {{capacity}} should be divided by the number of sub-queues at some place.






[jira] [Updated] (HADOOP-12807) S3AFileSystem should read AWS credentials from environment variables

2016-06-06 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12807:

Attachment: HADOOP-12807-branch-2-002.patch

Don't know why the patch didn't take. Trying again, rebased onto branch-2.

> S3AFileSystem should read AWS credentials from environment variables
> 
>
> Key: HADOOP-12807
> URL: https://issues.apache.org/jira/browse/HADOOP-12807
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.7.2
>Reporter: Tobin Baker
>Priority: Minor
> Attachments: HADOOP-12807-1.patch, HADOOP-12807-branch-2-002.patch, 
> HADOOP-12807-branch-2-002.patch
>
>
> Unlike the {{DefaultAWSCredentialsProviderChain}} in the AWS SDK, the 
> {{AWSCredentialsProviderChain}} constructed by {{S3AFileSystem}} does not 
> include an {{EnvironmentVariableCredentialsProvider}} instance. This prevents 
> users from supplying AWS credentials in the environment variables 
> {{AWS_ACCESS_KEY_ID}} and {{AWS_SECRET_ACCESS_KEY}}, which is the only 
> alternative in some scenarios.
> In my scenario, I need to access S3 from within a test running in a CI 
> environment that does not support IAM roles but does allow me to supply 
> encrypted environment variables. Thus, the only secure approach I can use is 
> to supply my AWS credentials in environment variables (plaintext 
> configuration files are out of the question).






[jira] [Commented] (HADOOP-12807) S3AFileSystem should read AWS credentials from environment variables

2016-06-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15316669#comment-15316669
 ] 

Hadoop QA commented on HADOOP-12807:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  8s{color} 
| {color:red} HADOOP-12807 does not apply to branch-2. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12808413/HADOOP-12807-branch-2-002.patch
 |
| JIRA Issue | HADOOP-12807 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9666/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> S3AFileSystem should read AWS credentials from environment variables
> 
>
> Key: HADOOP-12807
> URL: https://issues.apache.org/jira/browse/HADOOP-12807
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.7.2
>Reporter: Tobin Baker
>Priority: Minor
> Attachments: HADOOP-12807-1.patch, HADOOP-12807-branch-2-002.patch, 
> HADOOP-12807-branch-2-002.patch
>
>
> Unlike the {{DefaultAWSCredentialsProviderChain}} in the AWS SDK, the 
> {{AWSCredentialsProviderChain}} constructed by {{S3AFileSystem}} does not 
> include an {{EnvironmentVariableCredentialsProvider}} instance. This prevents 
> users from supplying AWS credentials in the environment variables 
> {{AWS_ACCESS_KEY_ID}} and {{AWS_SECRET_ACCESS_KEY}}, which is the only 
> alternative in some scenarios (e.g. a CI environment without IAM roles that 
> only allows encrypted environment variables, where plaintext configuration 
> files are out of the question).






[jira] [Updated] (HADOOP-12807) S3AFileSystem should read AWS credentials from environment variables

2016-06-06 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12807:

Status: Patch Available  (was: Open)

> S3AFileSystem should read AWS credentials from environment variables
> 
>
> Key: HADOOP-12807
> URL: https://issues.apache.org/jira/browse/HADOOP-12807
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.7.2
>Reporter: Tobin Baker
>Priority: Minor
> Attachments: HADOOP-12807-1.patch, HADOOP-12807-branch-2-002.patch, 
> HADOOP-12807-branch-2-002.patch
>
>






[jira] [Commented] (HADOOP-10048) LocalDirAllocator should avoid holding locks while accessing the filesystem

2016-06-06 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15316672#comment-15316672
 ] 

Junping Du commented on HADOOP-10048:
-

Thanks [~jlowe] for updating the patch.
bq.  All or most of the threads could end up theoretically clustering on the 
same disk which is less than ideal. Attaching a new patch that uses an 
AtomicInteger to make sure that simultaneous threads won't get the same 
starting point when searching the directories.
Makes sense. This approach looks like a better way to solve the problem.

bq. An alternative approach would be to use a random starting location like is 
done when the size is not specified.
Agreed. That could be a nice improvement to make later. However, in the 
size-not-specified case, creating a Random object per call may not be 
necessary. Maybe that is also something we can improve later?

The latest (006) patch looks pretty good to me. I would like to commit it 
within the next 24 hours if there are no further comments.
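The AtomicInteger approach discussed above can be sketched as follows. This is a hypothetical helper, not the actual LocalDirAllocator code: each caller atomically claims the next starting index, so concurrent threads begin their directory search on different disks without holding a lock.

```java
import java.util.concurrent.atomic.AtomicInteger;

/** Sketch of a lock-free round-robin starting point for a directory search. */
public class DirStartSketch {
    private final AtomicInteger dirIndex = new AtomicInteger(0);

    /** Returns the next starting index in [0, numDirs) without locking. */
    public int nextStart(int numDirs) {
        // getAndIncrement eventually wraps to a negative value, so mask the
        // sign bit before taking the modulus to keep the index non-negative.
        return (dirIndex.getAndIncrement() & Integer.MAX_VALUE) % numDirs;
    }

    public static void main(String[] args) {
        DirStartSketch s = new DirStartSketch();
        // Four successive callers on four dirs get four distinct starts.
        for (int i = 0; i < 4; i++) {
            System.out.print(s.nextStart(4) + " "); // → 0 1 2 3
        }
    }
}
```

Compared with a shared unsynchronized counter, the atomic increment guarantees that two threads arriving simultaneously never observe the same starting point.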

> LocalDirAllocator should avoid holding locks while accessing the filesystem
> ---
>
> Key: HADOOP-10048
> URL: https://issues.apache.org/jira/browse/HADOOP-10048
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.3.0
>Reporter: Jason Lowe
>Assignee: Jason Lowe
> Attachments: HADOOP-10048.003.patch, HADOOP-10048.004.patch, 
> HADOOP-10048.005.patch, HADOOP-10048.006.patch, HADOOP-10048.patch, 
> HADOOP-10048.trunk.patch
>
>
> As noted in MAPREDUCE-5584 and HADOOP-7016, LocalDirAllocator can be a 
> bottleneck for multithreaded setups like the ShuffleHandler.  We should 
> consider moving to a lockless design or minimizing the critical sections to a 
> very small amount of time that does not involve I/O operations.






[jira] [Updated] (HADOOP-12807) S3AFileSystem should read AWS credentials from environment variables

2016-06-06 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12807:

Status: Open  (was: Patch Available)

> S3AFileSystem should read AWS credentials from environment variables
> 
>
> Key: HADOOP-12807
> URL: https://issues.apache.org/jira/browse/HADOOP-12807
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.7.2
>Reporter: Tobin Baker
>Priority: Minor
> Attachments: HADOOP-12807-1.patch, HADOOP-12807-branch-2-002.patch
>
>






[jira] [Created] (HADOOP-13241) document s3a better

2016-06-06 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-13241:
---

 Summary: document s3a better
 Key: HADOOP-13241
 URL: https://issues.apache.org/jira/browse/HADOOP-13241
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: documentation, fs/s3
Affects Versions: 2.8.0
Reporter: Steve Loughran
Assignee: Steve Loughran
Priority: Minor


s3a can be documented better: things like classpath, troubleshooting, etc.

Sit down and do it.






[jira] [Commented] (HADOOP-12291) Add support for nested groups in LdapGroupsMapping

2016-06-06 Thread Esther Kundin (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15316501#comment-15316501
 ] 

Esther Kundin commented on HADOOP-12291:


The posix code was added after I started working on the patch, and it goes down a 
different code path. I only added support for LDAP hierarchies; I don't think 
it will work with posix groups, so I added the check.

> Add support for nested groups in LdapGroupsMapping
> --
>
> Key: HADOOP-12291
> URL: https://issues.apache.org/jira/browse/HADOOP-12291
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Gautam Gopalakrishnan
>Assignee: Esther Kundin
>  Labels: features, patch
> Fix For: 2.8.0
>
> Attachments: HADOOP-12291.001.patch, HADOOP-12291.002.patch, 
> HADOOP-12291.003.patch, HADOOP-12291.004.patch, HADOOP-12291.005.patch, 
> HADOOP-12291.006.patch, HADOOP-12291.007.patch
>
>
> When using {{LdapGroupsMapping}} with Hadoop, nested groups are not 
> supported. So for example if user {{jdoe}} is part of group A which is a 
> member of group B, the group mapping currently returns only group A.
> Currently this facility is available with {{ShellBasedUnixGroupsMapping}} and 
> SSSD (or similar tools) but would be good to have this feature as part of 
> {{LdapGroupsMapping}} directly.






[jira] [Updated] (HADOOP-13240) TestAclCommands.testSetfaclValidations fail

2016-06-06 Thread linbao111 (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

linbao111 updated HADOOP-13240:
---
Description: 
mvn test -Djava.net.preferIPv4Stack=true -Dlog4j.rootLogger=DEBUG,console 
-Dtest=TestAclCommands#testSetfaclValidations failed with the following message:
---
Test set: org.apache.hadoop.fs.shell.TestAclCommands
---
Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.599 sec <<< 
FAILURE! - in org.apache.hadoop.fs.shell.TestAclCommands
testSetfaclValidations(org.apache.hadoop.fs.shell.TestAclCommands)  Time 
elapsed: 0.534 sec  <<< FAILURE!
java.lang.AssertionError: setfacl should fail ACL spec missing
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertFalse(Assert.java:68)
at 
org.apache.hadoop.fs.shell.TestAclCommands.testSetfaclValidations(TestAclCommands.java:81)

I notice from HADOOP-10277 that 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntry.java
 changed.

Should 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java
 be changed to:
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java
 b/hadoop-common-project/hadoop-common/src/test/java/org/
index b14cd37..463bfcd
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java
@@ -80,7 +80,7 @@ public void testSetfaclValidations() throws Exception {
 "/path" }));
 assertFalse("setfacl should fail ACL spec missing",
 0 == runCommand(new String[] { "-setfacl", "-m",
-"", "/path" }));
+":", "/path" }));
   }
 
   @Test

  was:
mvn test -Djava.net.preferIPv4Stack=true -Dlog4j.rootLogger=DEBUG,console 
-Dtest=TestAclCommands#testSetfaclValidations failed with following message:
---
Test set: org.apache.hadoop.fs.shell.TestAclCommands
---
Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.599 sec <<< 
FAILURE! - in org.apache.hadoop.fs.shell.TestAclCommands
testSetfaclValidations(org.apache.hadoop.fs.shell.TestAclCommands)  Time 
elapsed: 0.534 sec  <<< FAILURE!
java.lang.AssertionError: setfacl should fail ACL spec missing
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertFalse(Assert.java:68)
at 
org.apache.hadoop.fs.shell.TestAclCommands.testSetfaclValidations(TestAclCommands.java:81)

i notice from 
HADOOP-10277,hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntry.java
 code changed

should 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.javabe
 changed to:
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java
 b/hadoop-common-project/hadoop-common/src/test/java/org/
old mode 100644
new mode 100755
index b14cd37..463bfcd
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java
@@ -80,7 +80,7 @@ public void testSetfaclValidations() throws Exception {
 "/path" }));
 assertFalse("setfacl should fail ACL spec missing",
 0 == runCommand(new String[] { "-setfacl", "-m",
-"", "/path" }));
+":", "/path" }));
   }
 
   @Test


> TestAclCommands.testSetfaclValidations fail
> ---
>
> Key: HADOOP-13240
> URL: https://issues.apache.org/jira/browse/HADOOP-13240
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.1, 2.7.2
> Environment: hadoop 2.4.1,as6.5
>Reporter: linbao111
>Priority: Minor
>
> mvn test -Djava.net.preferIPv4Stack=true -Dlog4j.rootLogger=DEBUG,console 
> -Dtest=TestAclCommands#testSetfaclValidations failed with the following message:
> ---
> Test set: org.apache.hadoop.fs.shell.TestAclCommands
> ---
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.599 sec <<< 
> FAILURE! - in org.apache.hadoop.fs.shell.TestAclCommands
> 

[jira] [Updated] (HADOOP-13226) Support async call retry and failover

2016-06-06 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HADOOP-13226:
-
Fix Version/s: (was: HDFS-9924)
   2.8.0

> Support async call retry and failover
> -
>
> Key: HADOOP-13226
> URL: https://issues.apache.org/jira/browse/HADOOP-13226
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: io, ipc
>Reporter: Xiaobing Zhou
>Assignee: Tsz Wo Nicholas Sze
> Fix For: 2.8.0
>
> Attachments: h10433_20160524.patch, h10433_20160525.patch, 
> h10433_20160525b.patch, h10433_20160527.patch, h10433_20160528.patch, 
> h10433_20160528c.patch
>
>
> In the current Async DFS implementation, file system calls are invoked and 
> return a Future to clients immediately. Clients call Future#get to retrieve 
> the final results. Future#get internally invokes a chain of callbacks residing in 
> ClientNamenodeProtocolTranslatorPB, ProtobufRpcEngine and ipc.Client. The 
> callback path bypasses the original retry layer/logic designed for 
> synchronous DFS. This proposes refactoring so that retry also works for Async 
> DFS.






[jira] [Updated] (HADOOP-12957) Limit the number of outstanding async calls

2016-06-06 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HADOOP-12957:
-
Fix Version/s: (was: HDFS-9924)
   2.8.0

> Limit the number of outstanding async calls
> ---
>
> Key: HADOOP-12957
> URL: https://issues.apache.org/jira/browse/HADOOP-12957
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Fix For: 2.8.0
>
> Attachments: HADOOP-12957-HADOOP-12909.000.patch, 
> HADOOP-12957-combo.000.patch, HADOOP-12957.001.patch, HADOOP-12957.002.patch, 
> HADOOP-12957.003.patch, HADOOP-12957.004.patch, HADOOP-12957.005.patch, 
> HADOOP-12957.006.patch, HADOOP-12957.007.patch, HADOOP-12957.008.patch, 
> HADOOP-12957.009.patch, HADOOP-12957.010.patch, HADOOP-12957.011.patch
>
>
> In async RPC, if the callers don't read replies fast enough, the buffer 
> storing replies could be used up. This is to propose limiting the number of 
> outstanding async calls to eliminate the issue.
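One standard way to bound the number of outstanding calls is a counting semaphore: acquire a permit before issuing a call and release it when the call completes. This is a sketch of the general technique only, not the committed patch; the `AsyncCallLimiter` class and its methods are hypothetical names.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Semaphore;
import java.util.function.Supplier;

/** Sketch: cap the number of in-flight async calls with a semaphore. */
public class AsyncCallLimiter {
    private final Semaphore permits;

    public AsyncCallLimiter(int maxOutstanding) {
        this.permits = new Semaphore(maxOutstanding);
    }

    /** Blocks the caller while maxOutstanding calls are already in flight. */
    public <T> CompletableFuture<T> submit(Supplier<T> call) {
        permits.acquireUninterruptibly();           // count the call as outstanding
        return CompletableFuture.supplyAsync(call)
                .whenComplete((r, e) -> permits.release()); // free the slot on completion
    }

    public int available() {
        return permits.availablePermits();
    }

    public static void main(String[] args) {
        AsyncCallLimiter limiter = new AsyncCallLimiter(2);
        int result = limiter.submit(() -> 21 * 2).join();
        System.out.println(result + ", free permits: " + limiter.available());
    }
}
```

With this shape, a caller that issues calls faster than it consumes replies simply blocks in `submit`, so the reply buffer can never grow past `maxOutstanding` entries.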






[jira] [Updated] (HADOOP-13168) Support Future.get with timeout in ipc async calls

2016-06-06 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HADOOP-13168:
-
Fix Version/s: (was: HDFS-9924)
   2.8.0

> Support Future.get with timeout in ipc async calls
> --
>
> Key: HADOOP-13168
> URL: https://issues.apache.org/jira/browse/HADOOP-13168
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
> Fix For: 2.8.0
>
> Attachments: c13168_20160517.patch, c13168_20160518.patch, 
> c13168_20160519.patch
>
>
> Currently, the Future returned by ipc async call only support Future.get() 
> but not Future.get(timeout, unit).  We should support the latter as well.
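The requested behavior is exactly what the JDK's `java.util.concurrent.Future` contract defines; the sketch below shows the standard bounded `get` (the wrapper class here is illustrative, not the Hadoop ipc internals):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

/** Sketch: Future.get(timeout, unit) versus the unbounded Future.get(). */
public class FutureTimeoutSketch {
    public static String getWithTimeout(Future<String> f, long millis) {
        try {
            // Bounded wait: throws TimeoutException instead of blocking forever.
            return f.get(millis, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            return "timed out";
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        // A future that never completes: the bounded get returns promptly.
        System.out.println(getWithTimeout(new CompletableFuture<>(), 50)); // → timed out
    }
}
```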






[jira] [Created] (HADOOP-13240) TestAclCommands.testSetfaclValidations fail

2016-06-06 Thread linbao111 (JIRA)
linbao111 created HADOOP-13240:
--

 Summary: TestAclCommands.testSetfaclValidations fail
 Key: HADOOP-13240
 URL: https://issues.apache.org/jira/browse/HADOOP-13240
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.2, 2.4.1
 Environment: hadoop 2.4.1,as6.5
Reporter: linbao111
Priority: Minor


mvn test -Djava.net.preferIPv4Stack=true -Dlog4j.rootLogger=DEBUG,console 
-Dtest=TestAclCommands#testSetfaclValidations failed with the following message:
---
Test set: org.apache.hadoop.fs.shell.TestAclCommands
---
Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.599 sec <<< 
FAILURE! - in org.apache.hadoop.fs.shell.TestAclCommands
testSetfaclValidations(org.apache.hadoop.fs.shell.TestAclCommands)  Time 
elapsed: 0.534 sec  <<< FAILURE!
java.lang.AssertionError: setfacl should fail ACL spec missing
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertFalse(Assert.java:68)
at 
org.apache.hadoop.fs.shell.TestAclCommands.testSetfaclValidations(TestAclCommands.java:81)

I notice from HADOOP-10277 that 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntry.java
 changed.

Should 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java
 be changed to:
diff --git 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java
 b/hadoop-common-project/hadoop-common/src/test/java/org/
old mode 100644
new mode 100755
index b14cd37..463bfcd
--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestAclCommands.java
@@ -80,7 +80,7 @@ public void testSetfaclValidations() throws Exception {
 "/path" }));
 assertFalse("setfacl should fail ACL spec missing",
 0 == runCommand(new String[] { "-setfacl", "-m",
-"", "/path" }));
+":", "/path" }));
   }
 
   @Test


