[jira] [Commented] (HADOOP-14284) Shade Guava everywhere

2017-08-23 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16139568#comment-16139568
 ] 

Tsuyoshi Ozawa commented on HADOOP-14284:
-

In summary, the Guava problem is caused by downstream projects using HDFS's 
internal interfaces. Hence, we will make an effort to urge downstream projects 
to use the shaded client instead. Is this correct?

Can we think about the protobuf problem in the same way?

> Shade Guava everywhere
> --
>
> Key: HADOOP-14284
> URL: https://issues.apache.org/jira/browse/HADOOP-14284
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha4
>Reporter: Andrew Wang
>Assignee: Tsuyoshi Ozawa
>Priority: Blocker
> Attachments: HADOOP-14238.pre001.patch, HADOOP-14284.002.patch, 
> HADOOP-14284.004.patch, HADOOP-14284.007.patch, HADOOP-14284.010.patch, 
> HADOOP-14284.012.patch
>
>
> HADOOP-10101 upgraded the guava version for 3.x to 21.
> Guava is broadly used by Java projects that consume our artifacts. 
> Unfortunately, these projects also consume our private artifacts like 
> {{hadoop-hdfs}}. They are also unlikely to be on the new shaded client introduced 
> by HADOOP-11804, currently only available in 3.0.0-alpha2.
> We should shade Guava everywhere to proactively avoid breaking downstreams. 
> This isn't a requirement for all dependency upgrades, but it's necessary for 
> known-bad dependencies like Guava.






[jira] [Commented] (HADOOP-14284) Shade Guava everywhere

2017-08-23 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16139546#comment-16139546
 ] 

Allen Wittenauer commented on HADOOP-14284:
---

What should 'hadoop classpath', et al. return? It really seems like it (and 
anything else that isn't a daemon or admin command) should be returning/using a 
shaded jar plus supplemental bits from tools/.

> Shade Guava everywhere
> --
>
> Key: HADOOP-14284
> URL: https://issues.apache.org/jira/browse/HADOOP-14284
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha4
>Reporter: Andrew Wang
>Assignee: Tsuyoshi Ozawa
>Priority: Blocker
> Attachments: HADOOP-14238.pre001.patch, HADOOP-14284.002.patch, 
> HADOOP-14284.004.patch, HADOOP-14284.007.patch, HADOOP-14284.010.patch, 
> HADOOP-14284.012.patch
>
>
> HADOOP-10101 upgraded the guava version for 3.x to 21.
> Guava is broadly used by Java projects that consume our artifacts. 
> Unfortunately, these projects also consume our private artifacts like 
> {{hadoop-hdfs}}. They are also unlikely to be on the new shaded client introduced 
> by HADOOP-11804, currently only available in 3.0.0-alpha2.
> We should shade Guava everywhere to proactively avoid breaking downstreams. 
> This isn't a requirement for all dependency upgrades, but it's necessary for 
> known-bad dependencies like Guava.






[jira] [Commented] (HADOOP-14284) Shade Guava everywhere

2017-08-23 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16139472#comment-16139472
 ] 

Junping Du commented on HADOOP-14284:
-

Sounds like I missed the latest comments from [~busbey] and [~vinodkv] while 
commenting. It looks like we are on the same page - shade the client only.

> Shade Guava everywhere
> --
>
> Key: HADOOP-14284
> URL: https://issues.apache.org/jira/browse/HADOOP-14284
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha4
>Reporter: Andrew Wang
>Assignee: Tsuyoshi Ozawa
>Priority: Blocker
> Attachments: HADOOP-14238.pre001.patch, HADOOP-14284.002.patch, 
> HADOOP-14284.004.patch, HADOOP-14284.007.patch, HADOOP-14284.010.patch, 
> HADOOP-14284.012.patch
>
>
> HADOOP-10101 upgraded the guava version for 3.x to 21.
> Guava is broadly used by Java projects that consume our artifacts. 
> Unfortunately, these projects also consume our private artifacts like 
> {{hadoop-hdfs}}. They are also unlikely to be on the new shaded client introduced 
> by HADOOP-11804, currently only available in 3.0.0-alpha2.
> We should shade Guava everywhere to proactively avoid breaking downstreams. 
> This isn't a requirement for all dependency upgrades, but it's necessary for 
> known-bad dependencies like Guava.






[jira] [Commented] (HADOOP-14284) Shade Guava everywhere

2017-08-23 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16139467#comment-16139467
 ] 

Junping Du commented on HADOOP-14284:
-

I still prefer shading the client only. What Stack proposed above is slightly 
better than the current approach (aka Shade Guava Everywhere), but it is still 
too complicated for me. As Stack mentioned above, it requires changes across 
our current (and future) code base, since every reference to a third-party 
library class would have to use the internal (relocated) package name instead - 
this is not automatic (see the sketch below). Maybe we could write some 
automated tooling for it, but that would be another maintenance burden going 
forward.

bq. Unfortunately for HDFS, there are a bunch of downstreams incorrectly 
including our server artifacts, so for HDFS, I think we need to shade those too.
Shouldn't we fix this incorrect usage in downstream projects starting with the 
brand-new Hadoop 3.0? It may take some effort to synchronize releases between 
Hadoop and downstream projects, but I think it would be worth it.
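
To make the relocation concern above concrete, here is a rough sketch; the 
relocated package prefix is purely an assumption for illustration, not the 
prefix produced by any patch on this issue.

{code:java}
// Hypothetical illustration of package relocation - the shaded prefix below
// is an assumption, not what any HADOOP-14284 patch actually produces.

// Today, in-tree code refers to Guava under its own coordinates:
import com.google.common.base.Preconditions;

// Under the proposal discussed above, every such reference would have to use
// an internal, relocated package name instead, e.g.:
// import org.apache.hadoop.shaded.com.google.common.base.Preconditions;

public class RelocationExample {
  public static String requireName(String name) {
    // The call site does not change; only the import does, which is why the
    // change has to be applied (and kept applied) across the whole tree.
    return Preconditions.checkNotNull(name, "name must not be null");
  }
}
{code}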


> Shade Guava everywhere
> --
>
> Key: HADOOP-14284
> URL: https://issues.apache.org/jira/browse/HADOOP-14284
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha4
>Reporter: Andrew Wang
>Assignee: Tsuyoshi Ozawa
>Priority: Blocker
> Attachments: HADOOP-14238.pre001.patch, HADOOP-14284.002.patch, 
> HADOOP-14284.004.patch, HADOOP-14284.007.patch, HADOOP-14284.010.patch, 
> HADOOP-14284.012.patch
>
>
> HADOOP-10101 upgraded the guava version for 3.x to 21.
> Guava is broadly used by Java projects that consume our artifacts. 
> Unfortunately, these projects also consume our private artifacts like 
> {{hadoop-hdfs}}. They are also unlikely to be on the new shaded client introduced 
> by HADOOP-11804, currently only available in 3.0.0-alpha2.
> We should shade Guava everywhere to proactively avoid breaking downstreams. 
> This isn't a requirement for all dependency upgrades, but it's necessary for 
> known-bad dependencies like Guava.






[jira] [Commented] (HADOOP-14559) FTPFileSystem instance in TestFTPFileSystem should be created before tests and closed after tests

2017-08-23 Thread Hongyuan Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16139464#comment-16139464
 ] 

Hongyuan Li commented on HADOOP-14559:
--

Hi [~ste...@apache.org], [~yzhangal], sorry to interrupt you - should this 
issue be resolved or closed?

> FTPFileSystem instance in TestFTPFileSystem should be created before tests 
> and closed after tests 
> --
>
> Key: HADOOP-14559
> URL: https://issues.apache.org/jira/browse/HADOOP-14559
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, test
>Affects Versions: 3.0.0-alpha2
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
>Priority: Minor
> Attachments: HADOOP-14559-001.patch
>
>
> As an improvement, the FTPFileSystem instance used in TestFTPFileSystem 
> should be closed in each test case.
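
A minimal sketch of what that lifecycle management could look like (field and 
method names here are illustrative assumptions, not the actual 
TestFTPFileSystem code):

{code:java}
// Illustrative sketch only - not the actual TestFTPFileSystem implementation.
import org.apache.hadoop.fs.ftp.FTPFileSystem;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;

public class FtpFileSystemLifecycleSketch {
  private FTPFileSystem ftpFs;

  @Before
  public void setUp() {
    // Create the instance before each test...
    ftpFs = new FTPFileSystem();
  }

  @After
  public void tearDown() throws Exception {
    // ...and always close it afterwards, even when a test fails.
    if (ftpFs != null) {
      ftpFs.close();
    }
  }

  @Test
  public void testSomething() {
    // individual tests use the managed ftpFs instance
  }
}
{code}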






[jira] [Commented] (HADOOP-14284) Shade Guava everywhere

2017-08-23 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16139454#comment-16139454
 ] 

Vinod Kumar Vavilapalli commented on HADOOP-14284:
--

bq. Essentially, if we're not shading the server-side for YARN / MapReduce then 
we ought not. For client-side stuff we should just tell people who want to not 
have a Guava conflict that they need to use the published shaded client jars.
I did miss this in this long thread. This is essentially saying that if you are 
upgrading to 3.0:
 - Don't use hadoop-hdfs - this is our private interface. Use the client jar.
 - If you want to use your own custom guava / jackson / whatever, use the 
shaded client jars. Maybe always use the shaded client jars to future-proof 
yourselves.

Did I read that right, [~busbey]?

While this is a change for downstream users, I can get behind it as the one 
final step from 2.x to 3.x that will shield our users going forward.
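
For illustration, a minimal sketch of what "use the client jar" means in 
practice: downstream code sticks to the public {{org.apache.hadoop.fs.FileSystem}} 
API and never imports classes from the {{hadoop-hdfs}} server artifact (the file 
path below is just an example):

{code:java}
// Sketch of downstream usage that stays on the public client API.
// Nothing here imports org.apache.hadoop.hdfs.* (server-side internals).
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class PublicApiOnly {
  public static int firstByte(String file) throws Exception {
    Configuration conf = new Configuration();
    // Resolves to HDFS (or any other filesystem) via fs.defaultFS.
    FileSystem fs = FileSystem.get(conf);
    try (FSDataInputStream in = fs.open(new Path(file))) {
      return in.read();
    }
  }
}
{code}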

> Shade Guava everywhere
> --
>
> Key: HADOOP-14284
> URL: https://issues.apache.org/jira/browse/HADOOP-14284
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha4
>Reporter: Andrew Wang
>Assignee: Tsuyoshi Ozawa
>Priority: Blocker
> Attachments: HADOOP-14238.pre001.patch, HADOOP-14284.002.patch, 
> HADOOP-14284.004.patch, HADOOP-14284.007.patch, HADOOP-14284.010.patch, 
> HADOOP-14284.012.patch
>
>
> HADOOP-10101 upgraded the guava version for 3.x to 21.
> Guava is broadly used by Java projects that consume our artifacts. 
> Unfortunately, these projects also consume our private artifacts like 
> {{hadoop-hdfs}}. They are also unlikely to be on the new shaded client introduced 
> by HADOOP-11804, currently only available in 3.0.0-alpha2.
> We should shade Guava everywhere to proactively avoid breaking downstreams. 
> This isn't a requirement for all dependency upgrades, but it's necessary for 
> known-bad dependencies like Guava.






[jira] [Commented] (HADOOP-14284) Shade Guava everywhere

2017-08-23 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16139447#comment-16139447
 ] 

Sean Busbey commented on HADOOP-14284:
--

I maintain the same view I had before:

{quote}
bq. Unfortunately for HDFS, there are a bunch of downstreams incorrectly 
including our server artifacts, so for HDFS, I think we need to shade those too.
This seems like the wrong approach IMHO. There's no incentive for folks to ever 
start respecting the public/non-public interface boundary we put up. If we're 
only concerned about client-side impact, then we should just upgrade guava and 
close this jira. That way, if folks run into a problem with the guava upgrade, 
we just tell them to use the correct client jars instead of the server jars. We 
can even put it in the release note for the guava upgrade.
{quote}

Essentially, if we're not shading the server-side for YARN / MapReduce then we 
ought not. For client-side stuff we should just tell people who want to not 
have a Guava conflict that they need to use the published shaded client jars.

> Shade Guava everywhere
> --
>
> Key: HADOOP-14284
> URL: https://issues.apache.org/jira/browse/HADOOP-14284
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha4
>Reporter: Andrew Wang
>Assignee: Tsuyoshi Ozawa
>Priority: Blocker
> Attachments: HADOOP-14238.pre001.patch, HADOOP-14284.002.patch, 
> HADOOP-14284.004.patch, HADOOP-14284.007.patch, HADOOP-14284.010.patch, 
> HADOOP-14284.012.patch
>
>
> HADOOP-10101 upgraded the guava version for 3.x to 21.
> Guava is broadly used by Java projects that consume our artifacts. 
> Unfortunately, these projects also consume our private artifacts like 
> {{hadoop-hdfs}}. They are also unlikely to be on the new shaded client introduced 
> by HADOOP-11804, currently only available in 3.0.0-alpha2.
> We should shade Guava everywhere to proactively avoid breaking downstreams. 
> This isn't a requirement for all dependency upgrades, but it's necessary for 
> known-bad dependencies like Guava.






[jira] [Commented] (HADOOP-14284) Shade Guava everywhere

2017-08-23 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16139419#comment-16139419
 ] 

Vinod Kumar Vavilapalli commented on HADOOP-14284:
--

Paging [~djp] for his opinion.

This is a long JIRA and there's been a lot of back-and-forth, so just 
reiterating. I am not a fan of this but if you want to shade everything because 
of the hadoop-hdfs module issue, you should definitely leave YARN and mapreduce 
out of this - see my comment above, *snip*
bq. I think there is one thing we should definitely do. YARN and mapreduce have 
always had separate client libraries. And the expectation of users has always 
been to not depend on the server jars. So you should definitely skip YARN 
server side modules and mapreduce non-client modules from shading.

> Shade Guava everywhere
> --
>
> Key: HADOOP-14284
> URL: https://issues.apache.org/jira/browse/HADOOP-14284
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha4
>Reporter: Andrew Wang
>Assignee: Tsuyoshi Ozawa
>Priority: Blocker
> Attachments: HADOOP-14238.pre001.patch, HADOOP-14284.002.patch, 
> HADOOP-14284.004.patch, HADOOP-14284.007.patch, HADOOP-14284.010.patch, 
> HADOOP-14284.012.patch
>
>
> HADOOP-10101 upgraded the guava version for 3.x to 21.
> Guava is broadly used by Java projects that consume our artifacts. 
> Unfortunately, these projects also consume our private artifacts like 
> {{hadoop-hdfs}}. They are also unlikely to be on the new shaded client introduced 
> by HADOOP-11804, currently only available in 3.0.0-alpha2.
> We should shade Guava everywhere to proactively avoid breaking downstreams. 
> This isn't a requirement for all dependency upgrades, but it's necessary for 
> known-bad dependencies like Guava.






[jira] [Commented] (HADOOP-14799) Update nimbus-jose-jwt to 4.41.1

2017-08-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16139417#comment-16139417
 ] 

Hadoop QA commented on HADOOP-14799:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 21m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
57s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
34s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
3s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 
48s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 51s{color} | {color:orange} root: The patch generated 1 new + 128 unchanged 
- 7 fixed = 129 total (was 135) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
23s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
3s{color} | {color:green} hadoop-auth in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
17s{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}113m 51s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14799 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12883449/HADOOP-14799.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 83df6ca0053b 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/p

[jira] [Commented] (HADOOP-14688) Intern strings in KeyVersion and EncryptedKeyVersion

2017-08-23 Thread Misha Dmitriev (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16139371#comment-16139371
 ] 

Misha Dmitriev commented on HADOOP-14688:
-

[~daryn]: when a live heap dump is captured, as done here, a full GC is 
performed before the heap snapshot is taken. So if the given application 
produces objects that are very short-lived, i.e. quickly become garbage, we 
will only see the ones that are still live at that moment, which is typically 
not many. Conversely, most objects in a live heap dump tend to be relatively 
long-lived.

Furthermore, experience has shown that for reasonably long-lived strings, the 
CPU overhead of interning is small compared to the reduction in memory 
pressure, reduced GC pauses, etc. That is, the cost of a fast internal 
String.intern() call is comparable to the cost of the GC repeatedly scanning 
and moving around all the extra copies of a string that would otherwise remain 
in memory without interning.
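
As a rough illustration of the interning pattern being discussed (a sketch, not 
the actual {{KeyVersion}} code from the patch):

{code:java}
// Sketch: intern the highly repetitive identifying strings so that many value
// objects share a single copy of "myKey" / "myKey@0" instead of each holding
// its own duplicate for the GC to scan and move around.
public class KeyVersionSketch {
  private final String name;         // key name, e.g. "myKey"
  private final String versionName;  // key version name, e.g. "myKey@0"
  private final byte[] material;

  public KeyVersionSketch(String name, String versionName, byte[] material) {
    this.name = name == null ? null : name.intern();
    this.versionName = versionName == null ? null : versionName.intern();
    this.material = material;
  }
}
{code}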

> Intern strings in KeyVersion and EncryptedKeyVersion
> 
>
> Key: HADOOP-14688
> URL: https://issues.apache.org/jira/browse/HADOOP-14688
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: GC root of the String.png, HADOOP-14688.01.patch, 
> heapdump analysis.png, jxray.report
>
>
> This is inspired by [~mi...@cloudera.com]'s work on HDFS-11383.
> The key names and key version names are usually the same for a bunch of 
> {{KeyVersion}} and {{EncryptedKeyVersion}}. We should not create duplicate 
> objects for them.
> This is even more important for HDFS-10899, where we try to re-encrypt all 
> files' EDEKs in a given EZ. Those EDEKs all have the same key name, and mostly 
> use no more than a couple of key version names.






[jira] [Updated] (HADOOP-9747) Reduce unnecessary UGI synchronization

2017-08-23 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-9747:
--
Target Version/s: 2.8.2  (was: 2.8.1)

> Reduce unnecessary UGI synchronization
> --
>
> Key: HADOOP-9747
> URL: https://issues.apache.org/jira/browse/HADOOP-9747
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0-alpha1
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Attachments: HADOOP-9747.2.branch-2.patch, HADOOP-9747.2.trunk.patch, 
> HADOOP-9747.branch-2.patch, HADOOP-9747.trunk.patch
>
>
> Jstacks of heavily loaded NNs show up to dozens of threads blocking in the 
> UGI.






[jira] [Commented] (HADOOP-14729) Upgrade JUnit 3 TestCase to JUnit 4

2017-08-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16139353#comment-16139353
 ] 

Hadoop QA commented on HADOOP-14729:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 53 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
42s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  6m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  7m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
35s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 
42s{color} | {color:green} root generated 0 new + 1289 unchanged - 2 fixed = 
1289 total (was 1291) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 14s{color} | {color:orange} root: The patch generated 43 new + 703 unchanged 
- 97 fixed = 746 total (was 800) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  6m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  9m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
54s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
51s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
37s{color} | {color:green} hadoop-yarn-server-web-proxy in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
54s{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
57s{color} | {color:green} hadoop-mapreduce-client-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 98m 
18s{color} | {color:green} hadoop-mapreduce-client-jobclient in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
32s{color} | {color:green} hadoop-mapreduce-client-nativetask in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
51s{color} | {color:green} hadoop-mapreduce-examples in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  5m 
42s{color} | {color:green} hadoop-streaming in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
44s{color} | {color:green} hadoop-datajoin in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
4s{color} | {color:green} hadoop-extras in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
49s{color} | {color:green} The patch

[jira] [Commented] (HADOOP-14284) Shade Guava everywhere

2017-08-23 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16139325#comment-16139325
 ] 

Daniel Templeton commented on HADOOP-14284:
---

Given that we're heading rapidly into the beta1 code freeze, let's make a call 
and move forward.  While shading only the clients is the smaller change, it 
creates inconsistency in the code base if we follow HBase's lead on modifying 
the imports, which I think we should.  I would therefore vote to shade Guava 
everywhere.  Any objections?  [~ozawa], will you have time to work on this 
patch before beta1?

> Shade Guava everywhere
> --
>
> Key: HADOOP-14284
> URL: https://issues.apache.org/jira/browse/HADOOP-14284
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha4
>Reporter: Andrew Wang
>Assignee: Tsuyoshi Ozawa
>Priority: Blocker
> Attachments: HADOOP-14238.pre001.patch, HADOOP-14284.002.patch, 
> HADOOP-14284.004.patch, HADOOP-14284.007.patch, HADOOP-14284.010.patch, 
> HADOOP-14284.012.patch
>
>
> HADOOP-10101 upgraded the guava version for 3.x to 21.
> Guava is broadly used by Java projects that consume our artifacts. 
> Unfortunately, these projects also consume our private artifacts like 
> {{hadoop-hdfs}}. They are also unlikely to be on the new shaded client introduced 
> by HADOOP-11804, currently only available in 3.0.0-alpha2.
> We should shade Guava everywhere to proactively avoid breaking downstreams. 
> This isn't a requirement for all dependency upgrades, but it's necessary for 
> known-bad dependencies like Guava.






[jira] [Commented] (HADOOP-14796) Update json-simple version to 1.1.1

2017-08-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16139321#comment-16139321
 ] 

Hadoop QA commented on HADOOP-14796:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m  
9s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m  
6s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m  
5s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 17m 23s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14796 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12883450/HADOOP-14796.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  |
| uname | Linux 3deacf09660f 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 26d8c8f |
| Default Java | 1.8.0_144 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13103/testReport/ |
| modules | C: hadoop-project U: hadoop-project |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13103/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Update json-simple version to 1.1.1
> ---
>
> Key: HADOOP-14796
> URL: https://issues.apache.org/jira/browse/HADOOP-14796
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ray Chiang
>Assignee: Ray Chiang
> Attachments: HADOOP-14796.001.patch
>
>
> Update the dependency
> com.googlecode.json-simple:json-simple:1.1
> to the latest (1.1.1).






[jira] [Commented] (HADOOP-14654) Update httpclient version to 4.5.3

2017-08-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16139323#comment-16139323
 ] 

Hadoop QA commented on HADOOP-14654:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
35s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m  
7s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m  
7s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 17m 46s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14654 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12878848/HADOOP-14654.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  |
| uname | Linux cb28f2aa8b8b 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 26d8c8f |
| Default Java | 1.8.0_144 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13104/testReport/ |
| modules | C: hadoop-project U: hadoop-project |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13104/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Update httpclient version to 4.5.3
> --
>
> Key: HADOOP-14654
> URL: https://issues.apache.org/jira/browse/HADOOP-14654
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-beta1
>Reporter: Ray Chiang
>Assignee: Ray Chiang
> Attachments: HADOOP-14654.001.patch
>
>
> Update the dependency
> org.apache.httpcomponents:httpclient:4.5.2
> to the latest (4.5.3).






[jira] [Commented] (HADOOP-14649) Update aliyun-sdk-oss version to 2.8.1

2017-08-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16139315#comment-16139315
 ] 

Hudson commented on HADOOP-14649:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12232 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12232/])
HADOOP-14649. Update aliyun-sdk-oss version to 2.8.1. (Genmao Yu via (rchiang: 
rev 26d8c8fa586a634ae91993940b28b3a1452d4be6)
* (edit) hadoop-project/pom.xml


> Update aliyun-sdk-oss version to 2.8.1
> --
>
> Key: HADOOP-14649
> URL: https://issues.apache.org/jira/browse/HADOOP-14649
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ray Chiang
>Assignee: Genmao Yu
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14649.000.patch
>
>
> Update the dependency
> com.aliyun.oss:aliyun-sdk-oss:2.4.1
> to the latest (2.8.1).






[jira] [Assigned] (HADOOP-14796) Update json-simple version to 1.1.1

2017-08-23 Thread Ray Chiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray Chiang reassigned HADOOP-14796:
---

Assignee: Ray Chiang

> Update json-simple version to 1.1.1
> ---
>
> Key: HADOOP-14796
> URL: https://issues.apache.org/jira/browse/HADOOP-14796
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ray Chiang
>Assignee: Ray Chiang
> Attachments: HADOOP-14796.001.patch
>
>
> Update the dependency
> com.googlecode.json-simple:json-simple:1.1
> to the latest (1.1.1).






[jira] [Updated] (HADOOP-14654) Update httpclient version to 4.5.3

2017-08-23 Thread Ray Chiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray Chiang updated HADOOP-14654:

Status: Patch Available  (was: Open)

Comparing against my baseline testing, I'm not seeing any new test failures.

> Update httpclient version to 4.5.3
> --
>
> Key: HADOOP-14654
> URL: https://issues.apache.org/jira/browse/HADOOP-14654
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-beta1
>Reporter: Ray Chiang
>Assignee: Ray Chiang
> Attachments: HADOOP-14654.001.patch
>
>
> Update the dependency
> org.apache.httpcomponents:httpclient:4.5.2
> to the latest (4.5.3).






[jira] [Updated] (HADOOP-14796) Update json-simple version to 1.1.1

2017-08-23 Thread Ray Chiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray Chiang updated HADOOP-14796:

Status: Patch Available  (was: Open)

> Update json-simple version to 1.1.1
> ---
>
> Key: HADOOP-14796
> URL: https://issues.apache.org/jira/browse/HADOOP-14796
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ray Chiang
> Attachments: HADOOP-14796.001.patch
>
>
> Update the dependency
> com.googlecode.json-simple:json-simple:1.1
> to the latest (1.1.1).






[jira] [Updated] (HADOOP-14796) Update json-simple version to 1.1.1

2017-08-23 Thread Ray Chiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray Chiang updated HADOOP-14796:

Attachment: HADOOP-14796.001.patch

> Update json-simple version to 1.1.1
> ---
>
> Key: HADOOP-14796
> URL: https://issues.apache.org/jira/browse/HADOOP-14796
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ray Chiang
> Attachments: HADOOP-14796.001.patch
>
>
> Update the dependency
> com.googlecode.json-simple:json-simple:1.1
> to the latest (1.1.1).






[jira] [Updated] (HADOOP-14649) Update aliyun-sdk-oss version to 2.8.1

2017-08-23 Thread Ray Chiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray Chiang updated HADOOP-14649:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-beta1
   Status: Resolved  (was: Patch Available)

Committed to trunk.

Thanks [~uncleGen] for the contribution!  Thanks [~drankye] and 
[~ste...@apache.org] for your help in getting this done!

> Update aliyun-sdk-oss version to 2.8.1
> --
>
> Key: HADOOP-14649
> URL: https://issues.apache.org/jira/browse/HADOOP-14649
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ray Chiang
>Assignee: Genmao Yu
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14649.000.patch
>
>
> Update the dependency
> com.aliyun.oss:aliyun-sdk-oss:2.4.1
> to the latest (2.8.1).






[jira] [Updated] (HADOOP-14799) Update nimbus-jose-jwt to 4.41.1

2017-08-23 Thread Ray Chiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray Chiang updated HADOOP-14799:

Attachment: HADOOP-14799.002.patch

- Fix typo in class name
- Change method visibility in 
JWTRedirectAuthenticationHandler#constructLoginURL() for testing

> Update nimbus-jose-jwt to 4.41.1
> 
>
> Key: HADOOP-14799
> URL: https://issues.apache.org/jira/browse/HADOOP-14799
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ray Chiang
>Assignee: Ray Chiang
> Attachments: HADOOP-14799.001.patch, HADOOP-14799.002.patch
>
>
> Update the dependency
> com.nimbusds:nimbus-jose-jwt:3.9
> to the latest (4.41.1)






[jira] [Commented] (HADOOP-13197) Add non-decayed call metrics for DecayRpcScheduler

2017-08-23 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16139270#comment-16139270
 ] 

Wei-Chiu Chuang commented on HADOOP-13197:
--

Update fix versions based on git log.

> Add non-decayed call metrics for DecayRpcScheduler
> --
>
> Key: HADOOP-13197
> URL: https://issues.apache.org/jira/browse/HADOOP-13197
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc, metrics
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Fix For: 2.8.0, 2.9.0, 3.0.0-alpha1
>
> Attachments: HADOOP-13197.00.patch, HADOOP-13197.01.patch, 
> HADOOP-13197.02.patch
>
>
> DecayRpcScheduler currently exposes the decayed call count over time. It will 
> be useful to expose the non-decayed raw count for monitoring applications. 
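
A rough sketch of the idea (field and method names are assumptions for 
illustration, not the actual DecayRpcScheduler implementation):

{code:java}
// Sketch: keep a raw, never-decayed counter next to the decayed one so that
// monitoring systems can also see the absolute call volume per caller.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

public class CallCountSketch {
  // decayed count: periodically multiplied down by a decay factor
  private final Map<String, AtomicLong> decayedCounts = new ConcurrentHashMap<>();
  // raw count: only ever incremented, exposed as-is to metrics
  private final Map<String, AtomicLong> rawCounts = new ConcurrentHashMap<>();

  public void recordCall(String caller) {
    decayedCounts.computeIfAbsent(caller, c -> new AtomicLong()).incrementAndGet();
    rawCounts.computeIfAbsent(caller, c -> new AtomicLong()).incrementAndGet();
  }

  /** Called periodically to age out old activity from the decayed view. */
  public void decay(double factor) {
    for (AtomicLong count : decayedCounts.values()) {
      count.set((long) (count.get() * factor));
    }
  }

  public long getRawCount(String caller) {
    AtomicLong count = rawCounts.get(caller);
    return count == null ? 0 : count.get();
  }
}
{code}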






[jira] [Updated] (HADOOP-13197) Add non-decayed call metrics for DecayRpcScheduler

2017-08-23 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-13197:
-
Fix Version/s: 2.8.0
   3.0.0-alpha1
   2.9.0

> Add non-decayed call metrics for DecayRpcScheduler
> --
>
> Key: HADOOP-13197
> URL: https://issues.apache.org/jira/browse/HADOOP-13197
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc, metrics
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Fix For: 2.8.0, 2.9.0, 3.0.0-alpha1
>
> Attachments: HADOOP-13197.00.patch, HADOOP-13197.01.patch, 
> HADOOP-13197.02.patch
>
>
> DecayRpcScheduler currently exposes the decayed call count over time. It will 
> be useful to expose the non-decayed raw count for monitoring applications. 






[jira] [Commented] (HADOOP-14649) Update aliyun-sdk-oss version to 2.8.1

2017-08-23 Thread Ray Chiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16139226#comment-16139226
 ] 

Ray Chiang commented on HADOOP-14649:
-

Thanks [~uncleGen].  +1.  Committing this soon.

> Update aliyun-sdk-oss version to 2.8.1
> --
>
> Key: HADOOP-14649
> URL: https://issues.apache.org/jira/browse/HADOOP-14649
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ray Chiang
>Assignee: Genmao Yu
> Attachments: HADOOP-14649.000.patch
>
>
> Update the dependency
> com.aliyun.oss:aliyun-sdk-oss:2.4.1
> to the latest (2.8.1).






[jira] [Updated] (HADOOP-14729) Upgrade JUnit 3 TestCase to JUnit 4

2017-08-23 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-14729:

Attachment: HADOOP-14729.012.patch

> Upgrade JUnit 3 TestCase to JUnit 4
> ---
>
> Key: HADOOP-14729
> URL: https://issues.apache.org/jira/browse/HADOOP-14729
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Akira Ajisaka
>Assignee: Ajay Kumar
>  Labels: newbie
> Attachments: HADOOP-14729.001.patch, HADOOP-14729.002.patch, 
> HADOOP-14729.003.patch, HADOOP-14729.004.patch, HADOOP-14729.005.patch, 
> HADOOP-14729.006.patch, HADOOP-14729.007.patch, HADOOP-14729.008.patch, 
> HADOOP-14729.009.patch, HADOOP-14729.010.patch, HADOOP-14729.011.patch, 
> HADOOP-14729.012.patch
>
>
> There are still test classes that extend from junit.framework.TestCase in 
> hadoop-common. Upgrade them to JUnit4.
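
For reference, the general shape of the migration (a generic before/after 
sketch, not taken from any particular test in the patch):

{code:java}
// Before: JUnit 3 style - the class extends TestCase and test methods are
// discovered by the "test" name prefix.
//
//   import junit.framework.TestCase;
//
//   public class TestSomething extends TestCase {
//     public void testAddition() {
//       assertEquals(4, 2 + 2);
//     }
//   }

// After: JUnit 4 style - no superclass, @Test annotations instead of naming
// conventions, and asserts statically imported from org.junit.Assert.
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class TestSomething {
  @Test
  public void testAddition() {
    assertEquals(4, 2 + 2);
  }

  @Test(timeout = 30000)
  public void testWithTimeout() {
    // JUnit 4 also brings per-test timeouts, @Before/@After fixtures, etc.
    assertEquals("ab", "a" + "b");
  }
}
{code}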






[jira] [Commented] (HADOOP-9747) Reduce unnecessary UGI synchronization

2017-08-23 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16138923#comment-16138923
 ] 

Eric Payne commented on HADOOP-9747:


[~daryn], Thanks for providing the fixes for the YARN tests.

+1. The patch LGTM. If there are no concerns, I will commit tomorrow afternoon.

> Reduce unnecessary UGI synchronization
> --
>
> Key: HADOOP-9747
> URL: https://issues.apache.org/jira/browse/HADOOP-9747
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0-alpha1
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Attachments: HADOOP-9747.2.branch-2.patch, HADOOP-9747.2.trunk.patch, 
> HADOOP-9747.branch-2.patch, HADOOP-9747.trunk.patch
>
>
> Jstacks of heavily loaded NNs show up to dozens of threads blocking in the 
> UGI.






[jira] [Updated] (HADOOP-14251) Credential provider should handle property key deprecation

2017-08-23 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14251:

   Resolution: Fixed
Fix Version/s: 2.8.3
   3.0.0-beta1
   2.9.0
   Status: Resolved  (was: Patch Available)

Committed to trunk, branch-2, and branch-2.8.

Thanks [~steve_l] for the review!

> Credential provider should handle property key deprecation
> --
>
> Key: HADOOP-14251
> URL: https://issues.apache.org/jira/browse/HADOOP-14251
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Critical
> Fix For: 2.9.0, 3.0.0-beta1, 2.8.3
>
> Attachments: HADOOP-14251.001.patch, HADOOP-14251.002.patch, 
> HADOOP-14251.003.patch, HADOOP-14251.004.patch, HADOOP-14251.005.patch
>
>
> The properties with old keys stored in a credential store can not be read via 
> the new property keys, even though the old keys have been deprecated.
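
For context, a sketch of the behaviour being asked for. The property names are 
made up for illustration; {{Configuration.addDeprecation}} and 
{{Configuration#getPassword}} are the existing APIs involved.

{code:java}
// Sketch: a secret was stored in a credential provider under the OLD property
// name, and code now reads it via the NEW name. With deprecation handled, the
// lookup should resolve through the old alias instead of coming back empty.
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;

public class DeprecatedKeySketch {
  static {
    // Made-up property names, for illustration only.
    Configuration.addDeprecation("fs.old.password.key", "fs.new.password.key");
  }

  public static char[] readSecret(Configuration conf) throws IOException {
    // getPassword() consults the configured credential providers first; the
    // improvement is about honouring key deprecation on that lookup path.
    return conf.getPassword("fs.new.password.key");
  }
}
{code}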






[jira] [Commented] (HADOOP-14251) Credential provider should handle property key deprecation

2017-08-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16138916#comment-16138916
 ] 

Hudson commented on HADOOP-14251:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12231 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12231/])
HADOOP-14251. Credential provider should handle property key (jzhuge: rev 
7e6463d2fb5f9383d88baec290461868cf476e4c)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfiguration.java


> Credential provider should handle property key deprecation
> --
>
> Key: HADOOP-14251
> URL: https://issues.apache.org/jira/browse/HADOOP-14251
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Critical
> Attachments: HADOOP-14251.001.patch, HADOOP-14251.002.patch, 
> HADOOP-14251.003.patch, HADOOP-14251.004.patch, HADOOP-14251.005.patch
>
>
> The properties with old keys stored in a credential store can not be read via 
> the new property keys, even though the old keys have been deprecated.






[jira] [Comment Edited] (HADOOP-14729) Upgrade JUnit 3 TestCase to JUnit 4

2017-08-23 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16138904#comment-16138904
 ] 

Ajay Kumar edited comment on HADOOP-14729 at 8/23/17 7:17 PM:
--

[~ste...@apache.org], Reverted the changes for the azure and s3 test classes. Jira 
to track the upgrade of TestS3NInMemoryFileSystem: [HADOOP-14803]


was (Author: ajayydv):
[~ste...@apache.org], Reverted the changes for the azure and s3 test classes. Jira 
to track the upgrade of TestS3NInMemoryFileSystem: [#HADOOP-14803]

> Upgrade JUnit 3 TestCase to JUnit 4
> ---
>
> Key: HADOOP-14729
> URL: https://issues.apache.org/jira/browse/HADOOP-14729
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Akira Ajisaka
>Assignee: Ajay Kumar
>  Labels: newbie
> Attachments: HADOOP-14729.001.patch, HADOOP-14729.002.patch, 
> HADOOP-14729.003.patch, HADOOP-14729.004.patch, HADOOP-14729.005.patch, 
> HADOOP-14729.006.patch, HADOOP-14729.007.patch, HADOOP-14729.008.patch, 
> HADOOP-14729.009.patch, HADOOP-14729.010.patch, HADOOP-14729.011.patch
>
>
> There are still test classes that extend from junit.framework.TestCase in 
> hadoop-common. Upgrade them to JUnit4.






[jira] [Commented] (HADOOP-14729) Upgrade JUnit 3 TestCase to JUnit 4

2017-08-23 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16138904#comment-16138904
 ] 

Ajay Kumar commented on HADOOP-14729:
-

[~ste...@apache.org], Reverted the changes for the azure and s3 test classes. Jira 
to track the upgrade for TestS3NInMemoryFileSystem: [#HADOOP-14803]

> Upgrade JUnit 3 TestCase to JUnit 4
> ---
>
> Key: HADOOP-14729
> URL: https://issues.apache.org/jira/browse/HADOOP-14729
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Akira Ajisaka
>Assignee: Ajay Kumar
>  Labels: newbie
> Attachments: HADOOP-14729.001.patch, HADOOP-14729.002.patch, 
> HADOOP-14729.003.patch, HADOOP-14729.004.patch, HADOOP-14729.005.patch, 
> HADOOP-14729.006.patch, HADOOP-14729.007.patch, HADOOP-14729.008.patch, 
> HADOOP-14729.009.patch, HADOOP-14729.010.patch, HADOOP-14729.011.patch
>
>
> There are still test classes that extend from junit.framework.TestCase in 
> hadoop-common. Upgrade them to JUnit4.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14729) Upgrade JUnit 3 TestCase to JUnit 4

2017-08-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16138902#comment-16138902
 ] 

Hadoop QA commented on HADOOP-14729:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} HADOOP-14729 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-14729 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12883387/HADOOP-14729.011.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13100/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Upgrade JUnit 3 TestCase to JUnit 4
> ---
>
> Key: HADOOP-14729
> URL: https://issues.apache.org/jira/browse/HADOOP-14729
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Akira Ajisaka
>Assignee: Ajay Kumar
>  Labels: newbie
> Attachments: HADOOP-14729.001.patch, HADOOP-14729.002.patch, 
> HADOOP-14729.003.patch, HADOOP-14729.004.patch, HADOOP-14729.005.patch, 
> HADOOP-14729.006.patch, HADOOP-14729.007.patch, HADOOP-14729.008.patch, 
> HADOOP-14729.009.patch, HADOOP-14729.010.patch, HADOOP-14729.011.patch
>
>
> There are still test classes that extend from junit.framework.TestCase in 
> hadoop-common. Upgrade them to JUnit4.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14803) Upgrade JUnit 3 TestCase to JUnit 4 in TestS3NInMemoryFileSystem

2017-08-23 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-14803:

Description:  Upgrade JUnit 3 TestCase to JUnit 4 in 
TestS3NInMemoryFileSystem

> Upgrade JUnit 3 TestCase to JUnit 4 in TestS3NInMemoryFileSystem
> 
>
> Key: HADOOP-14803
> URL: https://issues.apache.org/jira/browse/HADOOP-14803
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Ajay Kumar
>
>  Upgrade JUnit 3 TestCase to JUnit 4 in TestS3NInMemoryFileSystem



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14803) Upgrade JUnit 3 TestCase to JUnit 4 in TestS3NInMemoryFileSystem

2017-08-23 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-14803:

Tags: junit3

> Upgrade JUnit 3 TestCase to JUnit 4 in TestS3NInMemoryFileSystem
> 
>
> Key: HADOOP-14803
> URL: https://issues.apache.org/jira/browse/HADOOP-14803
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Ajay Kumar
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14803) Upgrade JUnit 3 TestCase to JUnit 4 in TestS3NInMemoryFileSystem

2017-08-23 Thread Ajay Kumar (JIRA)
Ajay Kumar created HADOOP-14803:
---

 Summary: Upgrade JUnit 3 TestCase to JUnit 4 in 
TestS3NInMemoryFileSystem
 Key: HADOOP-14803
 URL: https://issues.apache.org/jira/browse/HADOOP-14803
 Project: Hadoop Common
  Issue Type: Test
Reporter: Ajay Kumar






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14729) Upgrade JUnit 3 TestCase to JUnit 4

2017-08-23 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-14729:

Attachment: HADOOP-14729.011.patch

> Upgrade JUnit 3 TestCase to JUnit 4
> ---
>
> Key: HADOOP-14729
> URL: https://issues.apache.org/jira/browse/HADOOP-14729
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Akira Ajisaka
>Assignee: Ajay Kumar
>  Labels: newbie
> Attachments: HADOOP-14729.001.patch, HADOOP-14729.002.patch, 
> HADOOP-14729.003.patch, HADOOP-14729.004.patch, HADOOP-14729.005.patch, 
> HADOOP-14729.006.patch, HADOOP-14729.007.patch, HADOOP-14729.008.patch, 
> HADOOP-14729.009.patch, HADOOP-14729.010.patch, HADOOP-14729.011.patch
>
>
> There are still test classes that extend from junit.framework.TestCase in 
> hadoop-common. Upgrade them to JUnit4.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14444) New implementation of ftp and sftp filesystems

2017-08-23 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-1?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16138811#comment-16138811
 ] 

Hadoop QA commented on HADOOP-1:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 31 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
33s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
4s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-tools {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
39s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
25s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
1s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
6s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-tools {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
33s{color} | {color:red} hadoop-tools_hadoop-ftp generated 1 new + 0 unchanged 
- 0 fixed = 1 total (was 0) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  1m 
12s{color} | {color:red} hadoop-tools generated 1 new + 434 unchanged - 0 fixed 
= 435 total (was 434) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
29s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
41s{color} | {color:green} hadoop-ftp in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 48m 10s{color} 
| {color:red} hadoop-tools in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
45s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}131m 17s{color} |

[jira] [Commented] (HADOOP-14799) Update nimbus-jose-jwt to 4.41.1

2017-08-23 Thread Ray Chiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16138744#comment-16138744
 ] 

Ray Chiang commented on HADOOP-14799:
-

From what I can tell from mvnrepository.com, both versions should be APL 2.0. 
Am I missing something?

And I didn't notice the misspelling.  Will fix that.

> Update nimbus-jose-jwt to 4.41.1
> 
>
> Key: HADOOP-14799
> URL: https://issues.apache.org/jira/browse/HADOOP-14799
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ray Chiang
>Assignee: Ray Chiang
> Attachments: HADOOP-14799.001.patch
>
>
> Update the dependency
> com.nimbusds:nimbus-jose-jwt:3.9
> to the latest (4.41.1)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14801) s3guard diff demand creates a new table

2017-08-23 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16138736#comment-16138736
 ] 

Sean Mackrory commented on HADOOP-14801:


This is probably it. The forceCreate option was only put in for the create 
command, and I don't see it being correct for any other command to use.

{code}
diff --git 
a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3GuardTool.java
 
b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3GuardTool.java
index 9bd0cb8..958ddee 100644
--- 
a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3GuardTool.java
+++ 
b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3GuardTool.java
@@ -700,7 +700,7 @@ public int run(String[] args, PrintStream out) throws 
IOException {
   }
   String s3Path = paths.get(0);
   initS3AFileSystem(s3Path);
-  initMetadataStore(true);
+  initMetadataStore(false);
 
   URI uri;
   try {
{code}

> s3guard diff demand creates a new table
> ---
>
> Key: HADOOP-14801
> URL: https://issues.apache.org/jira/browse/HADOOP-14801
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Priority: Minor
>
> If you call {{s3guard diff}} to diff a bucket and a table, it creates the 
> table if it is not already there. I don't see that as being the right thing to do.
> {code}
> hadoop s3guard diff $bucket
> 2017-08-22 15:14:47,025 INFO s3guard.DynamoDBMetadataStore: Creating 
> non-existent DynamoDB table hwdev-steve-ireland-new in region eu-west-1
> 2017-08-22 15:14:52,384 INFO s3guard.S3GuardTool: Metadata store 
> DynamoDBMetadataStore{region=eu-west-1, tableName=hwdev-steve-ireland-new} is 
> initialized.
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14251) Credential provider should handle property key deprecation

2017-08-23 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16138730#comment-16138730
 ] 

Steve Loughran commented on HADOOP-14251:
-

LGTM

+1

nice useful piece of work

> Credential provider should handle property key deprecation
> --
>
> Key: HADOOP-14251
> URL: https://issues.apache.org/jira/browse/HADOOP-14251
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Critical
> Attachments: HADOOP-14251.001.patch, HADOOP-14251.002.patch, 
> HADOOP-14251.003.patch, HADOOP-14251.004.patch, HADOOP-14251.005.patch
>
>
> The properties with old keys stored in a credential store can not be read via 
> the new property keys, even though the old keys have been deprecated.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14220) Add S3GuardTool diagnostics command

2017-08-23 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14220:

Attachment: HADOOP-14220-HADOOP-13345-002.patch

HADOOP-14220

* new s3guard CLI command bucket-info
* change how all S3GuardTools fail: they now throw an ExitUtil.ExitException 
with a status code (from the LauncherExitCodes list). It's easier to bail out, 
and tests can get the full stack trace on any failure instead of a "return 
code was wrong, check the logs" kind of report (see the sketch below)
* unit test for invalid args to the CLI
* test for some more invalid states of existing ops
* some new tests for bucket-info, though not enough (need some on DDB + bucket)
* address various other s3guard tool issues discussed elsewhere

This is a big enough change that it's not stable yet; it'll be targeting 
post-merge s3guard.
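An illustrative sketch of that failure pattern (not the attached patch; the
method and exit code are invented, assuming hadoop-common's
ExitUtil.ExitException):

{code}
import org.apache.hadoop.util.ExitUtil;

public class S3GuardFailureSketch {
  // Hypothetical exit code, for illustration only.
  private static final int E_NOT_FOUND = 44;

  static void requireTable(boolean tableExists, String tableName) {
    if (!tableExists) {
      // The exception carries both the process status code and a full stack
      // trace, so tests can assert on the failure directly instead of parsing
      // "return code was wrong, check the logs" style output.
      throw new ExitUtil.ExitException(E_NOT_FOUND,
          "Metadata store table not found: " + tableName);
    }
  }
}
{code}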


> Add S3GuardTool diagnostics command
> ---
>
> Key: HADOOP-14220
> URL: https://issues.apache.org/jira/browse/HADOOP-14220
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-14220-HADOOP-13345-001.patch, 
> HADOOP-14220-HADOOP-13345-002.patch
>
>
> Add a diagnostics command to s3guard which does whatever we need to diagnose 
> problems for a specific (named) s3a url. This is something which can be 
> attached to bug reports as well as used by developers.
> * Properties to log (with provenance attribute, which can track bucket 
> overrides: s3guard metastore setup, autocreate, capacity, 
> * table present/absent
> * # of keys in DDB table for that bucket?
> * any other stats?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14220) Add S3GuardTool bucket-info command

2017-08-23 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14220:

Summary: Add S3GuardTool bucket-info command  (was: Add S3GuardTool 
diagnostics command)

> Add S3GuardTool bucket-info command
> ---
>
> Key: HADOOP-14220
> URL: https://issues.apache.org/jira/browse/HADOOP-14220
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-14220-HADOOP-13345-001.patch, 
> HADOOP-14220-HADOOP-13345-002.patch
>
>
> Add a diagnostics command to s3guard which does whatever we need to diagnose 
> problems for a specific (named) s3a url. This is something which can be 
> attached to bug reports as well as used by developers.
> * Properties to log (with provenance attribute, which can track bucket 
> overrides: s3guard metastore setup, autocreate, capacity, 
> * table present/absent
> * # of keys in DDB table for that bucket?
> * any other stats?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14799) Update nimbus-jose-jwt to 4.41.1

2017-08-23 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16138713#comment-16138713
 ] 

Steve Loughran commented on HADOOP-14799:
-

This fixes licensing problems, doesn't it? So it should be tagged as a blocker 
for the 3.0 beta.

Patch-wise, LGTM, though this is a good time to fix the spelling of the test 
from {{TestJWTRedirectAuthentictionHandler}} to 
{{TestJWTRedirectAuthenticationHandler}}.

> Update nimbus-jose-jwt to 4.41.1
> 
>
> Key: HADOOP-14799
> URL: https://issues.apache.org/jira/browse/HADOOP-14799
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ray Chiang
>Assignee: Ray Chiang
> Attachments: HADOOP-14799.001.patch
>
>
> Update the dependency
> com.nimbusds:nimbus-jose-jwt:3.9
> to the latest (4.41.1)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14802) Add support for using container saskeys for all accesses

2017-08-23 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16138617#comment-16138617
 ] 

Steve Loughran commented on HADOOP-14802:
-

You are going to have to hit submit for yetus to look at the patch, and get the 
machine happy before you get the full peer review.

Before that, can you state: which Azure endpoint have you run *all* the azure 
tests against?

This is the policy we're going to be enforcing; the requirement for you to 
declare that you've done the test is our barrier to anything untested going in: 
https://github.com/steveloughran/hadoop/blob/azure/HADOOP-14553-testing/hadoop-tools/hadoop-azure/src/site/markdown/testing_azure.md

I don't see any new tests here BTW. Why not?

> Add support for using container saskeys for all accesses
> 
>
> Key: HADOOP-14802
> URL: https://issues.apache.org/jira/browse/HADOOP-14802
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Sivaguru Sankaridurg
>Assignee: Sivaguru Sankaridurg
>  Labels: azure, fs, secure, wasb
> Attachments: HADOOP-14802.001.patch
>
>
> This JIRA tracks adding support for using container saskey for all storage 
> access.
> Instead of using saskeys that are specific to each blob, it is possible to 
> re-use the container saskey for all blob accesses.
> This provides a performance improvement over using blob-specific saskeys



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14802) Add support for using container saskeys for all accesses

2017-08-23 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14802:

Issue Type: Sub-task  (was: Improvement)
Parent: HADOOP-14552

> Add support for using container saskeys for all accesses
> 
>
> Key: HADOOP-14802
> URL: https://issues.apache.org/jira/browse/HADOOP-14802
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Sivaguru Sankaridurg
>Assignee: Sivaguru Sankaridurg
>  Labels: azure, fs, secure, wasb
> Attachments: HADOOP-14802.001.patch
>
>
> This JIRA tracks adding support for using container saskey for all storage 
> access.
> Instead of using saskeys that are specific to each blob, it is possible to 
> re-use the container saskey for all blob accesses.
> This provides a performance improvement over using blob-specific saskeys



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14729) Upgrade JUnit 3 TestCase to JUnit 4

2017-08-23 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16138609#comment-16138609
 ] 

Steve Loughran commented on HADOOP-14729:
-

Has anyone tested the azure stuff? 

If not, I'm going to ask for those changes to be kept out under the general "no 
changes to WASB without a full test run" policy.

In HADOOP-14553 we've been moving everything to a test/integration test setup; 
I'd really like to do the changes in there. It brings the test run down to <15 
minutes, which means the "test before you submit a patch" policy is viable.

> Upgrade JUnit 3 TestCase to JUnit 4
> ---
>
> Key: HADOOP-14729
> URL: https://issues.apache.org/jira/browse/HADOOP-14729
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Akira Ajisaka
>Assignee: Ajay Kumar
>  Labels: newbie
> Attachments: HADOOP-14729.001.patch, HADOOP-14729.002.patch, 
> HADOOP-14729.003.patch, HADOOP-14729.004.patch, HADOOP-14729.005.patch, 
> HADOOP-14729.006.patch, HADOOP-14729.007.patch, HADOOP-14729.008.patch, 
> HADOOP-14729.009.patch, HADOOP-14729.010.patch
>
>
> There are still test classes that extend from junit.framework.TestCase in 
> hadoop-common. Upgrade them to JUnit4.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14444) New implementation of ftp and sftp filesystems

2017-08-23 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-1?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16138603#comment-16138603
 ] 

Steve Loughran commented on HADOOP-1:
-

* I'd like to have people playing with this, people who have the need to use 
the ftp code. It's only through that kind of use that we find problems.
* we are pretty much feature complete for 3.0, and I'd like to target this at 
3.1. Not just for the new client, but because of the changes it'll bring to the 
packaging. I don't want to complicate what has become fairly complex right now 
(there's a lot of shading going on, you see).
* I haven't had time to play with this myself. I'm keeping an eye on it so it 
doesn't languish, forgotten; nobody likes that.

> New implementation of ftp and sftp filesystems
> --
>
> Key: HADOOP-1
> URL: https://issues.apache.org/jira/browse/HADOOP-1
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Lukas Waldmann
>Assignee: Lukas Waldmann
> Attachments: HADOOP-1.2.patch, HADOOP-1.3.patch, 
> HADOOP-1.4.patch, HADOOP-1.5.patch, HADOOP-1.6.patch, 
> HADOOP-1.7.patch, HADOOP-1.8.patch, HADOOP-1.9.patch, 
> HADOOP-1.patch
>
>
> The current FTP and SFTP filesystem implementations have severe limitations 
> and performance issues when dealing with a high number of files. My patch 
> solves those issues and integrates both filesystems in such a way that most 
> of the core functionality is common to both, simplifying maintainability.
> The core features:
> * Support for HTTP/SOCKS proxies
> * Support for passive FTP
> * Support for explicit FTPS (SSL/TLS)
> * Support of connection pooling - new connection is not created for every 
> single command but reused from the pool.
> For huge number of files it shows order of magnitude performance improvement 
> over not pooled connections.
> * Caching of directory trees. For ftp you always need to list whole directory 
> whenever you ask information about particular file.
> Again for huge number of files it shows order of magnitude performance 
> improvement over not cached connections.
> * Support of keep alive (NOOP) messages to avoid connection drops
> * Support for Unix style or regexp wildcard glob - useful for listing a 
> particular files across whole directory tree
> * Support for reestablishing broken ftp data transfers - can happen 
> surprisingly often
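
The connection-pooling point above is, in spirit, something like the following
(a generic sketch under assumed types, not the code from the attached patch):

{code}
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.function.Supplier;

/** Stand-in for whatever FTP/SFTP client object the patch actually pools. */
interface PooledConnection {
  void noop();   // keep-alive, e.g. an FTP NOOP command
  void close();
}

/**
 * Minimal illustration: idle connections are parked in a queue and reused,
 * instead of opening a new control connection for every single command.
 */
class ConnectionPool {
  private final ConcurrentLinkedQueue<PooledConnection> idle =
      new ConcurrentLinkedQueue<>();
  private final Supplier<PooledConnection> factory;

  ConnectionPool(Supplier<PooledConnection> factory) {
    this.factory = factory;
  }

  PooledConnection acquire() {
    PooledConnection c = idle.poll();
    return c != null ? c : factory.get();   // reuse when possible, else connect
  }

  void release(PooledConnection c) {
    idle.offer(c);                          // hand back for later reuse
  }
}
{code}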



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14779) Refactor decryptEncryptedKey in KeyProviderCryptoExtension

2017-08-23 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-14779:
---
  Resolution: Duplicate
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

This was incorporated into HADOOP-14705, so closing as a dup.
Let's continue future work on the follow-on. Thanks for the reviews.

> Refactor decryptEncryptedKey in KeyProviderCryptoExtension
> --
>
> Key: HADOOP-14779
> URL: https://issues.apache.org/jira/browse/HADOOP-14779
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Minor
> Attachments: HADOOP-14779.01.patch
>
>
> We could separate out the actual decrypt logic from the 
> {{decryptEncryptedKey}}. This enables reencrypt calls to possibly reuse the 
> codec.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14444) New implementation of ftp and sftp filesystems

2017-08-23 Thread Lukas Waldmann (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-1?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lukas Waldmann updated HADOOP-1:

Description: 
The current FTP and SFTP filesystem implementations have severe limitations and 
performance issues when dealing with a high number of files. My patch solves 
those issues and integrates both filesystems in such a way that most of the core 
functionality is common to both, simplifying maintainability.

The core features:
* Support for HTTP/SOCKS proxies
* Support for passive FTP
* Support for explicit FTPS (SSL/TLS)
* Support of connection pooling - new connection is not created for every 
single command but reused from the pool.
For huge number of files it shows order of magnitude performance improvement 
over not pooled connections.
* Caching of directory trees. For ftp you always need to list whole directory 
whenever you ask information about particular file.
Again for huge number of files it shows order of magnitude performance 
improvement over not cached connections.
* Support of keep alive (NOOP) messages to avoid connection drops
* Support for Unix style or regexp wildcard glob - useful for listing a 
particular files across whole directory tree
* Support for reestablishing broken ftp data transfers - can happen 
surprisingly often

  was:
The current FTP and SFTP filesystem implementations have severe limitations and 
performance issues when dealing with a high number of files. My patch solves 
those issues and integrates both filesystems in such a way that most of the core 
functionality is common to both, simplifying maintainability.

The core features:
* Support for HTTP/SOCKS proxies
* Support for passive FTP
* Support of connection pooling - new connection is not created for every 
single command but reused from the pool.
For huge number of files it shows order of magnitude performance improvement 
over not pooled connections.
* Caching of directory trees. For ftp you always need to list whole directory 
whenever you ask information about particular file.
Again for huge number of files it shows order of magnitude performance 
improvement over not cached connections.
* Support of keep alive (NOOP) messages to avoid connection drops
* Support for Unix style or regexp wildcard glob - useful for listing a 
particular files across whole directory tree
* Support for reestablishing broken ftp data transfers - can happen 
surprisingly often


> New implementation of ftp and sftp filesystems
> --
>
> Key: HADOOP-1
> URL: https://issues.apache.org/jira/browse/HADOOP-1
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Lukas Waldmann
>Assignee: Lukas Waldmann
> Attachments: HADOOP-1.2.patch, HADOOP-1.3.patch, 
> HADOOP-1.4.patch, HADOOP-1.5.patch, HADOOP-1.6.patch, 
> HADOOP-1.7.patch, HADOOP-1.8.patch, HADOOP-1.9.patch, 
> HADOOP-1.patch
>
>
> The current FTP and SFTP filesystem implementations have severe limitations 
> and performance issues when dealing with a high number of files. My patch 
> solves those issues and integrates both filesystems in such a way that most 
> of the core functionality is common to both, simplifying maintainability.
> The core features:
> * Support for HTTP/SOCKS proxies
> * Support for passive FTP
> * Support for explicit FTPS (SSL/TLS)
> * Support of connection pooling - new connection is not created for every 
> single command but reused from the pool.
> For huge number of files it shows order of magnitude performance improvement 
> over not pooled connections.
> * Caching of directory trees. For ftp you always need to list whole directory 
> whenever you ask information about particular file.
> Again for huge number of files it shows order of magnitude performance 
> improvement over not cached connections.
> * Support of keep alive (NOOP) messages to avoid connection drops
> * Support for Unix style or regexp wildcard glob - useful for listing a 
> particular files across whole directory tree
> * Support for reestablishing broken ftp data transfers - can happen 
> surprisingly often



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14444) New implementation of ftp and sftp filesystems

2017-08-23 Thread Lukas Waldmann (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-1?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lukas Waldmann updated HADOOP-1:

Status: Patch Available  (was: In Progress)

> New implementation of ftp and sftp filesystems
> --
>
> Key: HADOOP-1
> URL: https://issues.apache.org/jira/browse/HADOOP-1
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Lukas Waldmann
>Assignee: Lukas Waldmann
> Attachments: HADOOP-1.2.patch, HADOOP-1.3.patch, 
> HADOOP-1.4.patch, HADOOP-1.5.patch, HADOOP-1.6.patch, 
> HADOOP-1.7.patch, HADOOP-1.8.patch, HADOOP-1.9.patch, 
> HADOOP-1.patch
>
>
> The current FTP and SFTP filesystem implementations have severe limitations 
> and performance issues when dealing with a high number of files. My patch 
> solves those issues and integrates both filesystems in such a way that most 
> of the core functionality is common to both, simplifying maintainability.
> The core features:
> * Support for HTTP/SOCKS proxies
> * Support for passive FTP
> * Support of connection pooling - new connection is not created for every 
> single command but reused from the pool.
> For huge number of files it shows order of magnitude performance improvement 
> over not pooled connections.
> * Caching of directory trees. For ftp you always need to list whole directory 
> whenever you ask information about particular file.
> Again for huge number of files it shows order of magnitude performance 
> improvement over not cached connections.
> * Support of keep alive (NOOP) messages to avoid connection drops
> * Support for Unix style or regexp wildcard glob - useful for listing a 
> particular files across whole directory tree
> * Support for reestablishing broken ftp data transfers - can happen 
> surprisingly often



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14444) New implementation of ftp and sftp filesystems

2017-08-23 Thread Lukas Waldmann (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-1?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lukas Waldmann updated HADOOP-1:

Attachment: HADOOP-1.9.patch

What's new:
* Support for explicit FTPS (SSL/TLS)
* A few fixes in connection pooling
* Better test infrastructure

> New implementation of ftp and sftp filesystems
> --
>
> Key: HADOOP-1
> URL: https://issues.apache.org/jira/browse/HADOOP-1
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Lukas Waldmann
>Assignee: Lukas Waldmann
> Attachments: HADOOP-1.2.patch, HADOOP-1.3.patch, 
> HADOOP-1.4.patch, HADOOP-1.5.patch, HADOOP-1.6.patch, 
> HADOOP-1.7.patch, HADOOP-1.8.patch, HADOOP-1.9.patch, 
> HADOOP-1.patch
>
>
> The current FTP and SFTP filesystem implementations have severe limitations 
> and performance issues when dealing with a high number of files. My patch 
> solves those issues and integrates both filesystems in such a way that most 
> of the core functionality is common to both, simplifying maintainability.
> The core features:
> * Support for HTTP/SOCKS proxies
> * Support for passive FTP
> * Support of connection pooling - new connection is not created for every 
> single command but reused from the pool.
> For huge number of files it shows order of magnitude performance improvement 
> over not pooled connections.
> * Caching of directory trees. For ftp you always need to list whole directory 
> whenever you ask information about particular file.
> Again for huge number of files it shows order of magnitude performance 
> improvement over not cached connections.
> * Support of keep alive (NOOP) messages to avoid connection drops
> * Support for Unix style or regexp wildcard glob - useful for listing a 
> particular files across whole directory tree
> * Support for reestablishing broken ftp data transfers - can happen 
> surprisingly often



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14444) New implementation of ftp and sftp filesystems

2017-08-23 Thread Lukas Waldmann (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-1?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lukas Waldmann updated HADOOP-1:

Status: In Progress  (was: Patch Available)

> New implementation of ftp and sftp filesystems
> --
>
> Key: HADOOP-1
> URL: https://issues.apache.org/jira/browse/HADOOP-1
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Lukas Waldmann
>Assignee: Lukas Waldmann
> Attachments: HADOOP-1.2.patch, HADOOP-1.3.patch, 
> HADOOP-1.4.patch, HADOOP-1.5.patch, HADOOP-1.6.patch, 
> HADOOP-1.7.patch, HADOOP-1.8.patch, HADOOP-1.patch
>
>
> The current FTP and SFTP filesystem implementations have severe limitations 
> and performance issues when dealing with a high number of files. My patch 
> solves those issues and integrates both filesystems in such a way that most 
> of the core functionality is common to both, simplifying maintainability.
> The core features:
> * Support for HTTP/SOCKS proxies
> * Support for passive FTP
> * Support of connection pooling - new connection is not created for every 
> single command but reused from the pool.
> For huge number of files it shows order of magnitude performance improvement 
> over not pooled connections.
> * Caching of directory trees. For ftp you always need to list whole directory 
> whenever you ask information about particular file.
> Again for huge number of files it shows order of magnitude performance 
> improvement over not cached connections.
> * Support of keep alive (NOOP) messages to avoid connection drops
> * Support for Unix style or regexp wildcard glob - useful for listing a 
> particular files across whole directory tree
> * Support for reestablishing broken ftp data transfers - can happen 
> surprisingly often



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14802) Add support for using container saskeys for all accesses

2017-08-23 Thread Sivaguru Sankaridurg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sivaguru Sankaridurg updated HADOOP-14802:
--
Attachment: HADOOP-14802.001.patch

> Add support for using container saskeys for all accesses
> 
>
> Key: HADOOP-14802
> URL: https://issues.apache.org/jira/browse/HADOOP-14802
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Reporter: Sivaguru Sankaridurg
>Assignee: Sivaguru Sankaridurg
>  Labels: azure, fs, secure, wasb
> Attachments: HADOOP-14802.001.patch
>
>
> This JIRA tracks adding support for using container saskey for all storage 
> access.
> Instead of using saskeys that are specific to each blob, it is possible to 
> re-use the container saskey for all blob accesses.
> This provides a performance improvement over using blob-specific saskeys



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14802) Add support for using container saskeys for all accesses

2017-08-23 Thread Sivaguru Sankaridurg (JIRA)
Sivaguru Sankaridurg created HADOOP-14802:
-

 Summary: Add support for using container saskeys for all accesses
 Key: HADOOP-14802
 URL: https://issues.apache.org/jira/browse/HADOOP-14802
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/azure
Reporter: Sivaguru Sankaridurg
Assignee: Sivaguru Sankaridurg


This JIRA tracks adding support for using container saskey for all storage 
access.
Instead of using saskeys that are specific to each blob, it is possible to 
re-use the container saskey for all blob accesses.
This provides a performance improvement over using blob-specific saskeys.
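
Conceptually the change amounts to a per-container key cache along these lines
(purely illustrative; the class and method names are invented, not taken from
the WASB code):

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Illustrative only: one SAS key per container, reused for every blob access. */
class ContainerSasKeyCache {
  /** Stand-in for the remote call that generates a container-level SAS key. */
  interface SasKeyGenerator {
    String generateContainerSasKey(String account, String container);
  }

  private final Map<String, String> keysByContainer = new ConcurrentHashMap<>();
  private final SasKeyGenerator generator;

  ContainerSasKeyCache(SasKeyGenerator generator) {
    this.generator = generator;
  }

  /** One key-generation call per container instead of one per blob. */
  String getSasKey(String account, String container) {
    return keysByContainer.computeIfAbsent(account + "/" + container,
        key -> generator.generateContainerSasKey(account, container));
  }
}
{code}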



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14649) Update aliyun-sdk-oss version to 2.8.1

2017-08-23 Thread Genmao Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16138263#comment-16138263
 ] 

Genmao Yu commented on HADOOP-14649:


cc [~ste...@apache.org] and [~rchiang]

> Update aliyun-sdk-oss version to 2.8.1
> --
>
> Key: HADOOP-14649
> URL: https://issues.apache.org/jira/browse/HADOOP-14649
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ray Chiang
>Assignee: Genmao Yu
> Attachments: HADOOP-14649.000.patch
>
>
> Update the dependency
> com.aliyun.oss:aliyun-sdk-oss:2.4.1
> to the latest (2.8.1).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-6494) MapFile.Reader does not seek to first entry for multi-valued key

2017-08-23 Thread Nico Meyer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16138152#comment-16138152
 ] 

Nico Meyer commented on HADOOP-6494:


I rediscovered this problem the hard way a while ago and implemented the exact 
same fix proposed here. At the very least the documentation should state that 
multi-valued keys will give the wrong result.

> MapFile.Reader does not seek to first entry for multi-valued key
> 
>
> Key: HADOOP-6494
> URL: https://issues.apache.org/jira/browse/HADOOP-6494
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Reporter: Peter Spiro
>Priority: Minor
>
> When a MapFile contains a key with multiple entries and one of these entries 
> other than the first happens to be stored in the index, then the Reader's 
> seek() and get*() methods will generally not return the first entry, making 
> it impossible to retrieve all of the key's entries using next().
> One easy solution would be to modify the Writer's append() method to only 
> index an entry if it's the first entry belonging to its key, e.g.:
> public synchronized void append(WritableComparable key, Writable val)
>     throws IOException {
>   boolean equalsLastKey = (size != 0 && comparator.compare(lastKey, key) == 0);
>   checkKey(key);
>   boolean largeEnoughInterval = size % indexInterval == 0;
>   if (largeEnoughInterval && !equalsLastKey) {  // add an index entry
>     position.set(data.getLength());             // point to current eof
>     index.append(key, position);
>   }
>   data.append(key, val);                        // append key/value to data
>   if (!largeEnoughInterval || !equalsLastKey)
>     size++;
> }
> (The size variable should then be renamed to something more accurate.)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13998) Merge initial S3guard release into trunk

2017-08-23 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16138149#comment-16138149
 ] 

Steve Loughran commented on HADOOP-13998:
-

I'm doing the info command along with a bit of a rework of the CLI code; just 
the aggregate set of niggles. I don't want to hold the vote up, though, as it'll 
need work and is non-critical.

> Merge initial S3guard release into trunk
> 
>
> Key: HADOOP-13998
> URL: https://issues.apache.org/jira/browse/HADOOP-13998
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13998-001.patch, HADOOP-13998-002.patch, 
> HADOOP-13998-003.patch, HADOOP-13998-004.patch
>
>
> JIRA to link in all the things we think are needed for a preview/merge into 
> trunk



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14158) Possible for modified configuration to leak into metadatastore in S3GuardTool

2017-08-23 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16138143#comment-16138143
 ] 

Steve Loughran commented on HADOOP-14158:
-

What's probably happening in the test is that the config is being cached. The 
configuration needs to mark the FS as non-cached, or create a unique one.
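
For what it's worth, the usual ways to avoid picking up a cached instance look
roughly like this (a sketch; the bucket URI is made up):

{code}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class UncachedS3ASketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();

    // Option 1: disable the FileSystem cache for the s3a scheme, so that
    // FileSystem.get() returns a fresh instance built from this configuration.
    conf.setBoolean("fs.s3a.impl.disable.cache", true);
    FileSystem viaGet = FileSystem.get(new URI("s3a://example-bucket/"), conf);

    // Option 2: bypass the cache for just this call.
    FileSystem unique = FileSystem.newInstance(new URI("s3a://example-bucket/"), conf);

    viaGet.close();
    unique.close();
  }
}
{code}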

> Possible for modified configuration to leak into metadatastore in S3GuardTool
> -
>
> Key: HADOOP-14158
> URL: https://issues.apache.org/jira/browse/HADOOP-14158
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Sean Mackrory
>
> It doesn't appear to do it when run from the command-line, but when running 
> the S3GuardTool.run (i.e. the parent function of most of the functions used 
> in the unit tests) from a unit test, you end up with a NullMetadataStore, 
> regardless of what else was configured.
> We create an instance of S3AFileSystem with the metadata store implementation 
> overridden to NullMetadataStore so that we have distinct interfaces to S3 and 
> the metadata store. S3Guard can later be called using this filesystem, 
> causing it to pick up the filesystem's configuration, which instructs it to 
> use the NullMetadataStore implementation. This shouldn't be possible.
> It is unknown if this happens in any real-world scenario - I've been unable 
> to reproduce the problem from the command-line. But it definitely happens in 
> a test, it shouldn't, and fixing this will at least allow HADOOP-14145 to 
> have an automated test.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14703) ConsoleSink for metrics2

2017-08-23 Thread Ronald Macmaster (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16138068#comment-16138068
 ] 

Ronald Macmaster commented on HADOOP-14703:
---

Woah, I just saw your comment. 
That's freaky. I will need to look into that more.


> ConsoleSink for metrics2
> 
>
> Key: HADOOP-14703
> URL: https://issues.apache.org/jira/browse/HADOOP-14703
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, metrics
>Affects Versions: 3.0.0-beta1
>Reporter: Ronald Macmaster
>Assignee: Ronald Macmaster
>  Labels: newbie
> Attachments: 
> 0001-HADOOP-14703.-ConsoleSink-for-simple-metrics-printin.patch, 
> HADOOP-14703.001.patch, HADOOP-14703.002.patch, HADOOP-14703.003.patch, 
> HADOOP-14703.004.patch, HADOOP-14703.006.patch
>
>   Original Estimate: 6h
>  Remaining Estimate: 6h
>
> The ConsoleSink will provide a simple solution to dump metrics to the console 
> through std.out. 
> Quick access to metrics through the console will simplify the development, 
> testing, and debugging process.
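
A minimal sketch of what such a sink could look like, assuming the standard
metrics2 MetricsSink plugin interface (this is not the attached patch):

{code}
import org.apache.commons.configuration2.SubsetConfiguration;
import org.apache.hadoop.metrics2.AbstractMetric;
import org.apache.hadoop.metrics2.MetricsRecord;
import org.apache.hadoop.metrics2.MetricsSink;
import org.apache.hadoop.metrics2.MetricsTag;

/** Illustrative console sink: prints every metrics record to stdout. */
public class ConsoleSinkSketch implements MetricsSink {

  @Override
  public void init(SubsetConfiguration conf) {
    // Nothing to configure for a plain stdout sink.
  }

  @Override
  public void putMetrics(MetricsRecord record) {
    StringBuilder sb = new StringBuilder();
    sb.append(record.timestamp()).append(' ').append(record.name());
    for (MetricsTag tag : record.tags()) {
      sb.append(' ').append(tag.name()).append('=').append(tag.value());
    }
    for (AbstractMetric metric : record.metrics()) {
      sb.append(' ').append(metric.name()).append('=').append(metric.value());
    }
    System.out.println(sb);
  }

  @Override
  public void flush() {
    System.out.flush();
  }
}
{code}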



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14703) ConsoleSink for metrics2

2017-08-23 Thread Ronald Macmaster (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ronald Macmaster updated HADOOP-14703:
--
Attachment: HADOOP-14703.006.patch

I've added a Thread.sleep(3000) call after the metrics publishMetricsNow call. 
hadoop-metrics2-test.properties must be sourced as a resource from the 
classpath, and the GenericTestUtils.getTestDir() directory (target/test/data) 
is not normally in the classpath. Now, the test tries to add target/test/data 
to the classpath. The test works locally, but it fails on the Jenkins server 
(despite correctly sourcing from hadoop-metrics2-test.properties). If the test 
still fails on the Jenkins server with a 3 second delay, it may be best to 
revert it to the original properties file creation method. 

> ConsoleSink for metrics2
> 
>
> Key: HADOOP-14703
> URL: https://issues.apache.org/jira/browse/HADOOP-14703
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, metrics
>Affects Versions: 3.0.0-beta1
>Reporter: Ronald Macmaster
>Assignee: Ronald Macmaster
>  Labels: newbie
> Attachments: 
> 0001-HADOOP-14703.-ConsoleSink-for-simple-metrics-printin.patch, 
> HADOOP-14703.001.patch, HADOOP-14703.002.patch, HADOOP-14703.003.patch, 
> HADOOP-14703.004.patch, HADOOP-14703.006.patch
>
>   Original Estimate: 6h
>  Remaining Estimate: 6h
>
> The ConsoleSink will provide a simple solution to dump metrics to the console 
> through std.out. 
> Quick access to metrics through the console will simplify the development, 
> testing, and debugging process.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14729) Upgrade JUnit 3 TestCase to JUnit 4

2017-08-23 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16138008#comment-16138008
 ] 

Akira Ajisaka commented on HADOOP-14729:


LGTM, +1. I'll commit this tomorrow if there are no objections.

> Upgrade JUnit 3 TestCase to JUnit 4
> ---
>
> Key: HADOOP-14729
> URL: https://issues.apache.org/jira/browse/HADOOP-14729
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Akira Ajisaka
>Assignee: Ajay Kumar
>  Labels: newbie
> Attachments: HADOOP-14729.001.patch, HADOOP-14729.002.patch, 
> HADOOP-14729.003.patch, HADOOP-14729.004.patch, HADOOP-14729.005.patch, 
> HADOOP-14729.006.patch, HADOOP-14729.007.patch, HADOOP-14729.008.patch, 
> HADOOP-14729.009.patch, HADOOP-14729.010.patch
>
>
> There are still test classes that extend from junit.framework.TestCase in 
> hadoop-common. Upgrade them to JUnit4.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14703) ConsoleSink for metrics2

2017-08-23 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16137999#comment-16137999
 ] 

John Zhuge commented on HADOOP-14703:
-

Patch 004 has the same issue as 003.

If you change TEST_PREFIX to "logsink" in 003, the following parallel tests 
will pass:
{noformat}
( cd hadoop-common-project/hadoop-common && mvn test -Dtest=Test*Sink 
-P\!shelltest -Pparallel-tests )
{noformat}

"test1" is also fine, just not "test". Don't understand why.

> ConsoleSink for metrics2
> 
>
> Key: HADOOP-14703
> URL: https://issues.apache.org/jira/browse/HADOOP-14703
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, metrics
>Affects Versions: 3.0.0-beta1
>Reporter: Ronald Macmaster
>Assignee: Ronald Macmaster
>  Labels: newbie
> Attachments: 
> 0001-HADOOP-14703.-ConsoleSink-for-simple-metrics-printin.patch, 
> HADOOP-14703.001.patch, HADOOP-14703.002.patch, HADOOP-14703.003.patch, 
> HADOOP-14703.004.patch
>
>   Original Estimate: 6h
>  Remaining Estimate: 6h
>
> The ConsoleSink will provide a simple solution to dump metrics to the console 
> through std.out. 
> Quick access to metrics through the console will simplify the development, 
> testing, and debugging process.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org