[jira] [Commented] (HADOOP-15950) Failover for LdapGroupsMapping

2018-11-29 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16704180#comment-16704180
 ] 

Hadoop QA commented on HADOOP-15950:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 36s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
10s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 16m 10s{color} 
| {color:red} root generated 3 new + 1487 unchanged - 1 fixed = 1490 total (was 
1488) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} hadoop-common-project/hadoop-common: The patch 
generated 0 new + 32 unchanged - 7 fixed = 32 total (was 39) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  8s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
56s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
56s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}103m 59s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-15950 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12950104/HADOOP-15950.006.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 82792f698652 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / bad1203 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| javac | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15584/artifact/out/diff-compile-javac-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15584/testReport/ |
| Max. process+thread count | 1387 (vs. ulimit of 1) |
| modules | C: 

[jira] [Commented] (HADOOP-15960) Update guava to 27.0-jre in hadoop-common

2018-11-29 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16704178#comment-16704178
 ] 

Wei-Chiu Chuang commented on HADOOP-15960:
--

Not quite clear where the "afu" classpath comes from. I suppose it has to do 
with shading, which prepends the prefix, but I couldn't find any mention of 
"afu" in the Hadoop codebase.

As far as I can tell, HBase compiles against this Hadoop patch (although the 
new Guava version adds a few dependencies, and I had to explicitly add 
additional license text in HBase).

I am running the HBase small tests, and so far so good.

The trickier question is the transitive guava version used in the YARN ATS 
module. That is:
{code:title=hadoop-project/pom.xml}
<hbase.version>${hbase.two.version}</hbase.version>
<hbase-compatible-hadoop.version>3.0.0</hbase-compatible-hadoop.version>
<hbase-compatible-guava.version>11.0.2</hbase-compatible-guava.version>
...
hadoop-yarn-server-timelineservice-hbase-server-2
...
{code}
I suppose this guava dependency should get updated too.
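
Purely as a sketch of what that update might look like (assuming the 
{{hbase-compatible-guava.version}} property above is the right knob; this is 
not from the patch):
{code:title=hadoop-project/pom.xml (hypothetical)}
<!-- Hypothetical sketch: bump the ATS HBase module's transitive guava too -->
<hbase-compatible-guava.version>27.0-jre</hbase-compatible-guava.version>
{code}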

> Update guava to 27.0-jre in hadoop-common
> -
>
> Key: HADOOP-15960
> URL: https://issues.apache.org/jira/browse/HADOOP-15960
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, security
>Affects Versions: 3.1.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-15960.000.WIP.patch
>
>
> com.google.guava:guava should be upgraded to 27.0-jre due to a newly found 
> CVE, [CVE-2018-10237|https://nvd.nist.gov/vuln/detail/CVE-2018-10237].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15922) DelegationTokenAuthenticationFilter get wrong doAsUser since it does not decode URL

2018-11-29 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16704151#comment-16704151
 ] 

Hadoop QA commented on HADOOP-15922:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 57s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
23s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
19s{color} | {color:red} hadoop-kms in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  1m 
20s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  1m 20s{color} 
| {color:red} root in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 48s{color} | {color:orange} hadoop-common-project: The patch generated 2 new 
+ 99 unchanged - 0 fixed = 101 total (was 99) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
21s{color} | {color:red} hadoop-kms in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  3m 
33s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
23s{color} | {color:red} hadoop-kms in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 45s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 24s{color} 
| {color:red} hadoop-kms in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 80m  5s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.security.token.delegation.TestZKDelegationTokenSecretManager |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-15922 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12950029/HADOOP-15922.005.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 1c4fa6390c23 4.4.0-134-generic #160~14.04.1-Ubuntu SMP Fri Aug 
17 11:07:07 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / bad1203 |
| maven | version: Apache Maven 3.3.9 |
| Default 

[jira] [Commented] (HADOOP-15566) Remove HTrace support

2018-11-29 Thread Colin P. McCabe (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16704118#comment-16704118
 ] 

Colin P. McCabe commented on HADOOP-15566:
--

Hi folks,

I just saw this JIRA while searching for something else.  I was one of the guys 
who worked on HTrace, both on the Hadoop integration side and on the HTrace 
project itself.  It is definitely sad that it didn't make it out of the 
incubator.  There is clearly a need for this kind of work in Hadoop and in 
other projects.

I don't have a strong opinion about which other tracing API should be used in 
Hadoop.  I would caution everyone that Hadoop's compatibility shackles are 
heavy -- very heavy indeed.  Just to give an example, a typical Hadoop 
installation might have HDFS, HBase, and Phoenix installed.  These projects all 
have separate developers, PMCs, and release cycles, but expect to be able to 
share the same CLASSPATH happily.  Projects often push back very hard on trying 
to update library dependencies, especially in "minor" releases.  To add to 
that, people often stay on older stable versions of Hadoop for years.

In theory, Hadoop vendors offer a snaphot of the full Hadoop stack, carefully 
configured so that things work together.  In practice, libraries are not always 
harmonized as well as we would like.  Some users want to mix and match versions 
of things, or not even use a vendor distribution at all.  This makes setting up 
end-to-end tracing pretty difficult.

There were some efforts to add better CLASSPATH isolation to Hadoop.  I haven't 
kept up with those, so I don't know how much this situation has improved.

I do think that the idea of keeping HTrace around as a shim API might make 
sense for Hadoop.  This would mean that adding support for a new version of the 
OpenTracing or Zipkin library would only require updating that shim code in 
hadoop-common, rather than trying to coordinate changes across a dozen Hadoop 
projects.

Also, HTrace already has code to export spans to Zipkin, if that helps.  I 
think it would be relatively straightforward to write the same thing for 
opentracing as well.

> Remove HTrace support
> -
>
> Key: HADOOP-15566
> URL: https://issues.apache.org/jira/browse/HADOOP-15566
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: metrics
>Affects Versions: 3.1.0
>Reporter: Todd Lipcon
>Priority: Major
>  Labels: security
> Attachments: Screen Shot 2018-06-29 at 11.59.16 AM.png, 
> ss-trace-s3a.png
>
>
> The HTrace incubator project has voted to retire itself and won't be making 
> further releases. The Hadoop project currently has various hooks with HTrace. 
> It seems in some cases (eg HDFS-13702) these hooks have had measurable 
> performance overhead. Given these two factors, I think we should consider 
> removing the HTrace integration. If there is someone willing to do the work, 
> replacing it with OpenTracing might be a better choice since there is an 
> active community.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15950) Failover for LdapGroupsMapping

2018-11-29 Thread Íñigo Goiri (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16704112#comment-16704112
 ] 

Íñigo Goiri commented on HADOOP-15950:
--

Thanks [~lukmajercak] for [^HADOOP-15950.006.patch].
A couple of minor comments on the doc:
* Use "To configure the" instead of "You can configure the".
* We should show the configuration keys/values as XML with {{<property>}} 
elements and so on instead of just key=value, along the lines of the sketch 
below.
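
For illustration, something like this ({{hadoop.security.group.mapping.ldap.url}} 
is the existing single-URL key; the space-separated multi-URL value shown is an 
assumption for the sketch, not taken from the patch):
{code:xml}
<!-- Sketch: list several LDAP servers so the mapping can fail over -->
<property>
  <name>hadoop.security.group.mapping.ldap.url</name>
  <value>ldap://server1 ldap://server2 ldap://server3</value>
</property>
{code}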

> Failover for LdapGroupsMapping
> --
>
> Key: HADOOP-15950
> URL: https://issues.apache.org/jira/browse/HADOOP-15950
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common, security
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
> Attachments: HADOOP-15950.001.patch, HADOOP-15950.002.patch, 
> HADOOP-15950.003.patch, HADOOP-15950.004.patch, HADOOP-15950.005.patch, 
> HADOOP-15950.006.patch
>
>
> Currently, LdapGroupsMapping supports only a single LDAP server URL; this can 
> obviously cause issues if the LDAP instance goes down. This JIRA attempts to 
> improve this by allowing users to list multiple LDAP server URLs and 
> performing a failover if we detect any issues.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15950) Failover for LdapGroupsMapping

2018-11-29 Thread Lukas Majercak (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lukas Majercak updated HADOOP-15950:

Attachment: HADOOP-15950.006.patch

> Failover for LdapGroupsMapping
> --
>
> Key: HADOOP-15950
> URL: https://issues.apache.org/jira/browse/HADOOP-15950
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common, security
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
> Attachments: HADOOP-15950.001.patch, HADOOP-15950.002.patch, 
> HADOOP-15950.003.patch, HADOOP-15950.004.patch, HADOOP-15950.005.patch, 
> HADOOP-15950.006.patch
>
>
> Currently, LdapGroupsMapping supports only a single LDAP server URL; this can 
> obviously cause issues if the LDAP instance goes down. This JIRA attempts to 
> improve this by allowing users to list multiple LDAP server URLs and 
> performing a failover if we detect any issues.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15950) Failover for LdapGroupsMapping

2018-11-29 Thread Lukas Majercak (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16704104#comment-16704104
 ] 

Lukas Majercak commented on HADOOP-15950:
-

Added patch 006 with changes to GroupsMapping.md and core-default.xml. Could 
you review this, [~jojochuang]? Thanks!

> Failover for LdapGroupsMapping
> --
>
> Key: HADOOP-15950
> URL: https://issues.apache.org/jira/browse/HADOOP-15950
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common, security
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
> Attachments: HADOOP-15950.001.patch, HADOOP-15950.002.patch, 
> HADOOP-15950.003.patch, HADOOP-15950.004.patch, HADOOP-15950.005.patch, 
> HADOOP-15950.006.patch
>
>
> Currently, LdapGroupsMapping supports only a single LDAP server URL; this can 
> obviously cause issues if the LDAP instance goes down. This JIRA attempts to 
> improve this by allowing users to list multiple LDAP server URLs and 
> performing a failover if we detect any issues.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15922) DelegationTokenAuthenticationFilter get wrong doAsUser since it does not decode URL

2018-11-29 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated HADOOP-15922:
---
Status: Patch Available  (was: Reopened)

Reopened to test patch 005.

> DelegationTokenAuthenticationFilter get wrong doAsUser since it does not 
> decode URL
> ---
>
> Key: HADOOP-15922
> URL: https://issues.apache.org/jira/browse/HADOOP-15922
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, kms
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
> Fix For: 3.3.0, 3.1.2, 3.2.1
>
> Attachments: HADOOP-15922.001.patch, HADOOP-15922.002.patch, 
> HADOOP-15922.003.patch, HADOOP-15922.004.patch, HADOOP-15922.005.patch
>
>
> DelegationTokenAuthenticationFilter gets the wrong doAsUser when the proxy 
> user from the client is a complete Kerberos name (e.g., 
> user/hostn...@realm.com, which is actually acceptable), because 
> DelegationTokenAuthenticationFilter does not decode the DOAS parameter in the 
> URL, which is encoded by {{URLEncoder}} at the client.
> Taking KMS as an example:
> a. KMSClientProvider creates a connection to the KMS server using 
> DelegationTokenAuthenticatedURL#openConnection.
> b. If KMSClientProvider is a doAsUser, KMSClientProvider will put {{doas}} 
> with the URL-encoded user as one parameter of the HTTP request.
> {code:java}
> // proxyuser
> if (doAs != null) {
>   extraParams.put(DO_AS, URLEncoder.encode(doAs, "UTF-8"));
> }
> {code}
> c. When the KMS server receives the request, it does not decode the proxy 
> user.
> As a result, the KMS server will get the wrong proxy user if the proxy user 
> is a complete Kerberos name or includes some special character, and 
> authentication and authorization exceptions will follow.
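
For context, a minimal sketch of the server-side counterpart the description 
implies: decoding the parameter symmetrically with the client-side 
{{URLEncoder}} call quoted above. This is hypothetical illustration, not the 
actual patch:
{code:java}
import java.io.UnsupportedEncodingException;
import java.net.URLDecoder;

public final class DoAsDecoder {
  /**
   * Hypothetical sketch: decode the doAs value exactly once on the server,
   * mirroring the client's URLEncoder.encode(doAs, "UTF-8").
   */
  public static String decode(String rawDoAs)
      throws UnsupportedEncodingException {
    return rawDoAs == null ? null : URLDecoder.decode(rawDoAs, "UTF-8");
  }
}
{code}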



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Reopened] (HADOOP-15922) DelegationTokenAuthenticationFilter get wrong doAsUser since it does not decode URL

2018-11-29 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang reopened HADOOP-15922:


> DelegationTokenAuthenticationFilter get wrong doAsUser since it does not 
> decode URL
> ---
>
> Key: HADOOP-15922
> URL: https://issues.apache.org/jira/browse/HADOOP-15922
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, kms
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
> Fix For: 3.3.0, 3.1.2, 3.2.1
>
> Attachments: HADOOP-15922.001.patch, HADOOP-15922.002.patch, 
> HADOOP-15922.003.patch, HADOOP-15922.004.patch, HADOOP-15922.005.patch
>
>
> DelegationTokenAuthenticationFilter gets the wrong doAsUser when the proxy 
> user from the client is a complete Kerberos name (e.g., 
> user/hostn...@realm.com, which is actually acceptable), because 
> DelegationTokenAuthenticationFilter does not decode the DOAS parameter in the 
> URL, which is encoded by {{URLEncoder}} at the client.
> Taking KMS as an example:
> a. KMSClientProvider creates a connection to the KMS server using 
> DelegationTokenAuthenticatedURL#openConnection.
> b. If KMSClientProvider is a doAsUser, KMSClientProvider will put {{doas}} 
> with the URL-encoded user as one parameter of the HTTP request.
> {code:java}
> // proxyuser
> if (doAs != null) {
>   extraParams.put(DO_AS, URLEncoder.encode(doAs, "UTF-8"));
> }
> {code}
> c. When the KMS server receives the request, it does not decode the proxy 
> user.
> As a result, the KMS server will get the wrong proxy user if the proxy user 
> is a complete Kerberos name or includes some special character, and 
> authentication and authorization exceptions will follow.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15922) DelegationTokenAuthenticationFilter get wrong doAsUser since it does not decode URL

2018-11-29 Thread Daryn Sharp (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16704058#comment-16704058
 ] 

Daryn Sharp commented on HADOOP-15922:
--

+1.  I don't see any compatibility concerns.

> DelegationTokenAuthenticationFilter get wrong doAsUser since it does not 
> decode URL
> ---
>
> Key: HADOOP-15922
> URL: https://issues.apache.org/jira/browse/HADOOP-15922
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, kms
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
> Fix For: 3.3.0, 3.1.2, 3.2.1
>
> Attachments: HADOOP-15922.001.patch, HADOOP-15922.002.patch, 
> HADOOP-15922.003.patch, HADOOP-15922.004.patch, HADOOP-15922.005.patch
>
>
> DelegationTokenAuthenticationFilter gets the wrong doAsUser when the proxy 
> user from the client is a complete Kerberos name (e.g., 
> user/hostn...@realm.com, which is actually acceptable), because 
> DelegationTokenAuthenticationFilter does not decode the DOAS parameter in the 
> URL, which is encoded by {{URLEncoder}} at the client.
> Taking KMS as an example:
> a. KMSClientProvider creates a connection to the KMS server using 
> DelegationTokenAuthenticatedURL#openConnection.
> b. If KMSClientProvider is a doAsUser, KMSClientProvider will put {{doas}} 
> with the URL-encoded user as one parameter of the HTTP request.
> {code:java}
> // proxyuser
> if (doAs != null) {
>   extraParams.put(DO_AS, URLEncoder.encode(doAs, "UTF-8"));
> }
> {code}
> c. When the KMS server receives the request, it does not decode the proxy 
> user.
> As a result, the KMS server will get the wrong proxy user if the proxy user 
> is a complete Kerberos name or includes some special character, and 
> authentication and authorization exceptions will follow.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15959) revert HADOOP-12751

2018-11-29 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated HADOOP-15959:

Fix Version/s: (was: 3.2.1)
   3.2.0

> revert HADOOP-12751
> ---
>
> Key: HADOOP-15959
> URL: https://issues.apache.org/jira/browse/HADOOP-15959
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 3.2.0, 3.1.1, 2.9.2, 3.0.3, 2.7.7, 2.8.5
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 3.2.0, 3.0.4, 3.1.2
>
> Attachments: HADOOP-15959-001.patch, HADOOP-15959-branch-2-002.patch
>
>
> HADOOP-12751 doesn't quite work right. Revert.
> (this patch is so jenkins can do the test runs)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15959) revert HADOOP-12751

2018-11-29 Thread Bolke de Bruin (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16704035#comment-16704035
 ] 

Bolke de Bruin commented on HADOOP-15959:
-

Care to explain what doesn't work right? It's not clear from the description. 
We depend on this in our environment.

> revert HADOOP-12751
> ---
>
> Key: HADOOP-15959
> URL: https://issues.apache.org/jira/browse/HADOOP-15959
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 3.2.0, 3.1.1, 2.9.2, 3.0.3, 2.7.7, 2.8.5
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 3.0.4, 3.1.2, 3.2.1
>
> Attachments: HADOOP-15959-001.patch, HADOOP-15959-branch-2-002.patch
>
>
> HADOOP-12751 doesn't quite work right. Revert.
> (this patch is so jenkins can do the test runs)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15957) WASB: Add asterisk wildcard support for PageBlobDirSet

2018-11-29 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16703819#comment-16703819
 ] 

Hadoop QA commented on HADOOP-15957:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 50s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 19s{color} | {color:orange} hadoop-tools/hadoop-azure: The patch generated 1 
new + 2 unchanged - 0 fixed = 3 total (was 2) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 53s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
8s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 57m 10s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-15957 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12950067/HADOOP-15957-002.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux ab93132ece24 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed Oct 
31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / ae5fbdd |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15582/artifact/out/diff-checkstyle-hadoop-tools_hadoop-azure.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15582/testReport/ |
| Max. process+thread count | 340 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15582/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |

[jira] [Updated] (HADOOP-15957) WASB: Add asterisk wildcard support for PageBlobDirSet

2018-11-29 Thread Da Zhou (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Da Zhou updated HADOOP-15957:
-
Attachment: HADOOP-15957-002.patch

> WASB: Add asterisk wildcard support for PageBlobDirSet
> --
>
> Key: HADOOP-15957
> URL: https://issues.apache.org/jira/browse/HADOOP-15957
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Da Zhou
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-15957-001.patch, HADOOP-15957-002.patch
>
>
> In WASB, the property "*fs.azure.page.blob.dir*" only supports literal 
> directory names.
> We need to add support for the wildcard '*' to represent any directory name.
> For example, the following patterns should be supported:
> {code:java}
> /dir1/dir2 
> /dir1/*/dir3
> /dir1/*/*/dir4
> /dir1/*/*/file
> {code}
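
A plausible way to implement such matching (an illustrative sketch under my own 
assumptions, not the patch's actual code) is to translate the configured 
pattern into a regex where '*' matches exactly one path component:
{code:java}
import java.util.regex.Pattern;

public final class PageBlobDirMatcher {
  /**
   * Illustrative sketch: compile a pattern like "/dir1/*/dir3" into a regex
   * in which '*' matches a single path component (anything without a '/').
   */
  public static Pattern compile(String configuredDir) {
    StringBuilder regex = new StringBuilder("^");
    for (String part : configuredDir.split("/")) {
      if (part.isEmpty()) {
        continue; // the leading slash yields an empty first token
      }
      regex.append('/');
      regex.append("*".equals(part) ? "[^/]+" : Pattern.quote(part));
    }
    return Pattern.compile(regex.append("$").toString());
  }

  public static boolean matches(String configuredDir, String path) {
    return compile(configuredDir).matcher(path).matches();
  }
}
{code}
With this sketch, {{matches("/dir1/*/dir3", "/dir1/foo/dir3")}} is true, while 
{{matches("/dir1/*/dir3", "/dir1/foo/bar/dir3")}} is false.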



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15959) revert HADOOP-12751

2018-11-29 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16703765#comment-16703765
 ] 

Hadoop QA commented on HADOOP-15959:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 15m 
32s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
38s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
 8s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
57s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
50s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
29s{color} | {color:green} branch-2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
15s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 39s{color} | {color:orange} hadoop-common-project: The patch generated 1 new 
+ 157 unchanged - 4 fixed = 158 total (was 161) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
56s{color} | {color:green} hadoop-auth in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 72m 11s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
44s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}141m 15s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:a716388 |
| JIRA Issue | HADOOP-15959 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12950048/HADOOP-15959-branch-2-002.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux b23bc53084f1 4.4.0-134-generic #160~14.04.1-Ubuntu SMP Fri Aug 
17 11:07:07 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-2 / 408f60e |
| maven | version: Apache Maven 3.3.9 
(bb52d8502b132ec0a5a3f4c09453c07478323dc5; 2015-11-10T16:41:47+00:00) |
| Default Java | 1.7.0_181 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15580/artifact/out/diff-checkstyle-hadoop-common-project.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15580/artifact/out/whitespace-eol.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15580/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 

[jira] [Commented] (HADOOP-15625) S3A input stream to use etags to detect changed source files

2018-11-29 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16703739#comment-16703739
 ] 

Hadoop QA commented on HADOOP-15625:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  5s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 18s{color} | {color:orange} hadoop-tools/hadoop-aws: The patch generated 4 
new + 6 unchanged - 0 fixed = 10 total (was 6) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 43s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
23s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 65m 38s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-15625 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12934259/HADOOP-15625-002.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux b895e9cde797 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed Oct 
31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / f534736 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15581/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15581/testReport/ |
| Max. process+thread count | 341 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15581/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This 

[jira] [Commented] (HADOOP-15957) WASB: Add asterisk wildcard support for PageBlobDirSet

2018-11-29 Thread Da Zhou (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16703736#comment-16703736
 ] 

Da Zhou commented on HADOOP-15957:
--

Submitting the 002 patch.
Tests are updated based on the comment, and the checkstyle issue is also fixed.

> WASB: Add asterisk wildcard support for PageBlobDirSet
> --
>
> Key: HADOOP-15957
> URL: https://issues.apache.org/jira/browse/HADOOP-15957
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Da Zhou
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-15957-001.patch, HADOOP-15957-002.patch
>
>
> In WASB, the property "*fs.azure.page.blob.dir*" only supports literal 
> directory names.
> We need to add support for the wildcard '*' to represent any directory name.
> For example, the following patterns should be supported:
> {code:java}
> /dir1/dir2 
> /dir1/*/dir3
> /dir1/*/*/dir4
> /dir1/*/*/file
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15819) S3A integration test failures: FileSystem is closed! - without parallel test run

2018-11-29 Thread Adam Antal (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16703729#comment-16703729
 ] 

Adam Antal commented on HADOOP-15819:
-

Strangely, I reran the tests on trunk with no modifications and got only one 
error, in {{ITestS3ACommitterFactory}} (same as above).

{noformat}
[ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.017 s 
<<< FAILURE! - in org.apache.hadoop.fs.s3a.commit.ITestS3ACommitterFactory
[ERROR] 
testEverything(org.apache.hadoop.fs.s3a.commit.ITestS3ACommitterFactory)  Time 
elapsed: 0.016 s  <<< ERROR!
java.io.IOException: s3a://adamantal-hdfs-s3-dev: FileSystem is closed!
at 
org.apache.hadoop.fs.s3a.commit.ITestS3ACommitterFactory.setup(ITestS3ACommitterFactory.java:94)
{noformat}

Probably this is due to the fixes in HADOOP-15947, HADOOP-15798 or 
HADOOP-15370. There is still one more error, but it can be resolved either by 
disabling caching or by investigating why the closed S3AFS is returned from 
{{FileSystem.get()}}.
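
For reference, the cache-disabling workaround is an existing configuration 
switch ({{fs.s3a.impl.disable.cache}}, following Hadoop's standard 
{{fs.<scheme>.impl.disable.cache}} pattern), so every {{FileSystem.get()}} 
returns a fresh instance instead of a possibly-closed cached one:
{code:xml}
<property>
  <name>fs.s3a.impl.disable.cache</name>
  <value>true</value>
</property>
{code}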

> S3A integration test failures: FileSystem is closed! - without parallel test 
> run
> 
>
> Key: HADOOP-15819
> URL: https://issues.apache.org/jira/browse/HADOOP-15819
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.1.1
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Critical
> Attachments: HADOOP-15819.000.patch, S3ACloseEnforcedFileSystem.java, 
> S3ACloseEnforcedFileSystem.java, closed_fs_closers_example_5klines.log.zip
>
>
> Running the integration tests for hadoop-aws {{mvn -Dscale verify}} against 
> Amazon AWS S3 (eu-west-1, us-west-1, with no s3guard) we see a lot of these 
> failures:
> {noformat}
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.408 
> s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.commit.staging.integration.ITDirectoryCommitMRJob
> [ERROR] 
> testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITDirectoryCommitMRJob)
>   Time elapsed: 0.027 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 4.345 
> s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob
> [ERROR] 
> testStagingDirectory(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob)
>   Time elapsed: 0.021 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] 
> testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob)
>   Time elapsed: 0.022 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.489 
> s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJobBadDest
> [ERROR] 
> testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJobBadDest)
>   Time elapsed: 0.023 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.695 
> s <<< FAILURE! - in org.apache.hadoop.fs.s3a.commit.magic.ITMagicCommitMRJob
> [ERROR] testMRJob(org.apache.hadoop.fs.s3a.commit.magic.ITMagicCommitMRJob)  
> Time elapsed: 0.039 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.015 
> s <<< FAILURE! - in org.apache.hadoop.fs.s3a.commit.ITestS3ACommitterFactory
> [ERROR] 
> testEverything(org.apache.hadoop.fs.s3a.commit.ITestS3ACommitterFactory)  
> Time elapsed: 0.014 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> {noformat}
> The big issue is that the tests are running in a serial manner - no test is 
> running on top of another - so we should not see the tests failing like this. 
> The issue could be in how we handle org.apache.hadoop.fs.FileSystem#CACHE - 
> the tests should use the same S3AFileSystem, so if test A uses a FileSystem 
> and closes it in teardown, then test B will get the same FileSystem object 
> from the cache and try to use it, but it is closed.
> We see this a lot in our downstream testing too. It's not possible to tell 
> whether a failed regression test result is an implementation issue in the 
> runtime code or a test implementation problem.
> I've checked when and what closes the S3AFileSystem with a slightly modified 
> version of S3AFileSystem which logs the closers of the fs if an error should 
> occur. I'll attach this modified java file for reference. See the next 

[jira] [Commented] (HADOOP-15625) S3A input stream to use etags to detect changed source files

2018-11-29 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16703695#comment-16703695
 ] 

Steve Loughran commented on HADOOP-15625:
-

yeah, I know, responsibility is mine. I'll do this. I'm also doing HADOOP-15229 
which doesn't conflict code-wise, but complicates merging a bit.

> S3A input stream to use etags to detect changed source files
> 
>
> Key: HADOOP-15625
> URL: https://issues.apache.org/jira/browse/HADOOP-15625
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Major
> Attachments: HADOOP-15625-001.patch, HADOOP-15625-002.patch
>
>
> The S3A input stream doesn't handle changing source files any better than the 
> other cloud store connectors. Specifically: it doesn't notice the file has 
> changed, it caches the length from startup, and whenever a seek triggers a 
> new GET, you may get any of: old data, new data, or perhaps even go from new 
> data back to old data due to eventual consistency.
> We can't do anything to stop this, but we could detect changes by
> # caching the etag of the first HEAD/GET (we don't get that HEAD on open with 
> S3Guard, BTW)
> # on future GET requests, verifying the etag of the response
> # raising an IOE if the remote file changed during the read.
> It's a more dramatic failure, but it stops changes silently corrupting things.
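
A minimal sketch of that check, as a standalone illustration (the real patch 
would wire something like this into S3AInputStream; this code is not from it):
{code:java}
import java.io.IOException;

/**
 * Illustrative sketch: remember the etag from the first response and fail
 * loudly if any later GET returns a different one.
 */
public final class EtagChangeDetector {
  private String expectedEtag;

  /** Call with the etag of each HEAD/GET response. */
  public void check(String responseEtag) throws IOException {
    if (expectedEtag == null) {
      expectedEtag = responseEtag;   // first response: cache the etag
    } else if (!expectedEtag.equals(responseEtag)) {
      throw new IOException("Remote file changed during read: etag was "
          + expectedEtag + ", now " + responseEtag);
    }
  }
}
{code}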



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15625) S3A input stream to use etags to detect changed source files

2018-11-29 Thread Brahma Reddy Battula (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16703658#comment-16703658
 ] 

Brahma Reddy Battula commented on HADOOP-15625:
---

[~ste...@apache.org] My AWS account got blocked, as there was an outstanding 
amount.

I will create another account and verify. Can you help with verifying against 
S3Guard?

> S3A input stream to use etags to detect changed source files
> 
>
> Key: HADOOP-15625
> URL: https://issues.apache.org/jira/browse/HADOOP-15625
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Major
> Attachments: HADOOP-15625-001.patch, HADOOP-15625-002.patch
>
>
> The S3A input stream doesn't handle changing source files any better than the 
> other cloud store connectors. Specifically: it doesn't notice the file has 
> changed, it caches the length from startup, and whenever a seek triggers a 
> new GET, you may get any of: old data, new data, or perhaps even go from new 
> data back to old data due to eventual consistency.
> We can't do anything to stop this, but we could detect changes by
> # caching the etag of the first HEAD/GET (we don't get that HEAD on open with 
> S3Guard, BTW)
> # on future GET requests, verifying the etag of the response
> # raising an IOE if the remote file changed during the read.
> It's a more dramatic failure, but it stops changes silently corrupting things.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15954) ABFS: Enable owner and group conversion for MSI and login user using OAuth

2018-11-29 Thread junhua gu (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16703653#comment-16703653
 ] 

junhua gu commented on HADOOP-15954:


HADOOP-15954-003

Adds support for setting owner, group and ACLs with short names.

> ABFS: Enable owner and group conversion for MSI and login user using OAuth
> --
>
> Key: HADOOP-15954
> URL: https://issues.apache.org/jira/browse/HADOOP-15954
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: junhua gu
>Assignee: junhua gu
>Priority: Major
> Attachments: HADOOP-15954-001.patch, HADOOP-15954-002.patch, 
> HADOOP-15954-003.patch
>
>
> Add support for overwriting owner and group in set/get operations to be the 
> service principal id when OAuth is used. Add support for the UPN short name 
> format.
>  
> Add a standard transformer for SharedKey / Service.
> Add an interface that provides an extensible model for customizing the 
> acquisition of the Identity Transformer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15954) ABFS: Enable owner and group conversion for MSI and login user using OAuth

2018-11-29 Thread junhua gu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

junhua gu updated HADOOP-15954:
---
Attachment: HADOOP-15954-003.patch

> ABFS: Enable owner and group conversion for MSI and login user using OAuth
> --
>
> Key: HADOOP-15954
> URL: https://issues.apache.org/jira/browse/HADOOP-15954
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: junhua gu
>Assignee: junhua gu
>Priority: Major
> Attachments: HADOOP-15954-001.patch, HADOOP-15954-002.patch, 
> HADOOP-15954-003.patch
>
>
> Add support for overwriting owner and group in set/get operations to be the 
> service principal id when OAuth is used. Add support for the UPN short name 
> format.
>  
> Add a standard transformer for SharedKey / Service.
> Add an interface that provides an extensible model for customizing the 
> acquisition of the Identity Transformer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15922) DelegationTokenAuthenticationFilter get wrong doAsUser since it does not decode URL

2018-11-29 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16703635#comment-16703635
 ] 

Eric Yang edited comment on HADOOP-15922 at 11/29/18 7:02 PM:
--

[~daryn] Good catch on the double encode.  Thanks

[~hexiaoqiao] Patch 005 is the better fix if we don't need to worry about a 
rolling upgrade where the caller is using existing code. Is there a concern 
there? Thanks


was (Author: eyang):
[~daryn] Good catch on the double encode.  Thanks

[~hexiaoqiao] Patch 005 is the better fix.  Thanks

> DelegationTokenAuthenticationFilter get wrong doAsUser since it does not 
> decode URL
> ---
>
> Key: HADOOP-15922
> URL: https://issues.apache.org/jira/browse/HADOOP-15922
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, kms
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
> Fix For: 3.3.0, 3.1.2, 3.2.1
>
> Attachments: HADOOP-15922.001.patch, HADOOP-15922.002.patch, 
> HADOOP-15922.003.patch, HADOOP-15922.004.patch, HADOOP-15922.005.patch
>
>
> DelegationTokenAuthenticationFilter gets the wrong doAsUser when the proxy 
> user from the client is a complete Kerberos name (e.g., 
> user/hostn...@realm.com, which is actually acceptable), because 
> DelegationTokenAuthenticationFilter does not decode the DOAS parameter in the 
> URL, which is encoded by {{URLEncoder}} at the client.
> Take KMS as an example:
> a. KMSClientProvider creates a connection to the KMS server using 
> DelegationTokenAuthenticatedURL#openConnection.
> b. If KMSClientProvider acts as a doAsUser, KMSClientProvider will put 
> {{doas}} with the URL-encoded user as one parameter of the HTTP request. 
> {code:java}
> // proxyuser
> if (doAs != null) {
>   extraParams.put(DO_AS, URLEncoder.encode(doAs, "UTF-8"));
> }
> {code}
> c. When the KMS server receives the request, it does not decode the proxy 
> user. As a result, the KMS server will get the wrong proxy user if this proxy 
> user is a complete Kerberos name or includes some special character. 
> Authentication and authorization exceptions will then be thrown.
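
To make the failure mode concrete, a self-contained sketch of the encode/decode round trip (the principal is a made-up example):

{code:java}
import java.net.URLDecoder;
import java.net.URLEncoder;

public class DoAsRoundTrip {
  public static void main(String[] args) throws Exception {
    String doAs = "user/host@REALM.COM";               // complete Kerberos name
    String encoded = URLEncoder.encode(doAs, "UTF-8");
    System.out.println(encoded);                       // user%2Fhost%40REALM.COM
    // A server that skips decoding sees the line above as the proxy user;
    // decoding restores the original:
    System.out.println(URLDecoder.decode(encoded, "UTF-8")); // user/host@REALM.COM
  }
}
{code}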



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15922) DelegationTokenAuthenticationFilter get wrong doAsUser since it does not decode URL

2018-11-29 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16703635#comment-16703635
 ] 

Eric Yang edited comment on HADOOP-15922 at 11/29/18 6:45 PM:
--

[~daryn] Good catch on the double encode.  Thanks

[~hexiaoqiao] Patch 005 is the better fix.  Thanks


was (Author: eyang):
[~daryn] Good catch on the double encode.

[~hexiaoqiao] Patch 005 is the better fix.  Thanks

> DelegationTokenAuthenticationFilter get wrong doAsUser since it does not 
> decode URL
> ---
>
> Key: HADOOP-15922
> URL: https://issues.apache.org/jira/browse/HADOOP-15922
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, kms
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
> Fix For: 3.3.0, 3.1.2, 3.2.1
>
> Attachments: HADOOP-15922.001.patch, HADOOP-15922.002.patch, 
> HADOOP-15922.003.patch, HADOOP-15922.004.patch, HADOOP-15922.005.patch
>
>
> DelegationTokenAuthenticationFilter gets the wrong doAsUser when the proxy 
> user from the client is a complete Kerberos name (e.g., 
> user/hostn...@realm.com, which is actually acceptable), because 
> DelegationTokenAuthenticationFilter does not decode the DOAS parameter in the 
> URL, which is encoded by {{URLEncoder}} at the client.
> Take KMS as an example:
> a. KMSClientProvider creates a connection to the KMS server using 
> DelegationTokenAuthenticatedURL#openConnection.
> b. If KMSClientProvider acts as a doAsUser, KMSClientProvider will put 
> {{doas}} with the URL-encoded user as one parameter of the HTTP request. 
> {code:java}
> // proxyuser
> if (doAs != null) {
>   extraParams.put(DO_AS, URLEncoder.encode(doAs, "UTF-8"));
> }
> {code}
> c. When the KMS server receives the request, it does not decode the proxy 
> user. As a result, the KMS server will get the wrong proxy user if this proxy 
> user is a complete Kerberos name or includes some special character. 
> Authentication and authorization exceptions will then be thrown.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15922) DelegationTokenAuthenticationFilter get wrong doAsUser since it does not decode URL

2018-11-29 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16703635#comment-16703635
 ] 

Eric Yang commented on HADOOP-15922:


[~daryn] Good catch on the double encode.

[~hexiaoqiao] Patch 005 is the better fix.  Thanks

> DelegationTokenAuthenticationFilter get wrong doAsUser since it does not 
> decode URL
> ---
>
> Key: HADOOP-15922
> URL: https://issues.apache.org/jira/browse/HADOOP-15922
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, kms
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
> Fix For: 3.3.0, 3.1.2, 3.2.1
>
> Attachments: HADOOP-15922.001.patch, HADOOP-15922.002.patch, 
> HADOOP-15922.003.patch, HADOOP-15922.004.patch, HADOOP-15922.005.patch
>
>
> DelegationTokenAuthenticationFilter gets the wrong doAsUser when the proxy 
> user from the client is a complete Kerberos name (e.g., 
> user/hostn...@realm.com, which is actually acceptable), because 
> DelegationTokenAuthenticationFilter does not decode the DOAS parameter in the 
> URL, which is encoded by {{URLEncoder}} at the client.
> Take KMS as an example:
> a. KMSClientProvider creates a connection to the KMS server using 
> DelegationTokenAuthenticatedURL#openConnection.
> b. If KMSClientProvider acts as a doAsUser, KMSClientProvider will put 
> {{doas}} with the URL-encoded user as one parameter of the HTTP request. 
> {code:java}
> // proxyuser
> if (doAs != null) {
>   extraParams.put(DO_AS, URLEncoder.encode(doAs, "UTF-8"));
> }
> {code}
> c. When the KMS server receives the request, it does not decode the proxy 
> user. As a result, the KMS server will get the wrong proxy user if this proxy 
> user is a complete Kerberos name or includes some special character. 
> Authentication and authorization exceptions will then be thrown.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15819) S3A integration test failures: FileSystem is closed! - without parallel test run

2018-11-29 Thread Adam Antal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Antal updated HADOOP-15819:

Attachment: S3ACloseEnforcedFileSystem.java

> S3A integration test failures: FileSystem is closed! - without parallel test 
> run
> 
>
> Key: HADOOP-15819
> URL: https://issues.apache.org/jira/browse/HADOOP-15819
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.1.1
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Critical
> Attachments: S3ACloseEnforcedFileSystem.java, 
> S3ACloseEnforcedFileSystem.java, closed_fs_closers_example_5klines.log.zip
>
>
> Running the integration tests for hadoop-aws {{mvn -Dscale verify}} against 
> Amazon AWS S3 (eu-west-1, us-west-1, with no s3guard) we see a lot of these 
> failures:
> {noformat}
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.408 
> s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.commit.staging.integration.ITDirectoryCommitMRJob
> [ERROR] 
> testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITDirectoryCommitMRJob)
>   Time elapsed: 0.027 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 4.345 
> s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob
> [ERROR] 
> testStagingDirectory(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob)
>   Time elapsed: 0.021 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] 
> testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob)
>   Time elapsed: 0.022 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.489 
> s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJobBadDest
> [ERROR] 
> testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJobBadDest)
>   Time elapsed: 0.023 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.695 
> s <<< FAILURE! - in org.apache.hadoop.fs.s3a.commit.magic.ITMagicCommitMRJob
> [ERROR] testMRJob(org.apache.hadoop.fs.s3a.commit.magic.ITMagicCommitMRJob)  
> Time elapsed: 0.039 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.015 
> s <<< FAILURE! - in org.apache.hadoop.fs.s3a.commit.ITestS3ACommitterFactory
> [ERROR] 
> testEverything(org.apache.hadoop.fs.s3a.commit.ITestS3ACommitterFactory)  
> Time elapsed: 0.014 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> {noformat}
> The big issue is that the tests are running in a serial manner - no test is 
> running on top of another - so we should not see the tests failing like this. 
> The issue could be in how we handle 
> org.apache.hadoop.fs.FileSystem#CACHE - the tests should use the same 
> S3AFileSystem, so if test A uses a FileSystem and closes it in teardown, then 
> test B will get the same FileSystem object from the cache and try to use it, 
> but it is closed.
> We see this a lot in our downstream testing too. It's not possible to tell 
> whether a failed regression test result is an implementation issue in the 
> runtime code or a test implementation problem. 
> I've checked when and what closes the S3AFileSystem with a slightly modified 
> version of S3AFileSystem which logs the closers of the fs if an error occurs. 
> I'll attach this modified java file for reference. See the next example of 
> the result when it's running:
> {noformat}
> 2018-10-04 00:52:25,596 [Thread-4201] ERROR s3a.S3ACloseEnforcedFileSystem 
> (S3ACloseEnforcedFileSystem.java:checkIfClosed(74)) - Use after close(): 
> java.lang.RuntimeException: Using closed FS!.
>   at 
> org.apache.hadoop.fs.s3a.S3ACloseEnforcedFileSystem.checkIfClosed(S3ACloseEnforcedFileSystem.java:73)
>   at 
> org.apache.hadoop.fs.s3a.S3ACloseEnforcedFileSystem.mkdirs(S3ACloseEnforcedFileSystem.java:474)
>   at 
> org.apache.hadoop.fs.contract.AbstractFSContractTestBase.mkdirs(AbstractFSContractTestBase.java:338)
>   at 
> org.apache.hadoop.fs.contract.AbstractFSContractTestBase.setup(AbstractFSContractTestBase.java:193)
>   at 
> org.apache.hadoop.fs.s3a.ITestS3AClosedFS.setup(ITestS3AClosedFS.java:40)
>   at sun.reflect.GeneratedMethodAccessor22.invoke(Unknown Source)
>   

[jira] [Updated] (HADOOP-15819) S3A integration test failures: FileSystem is closed! - without parallel test run

2018-11-29 Thread Adam Antal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Antal updated HADOOP-15819:

Attachment: HADOOP-15819.000.patch

> S3A integration test failures: FileSystem is closed! - without parallel test 
> run
> 
>
> Key: HADOOP-15819
> URL: https://issues.apache.org/jira/browse/HADOOP-15819
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.1.1
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Critical
> Attachments: HADOOP-15819.000.patch, S3ACloseEnforcedFileSystem.java, 
> S3ACloseEnforcedFileSystem.java, closed_fs_closers_example_5klines.log.zip
>
>
> Running the integration tests for hadoop-aws {{mvn -Dscale verify}} against 
> Amazon AWS S3 (eu-west-1, us-west-1, with no s3guard) we see a lot of these 
> failures:
> {noformat}
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.408 
> s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.commit.staging.integration.ITDirectoryCommitMRJob
> [ERROR] 
> testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITDirectoryCommitMRJob)
>   Time elapsed: 0.027 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 4.345 
> s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob
> [ERROR] 
> testStagingDirectory(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob)
>   Time elapsed: 0.021 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] 
> testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob)
>   Time elapsed: 0.022 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.489 
> s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJobBadDest
> [ERROR] 
> testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJobBadDest)
>   Time elapsed: 0.023 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.695 
> s <<< FAILURE! - in org.apache.hadoop.fs.s3a.commit.magic.ITMagicCommitMRJob
> [ERROR] testMRJob(org.apache.hadoop.fs.s3a.commit.magic.ITMagicCommitMRJob)  
> Time elapsed: 0.039 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.015 
> s <<< FAILURE! - in org.apache.hadoop.fs.s3a.commit.ITestS3ACommitterFactory
> [ERROR] 
> testEverything(org.apache.hadoop.fs.s3a.commit.ITestS3ACommitterFactory)  
> Time elapsed: 0.014 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> {noformat}
> The big issue is that the tests are running in a serial manner - no test is 
> running on top of another - so we should not see the tests failing like this. 
> The issue could be in how we handle 
> org.apache.hadoop.fs.FileSystem#CACHE - the tests should use the same 
> S3AFileSystem, so if test A uses a FileSystem and closes it in teardown, then 
> test B will get the same FileSystem object from the cache and try to use it, 
> but it is closed.
> We see this a lot in our downstream testing too. It's not possible to tell 
> whether a failed regression test result is an implementation issue in the 
> runtime code or a test implementation problem. 
> I've checked when and what closes the S3AFileSystem with a slightly modified 
> version of S3AFileSystem which logs the closers of the fs if an error occurs. 
> I'll attach this modified java file for reference. See the next example of 
> the result when it's running:
> {noformat}
> 2018-10-04 00:52:25,596 [Thread-4201] ERROR s3a.S3ACloseEnforcedFileSystem 
> (S3ACloseEnforcedFileSystem.java:checkIfClosed(74)) - Use after close(): 
> java.lang.RuntimeException: Using closed FS!.
>   at 
> org.apache.hadoop.fs.s3a.S3ACloseEnforcedFileSystem.checkIfClosed(S3ACloseEnforcedFileSystem.java:73)
>   at 
> org.apache.hadoop.fs.s3a.S3ACloseEnforcedFileSystem.mkdirs(S3ACloseEnforcedFileSystem.java:474)
>   at 
> org.apache.hadoop.fs.contract.AbstractFSContractTestBase.mkdirs(AbstractFSContractTestBase.java:338)
>   at 
> org.apache.hadoop.fs.contract.AbstractFSContractTestBase.setup(AbstractFSContractTestBase.java:193)
>   at 
> org.apache.hadoop.fs.s3a.ITestS3AClosedFS.setup(ITestS3AClosedFS.java:40)
>   at 

[jira] [Comment Edited] (HADOOP-15819) S3A integration test failures: FileSystem is closed! - without parallel test run

2018-11-29 Thread Adam Antal (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16703616#comment-16703616
 ] 

Adam Antal edited comment on HADOOP-15819 at 11/29/18 6:26 PM:
---

Also a minor remark on {{S3ACloseEnforcedFileSystem}}: the method 
{{processDeleteOnExit()}} should not call the {{checkIfClosed()}} method in 
{{S3ACloseEnforcedFileSystem}}, because the former is only called after 
{{close()}} (the order of the calls is {{S3ACloseEnforcedFileSystem:close() -> 
S3AFileSystem:close() -> FileSystem:close() -> 
S3ACloseEnforcedFileSystem:processDeleteOnExit()}}), thus creating misleading 
errors during the normal close of the filesystem.
I uploaded that for reference.
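
A hedged sketch of that change to the attached helper, assuming the protected {{FileSystem#processDeleteOnExit()}} hook; this fragment is not the actual attachment:

{code:java}
// Skip the closed-FS check on this path: FileSystem#close() invokes
// processDeleteOnExit() during a normal shutdown, so treating it as
// use-after-close produces the misleading errors described above.
@Override
protected void processDeleteOnExit() {
  super.processDeleteOnExit();
}
{code}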


was (Author: adam.antal):
Also a minor remark on {{S3ACloseEnforcedFileSystem}}: the method 
{{processDeleteOnExit()}} should not call the {{checkIfClosed()}} method in 
{{S3ACloseEnforcedFileSystem}}, because the former is only called after 
{{close()}} (the order of the calls is {{S3ACloseEnforcedFileSystem:close() -> 
S3AFileSystem:close() -> FileSystem:close() -> 
S3ACloseEnforcedFileSystem:processDeleteOnExit()}}), thus creating misleading 
errors during the normal close of the filesystem.

> S3A integration test failures: FileSystem is closed! - without parallel test 
> run
> 
>
> Key: HADOOP-15819
> URL: https://issues.apache.org/jira/browse/HADOOP-15819
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.1.1
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Critical
> Attachments: S3ACloseEnforcedFileSystem.java, 
> S3ACloseEnforcedFileSystem.java, closed_fs_closers_example_5klines.log.zip
>
>
> Running the integration tests for hadoop-aws {{mvn -Dscale verify}} against 
> Amazon AWS S3 (eu-west-1, us-west-1, with no s3guard) we see a lot of these 
> failures:
> {noformat}
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.408 
> s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.commit.staging.integration.ITDirectoryCommitMRJob
> [ERROR] 
> testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITDirectoryCommitMRJob)
>   Time elapsed: 0.027 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 4.345 
> s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob
> [ERROR] 
> testStagingDirectory(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob)
>   Time elapsed: 0.021 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] 
> testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob)
>   Time elapsed: 0.022 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.489 
> s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJobBadDest
> [ERROR] 
> testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJobBadDest)
>   Time elapsed: 0.023 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.695 
> s <<< FAILURE! - in org.apache.hadoop.fs.s3a.commit.magic.ITMagicCommitMRJob
> [ERROR] testMRJob(org.apache.hadoop.fs.s3a.commit.magic.ITMagicCommitMRJob)  
> Time elapsed: 0.039 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.015 
> s <<< FAILURE! - in org.apache.hadoop.fs.s3a.commit.ITestS3ACommitterFactory
> [ERROR] 
> testEverything(org.apache.hadoop.fs.s3a.commit.ITestS3ACommitterFactory)  
> Time elapsed: 0.014 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> {noformat}
> The big issue is that the tests are running in a serial manner - no test is 
> running on top of another - so we should not see the tests failing like this. 
> The issue could be in how we handle 
> org.apache.hadoop.fs.FileSystem#CACHE - the tests should use the same 
> S3AFileSystem, so if test A uses a FileSystem and closes it in teardown, then 
> test B will get the same FileSystem object from the cache and try to use it, 
> but it is closed.
> We see this a lot in our downstream testing too. It's not possible to tell 
> whether a failed regression test result is an implementation issue in the 
> runtime code or a test implementation problem. 
> I've checked when and what closes the 

[jira] [Commented] (HADOOP-15960) Update guava to 27.0-jre in hadoop-common

2018-11-29 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16703622#comment-16703622
 ] 

Hadoop QA commented on HADOOP-15960:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
6s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 56s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-client-modules/hadoop-client-check-invariants 
hadoop-client-modules/hadoop-client-check-test-invariants {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
40s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
35s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 14m 35s{color} 
| {color:red} root generated 19 new + 1481 unchanged - 7 fixed = 1500 total 
(was 1488) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 1s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
33s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 27s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-client-modules/hadoop-client-check-invariants 
hadoop-client-modules/hadoop-client-check-test-invariants {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
43s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
25s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 17m 
24s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 15m 
31s{color} | 

[jira] [Commented] (HADOOP-15819) S3A integration test failures: FileSystem is closed! - without parallel test run

2018-11-29 Thread Adam Antal (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16703616#comment-16703616
 ] 

Adam Antal commented on HADOOP-15819:
-

Also a minor remark on {{S3ACloseEnforcedFileSystem}}: the method 
{{processDeleteOnExit()}} should not call the {{checkIfClosed()}} method in 
{{S3ACloseEnforcedFileSystem}}, because the former is only called after 
{{close()}} (the order of the calls is {{S3ACloseEnforcedFileSystem:close() -> 
S3AFileSystem:close() -> FileSystem:close() -> 
S3ACloseEnforcedFileSystem:processDeleteOnExit()}}), thus creating misleading 
errors during the normal close of the filesystem.

> S3A integration test failures: FileSystem is closed! - without parallel test 
> run
> 
>
> Key: HADOOP-15819
> URL: https://issues.apache.org/jira/browse/HADOOP-15819
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.1.1
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Critical
> Attachments: S3ACloseEnforcedFileSystem.java, 
> closed_fs_closers_example_5klines.log.zip
>
>
> Running the integration tests for hadoop-aws {{mvn -Dscale verify}} against 
> Amazon AWS S3 (eu-west-1, us-west-1, with no s3guard) we see a lot of these 
> failures:
> {noformat}
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.408 
> s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.commit.staging.integration.ITDirectoryCommitMRJob
> [ERROR] 
> testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITDirectoryCommitMRJob)
>   Time elapsed: 0.027 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 4.345 
> s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob
> [ERROR] 
> testStagingDirectory(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob)
>   Time elapsed: 0.021 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] 
> testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob)
>   Time elapsed: 0.022 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.489 
> s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJobBadDest
> [ERROR] 
> testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJobBadDest)
>   Time elapsed: 0.023 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.695 
> s <<< FAILURE! - in org.apache.hadoop.fs.s3a.commit.magic.ITMagicCommitMRJob
> [ERROR] testMRJob(org.apache.hadoop.fs.s3a.commit.magic.ITMagicCommitMRJob)  
> Time elapsed: 0.039 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.015 
> s <<< FAILURE! - in org.apache.hadoop.fs.s3a.commit.ITestS3ACommitterFactory
> [ERROR] 
> testEverything(org.apache.hadoop.fs.s3a.commit.ITestS3ACommitterFactory)  
> Time elapsed: 0.014 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> {noformat}
> The big issue is that the tests are running in a serial manner - no test is 
> running on top of another - so we should not see the tests failing like this. 
> The issue could be in how we handle 
> org.apache.hadoop.fs.FileSystem#CACHE - the tests should use the same 
> S3AFileSystem, so if test A uses a FileSystem and closes it in teardown, then 
> test B will get the same FileSystem object from the cache and try to use it, 
> but it is closed.
> We see this a lot in our downstream testing too. It's not possible to tell 
> whether a failed regression test result is an implementation issue in the 
> runtime code or a test implementation problem. 
> I've checked when and what closes the S3AFileSystem with a slightly modified 
> version of S3AFileSystem which logs the closers of the fs if an error occurs. 
> I'll attach this modified java file for reference. See the next example of 
> the result when it's running:
> {noformat}
> 2018-10-04 00:52:25,596 [Thread-4201] ERROR s3a.S3ACloseEnforcedFileSystem 
> (S3ACloseEnforcedFileSystem.java:checkIfClosed(74)) - Use after close(): 
> java.lang.RuntimeException: Using closed FS!.
>   at 
> org.apache.hadoop.fs.s3a.S3ACloseEnforcedFileSystem.checkIfClosed(S3ACloseEnforcedFileSystem.java:73)
>   at 
> 

[jira] [Commented] (HADOOP-15959) revert HADOOP-12751

2018-11-29 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16703598#comment-16703598
 ] 

Hudson commented on HADOOP-15959:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15529 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15529/])
HADOOP-15959. Revert "HADOOP-12751. While using kerberos Hadoop (stevel: rev 
d0edd37269bb40290b409d583bcf3b70897c13e0)
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUserGroupInformation.java
* (edit) 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestKerberosAuthenticationHandler.java
* (edit) 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/TestKerberosName.java
* (edit) 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/KerberosName.java
* (edit) hadoop-common-project/hadoop-common/src/site/markdown/SecureMode.md
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/KDiag.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestKDiag.java


> revert HADOOP-12751
> ---
>
> Key: HADOOP-15959
> URL: https://issues.apache.org/jira/browse/HADOOP-15959
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 3.2.0, 3.1.1, 2.9.2, 3.0.3, 2.7.7, 2.8.5
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 3.0.4, 3.1.2, 3.2.1
>
> Attachments: HADOOP-15959-001.patch, HADOOP-15959-branch-2-002.patch
>
>
> HADOOP-12751 doesn't quite work right. Revert.
> (this patch is so jenkins can do the test runs)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12751) While using kerberos Hadoop incorrectly assumes names with '@' to be non-simple

2018-11-29 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-12751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16703599#comment-16703599
 ] 

Hudson commented on HADOOP-12751:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15529 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15529/])
HADOOP-15959. Revert "HADOOP-12751. While using kerberos Hadoop (stevel: rev 
d0edd37269bb40290b409d583bcf3b70897c13e0)
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUserGroupInformation.java
* (edit) 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestKerberosAuthenticationHandler.java
* (edit) 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/TestKerberosName.java
* (edit) 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/KerberosName.java
* (edit) hadoop-common-project/hadoop-common/src/site/markdown/SecureMode.md
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/KDiag.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestKDiag.java


> While using kerberos Hadoop incorrectly assumes names with '@' to be 
> non-simple
> ---
>
> Key: HADOOP-12751
> URL: https://issues.apache.org/jira/browse/HADOOP-12751
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.2
> Environment: kerberos
>Reporter: Bolke de Bruin
>Assignee: Bolke de Bruin
>Priority: Critical
>  Labels: kerberos
> Fix For: 2.8.0, 3.0.0-alpha1, 2.7.6
>
> Attachments: 0001-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0001-Remove-check-for-user-name-characters-and.patch, 
> 0002-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0003-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0004-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0005-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0006-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0007-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0007-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0008-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0008-HADOOP-12751-leave-user-validation-to-os.patch, HADOOP-12751-009.patch, 
> HADOOP-12751-branch-2.7.009.patch
>
>
> In the scenario of a trust between two directories, e.g. FreeIPA (ipa.local) 
> and Active Directory (ad.local), users can be made available on the OS level 
> by something like sssd. The trusted users will be of the form 
> 'user@ad.local', while other users will not contain the domain. Executing 
> 'id -Gn user@ad.local' will successfully return the groups the user belongs 
> to if configured correctly. 
> However, Hadoop assumes that user names containing '@' cannot be correct. 
> This code is in KerberosName.java and seems to be a validator checking 
> whether the 'auth_to_local' rules were applied correctly.
> In my opinion this should be removed, changed to a different kind of check, 
> or maybe logged as a warning while still proceeding, as the current behavior 
> limits integration possibilities with other standard tools.
> Workarounds are difficult to apply (by having system tools rewrite the name 
> to, for example, user_ad_local) due to downstream consequences.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15959) revert HADOOP-12751

2018-11-29 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15959:

Status: Patch Available  (was: Open)

> revert HADOOP-12751
> ---
>
> Key: HADOOP-15959
> URL: https://issues.apache.org/jira/browse/HADOOP-15959
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.5, 2.7.7, 3.0.3, 2.9.2, 3.1.1, 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 3.0.4, 3.1.2, 3.2.1
>
> Attachments: HADOOP-15959-001.patch, HADOOP-15959-branch-2-002.patch
>
>
> HADOOP-12751 doesn't quite work right. Revert.
> (this patch is so jenkins can do the test runs)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15959) revert HADOOP-12751

2018-11-29 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15959:

Attachment: HADOOP-15959-branch-2-002.patch

> revert HADOOP-12751
> ---
>
> Key: HADOOP-15959
> URL: https://issues.apache.org/jira/browse/HADOOP-15959
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 3.2.0, 3.1.1, 2.9.2, 3.0.3, 2.7.7, 2.8.5
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 3.0.4, 3.1.2, 3.2.1
>
> Attachments: HADOOP-15959-001.patch, HADOOP-15959-branch-2-002.patch
>
>
> HADOOP-12751 doesn't quite work right. Revert.
> (this patch is so jenkins can do the test runs)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15959) revert HADOOP-12751

2018-11-29 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15959:

Status: Open  (was: Patch Available)

> revert HADOOP-12751
> ---
>
> Key: HADOOP-15959
> URL: https://issues.apache.org/jira/browse/HADOOP-15959
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.5, 2.7.7, 3.0.3, 2.9.2, 3.1.1, 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 3.0.4, 3.1.2, 3.2.1
>
> Attachments: HADOOP-15959-001.patch
>
>
> HADOOP-12751 doesn't quite work right. Revert.
> (this patch is so jenkins can do the test runs)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15959) revert HADOOP-12751

2018-11-29 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15959:

Fix Version/s: 3.2.1
   3.1.2
   3.0.4

> revert HADOOP-12751
> ---
>
> Key: HADOOP-15959
> URL: https://issues.apache.org/jira/browse/HADOOP-15959
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 3.2.0, 3.1.1, 2.9.2, 3.0.3, 2.7.7, 2.8.5
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 3.0.4, 3.1.2, 3.2.1
>
> Attachments: HADOOP-15959-001.patch
>
>
> HADOOP-12751 doesn't quite work right. Revert.
> (this patch is so jenkins can do the test runs)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15959) revert HADOOP-12751

2018-11-29 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16703576#comment-16703576
 ] 

Steve Loughran commented on HADOOP-15959:
-

Jenkins is happy; reverted in branch-3. Checking branch-2 next.

> revert HADOOP-12751
> ---
>
> Key: HADOOP-15959
> URL: https://issues.apache.org/jira/browse/HADOOP-15959
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 3.2.0, 3.1.1, 2.9.2, 3.0.3, 2.7.7, 2.8.5
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 3.0.4, 3.1.2, 3.2.1
>
> Attachments: HADOOP-15959-001.patch
>
>
> HADOOP-12751 doesn't quite work right. Revert.
> (this patch is so jenkins can do the test runs)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15955) declare that fs.s3a.ext. is a prefix for arbitrary extensions

2018-11-29 Thread Larry McCay (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16703572#comment-16703572
 ] 

Larry McCay commented on HADOOP-15955:
--

This makes sense, especially given the extensibility built into these storage 
connectors for finding credentials, etc.

+1

> declare that fs.s3a.ext. is a prefix for arbitrary extensions
> -
>
> Key: HADOOP-15955
> URL: https://issues.apache.org/jira/browse/HADOOP-15955
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Priority: Trivial
>
> It turns out to be useful to use per-bucket config extensions for things like 
> delegation tokens. 
> To avoid third-party extensions polluting the fs.s3a. namespace with their 
> own config options (meaning it's inevitable that things break when we add 
> some new fs option), I propose the following policy:
> # applications/extensions MUST NOT add their own config options under fs.s3a.
> # anyone is free to use "fs.s3a.ext.$product." options in s3a. 
> # we promise not to go near those options in the S3A module
> # it is left to the people who add their own settings here to avoid clashing 
> with others. This is why you should use a unique product name.
> With this specified (where?), if someone uses some fs.s3a. option which gets 
> broken by an updated s3a connector, we say "wontfix", and indeed get to tell 
> them off for doing this.
> Propose: similar for the other stores.
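
As a worked illustration of the prefix ("widgetco" and the option name are hypothetical, not any shipped product):

{code:java}
import org.apache.hadoop.conf.Configuration;

public class ExtPrefixExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Third-party settings live under fs.s3a.ext.$product., which the S3A
    // module promises never to define or read itself.
    conf.set("fs.s3a.ext.widgetco.token.endpoint", "https://tokens.example.com/");
    System.out.println(conf.get("fs.s3a.ext.widgetco.token.endpoint"));
  }
}
{code}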



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15819) S3A integration test failures: FileSystem is closed! - without parallel test run

2018-11-29 Thread Adam Antal (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16703567#comment-16703567
 ] 

Adam Antal commented on HADOOP-15819:
-

I investigated a bit further: when caching is disabled in the 
{{AbstractS3ATestBase:createConfiguration()}} and 
{{ITestS3ACommitterFactory:setup()}} methods, the "Using closed FS!" exception 
does not appear. 
Note that the committer uses another config object, so the setting needs to be 
added there too.
Strictly speaking there's one failing test, {{ITestS3AClosedFS}}, but this is 
due to the addition of {{S3ACloseEnforcedFileSystem}}, which raises an 
exception after closing the fs; that is exactly what we are investigating, so 
the exception is irrelevant.
In case we decide later to explicitly disable the cache, I have uploaded a 
patch which fixes the bug.
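
For reference, a minimal sketch of what disabling the cache looks like; {{fs.s3a.impl.disable.cache}} is the standard per-scheme switch that bypasses FileSystem#CACHE:

{code:java}
import org.apache.hadoop.conf.Configuration;

public class DisableS3ACacheSketch {
  // With the cache disabled, FileSystem.get() hands out a fresh S3AFileSystem
  // per call, so a close() in one test's teardown cannot poison the next test.
  public static Configuration createConfiguration() {
    Configuration conf = new Configuration();
    conf.setBoolean("fs.s3a.impl.disable.cache", true);
    return conf;
  }
}
{code}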

> S3A integration test failures: FileSystem is closed! - without parallel test 
> run
> 
>
> Key: HADOOP-15819
> URL: https://issues.apache.org/jira/browse/HADOOP-15819
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.1.1
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Critical
> Attachments: S3ACloseEnforcedFileSystem.java, 
> closed_fs_closers_example_5klines.log.zip
>
>
> Running the integration tests for hadoop-aws {{mvn -Dscale verify}} against 
> Amazon AWS S3 (eu-west-1, us-west-1, with no s3guard) we see a lot of these 
> failures:
> {noformat}
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.408 
> s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.commit.staging.integration.ITDirectoryCommitMRJob
> [ERROR] 
> testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITDirectoryCommitMRJob)
>   Time elapsed: 0.027 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 4.345 
> s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob
> [ERROR] 
> testStagingDirectory(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob)
>   Time elapsed: 0.021 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] 
> testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob)
>   Time elapsed: 0.022 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.489 
> s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJobBadDest
> [ERROR] 
> testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJobBadDest)
>   Time elapsed: 0.023 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.695 
> s <<< FAILURE! - in org.apache.hadoop.fs.s3a.commit.magic.ITMagicCommitMRJob
> [ERROR] testMRJob(org.apache.hadoop.fs.s3a.commit.magic.ITMagicCommitMRJob)  
> Time elapsed: 0.039 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.015 
> s <<< FAILURE! - in org.apache.hadoop.fs.s3a.commit.ITestS3ACommitterFactory
> [ERROR] 
> testEverything(org.apache.hadoop.fs.s3a.commit.ITestS3ACommitterFactory)  
> Time elapsed: 0.014 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> {noformat}
> The big issue is that the tests are running in a serial manner - no test is 
> running on top of another - so we should not see the tests failing like this. 
> The issue could be in how we handle 
> org.apache.hadoop.fs.FileSystem#CACHE - the tests should use the same 
> S3AFileSystem, so if test A uses a FileSystem and closes it in teardown, then 
> test B will get the same FileSystem object from the cache and try to use it, 
> but it is closed.
> We see this a lot in our downstream testing too. It's not possible to tell 
> whether a failed regression test result is an implementation issue in the 
> runtime code or a test implementation problem. 
> I've checked when and what closes the S3AFileSystem with a slightly modified 
> version of S3AFileSystem which logs the closers of the fs if an error occurs. 
> I'll attach this modified java file for reference. See the next example of 
> the result when it's running:
> {noformat}
> 2018-10-04 00:52:25,596 [Thread-4201] ERROR s3a.S3ACloseEnforcedFileSystem 
> (S3ACloseEnforcedFileSystem.java:checkIfClosed(74)) - Use after close(): 
> java.lang.RuntimeException: Using 

[jira] [Created] (HADOOP-15961) S3A committers: make sure there's regular progress() calls

2018-11-29 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-15961:
---

 Summary: S3A committers: make sure there's regular progress() calls
 Key: HADOOP-15961
 URL: https://issues.apache.org/jira/browse/HADOOP-15961
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Reporter: Steve Loughran


MAPREDUCE-7164 highlights how more context.progress() callbacks are needed 
inside job/task commit, just for HDFS.

The S3A committers should be reviewed similarly.

At a glance:

StagingCommitter.commitTaskInternal() is at risk if a task writes enough data 
to the local fs that the upload takes longer than the timeout.

It should call progress() after every single file commit, or better: modify 
{{uploadFileToPendingCommit}} to take a Progressable for progress callbacks 
after every part upload.
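
A hedged sketch of that shape; the class, the byte-array parts, and the upload stub are illustrative, not the committer's actual code:

{code:java}
import org.apache.hadoop.util.Progressable;

class UploadSketch {
  // Thread a Progressable through the upload so long-running multipart
  // uploads keep reporting liveness to the task.
  void uploadFileToPendingCommit(byte[][] parts, Progressable progress) {
    for (byte[] part : parts) {
      uploadPart(part);      // stand-in for the real part upload
      progress.progress();   // callback after every part upload
    }
  }

  private void uploadPart(byte[] part) {
    // placeholder for the actual S3 multipart upload call
  }
}
{code}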





--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14927) ITestS3GuardTool failures in testDestroyNoBucket()

2018-11-29 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16703515#comment-16703515
 ] 

Hudson commented on HADOOP-14927:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15528 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15528/])
HADOOP-14927. ITestS3GuardTool failures in testDestroyNoBucket(). (mackrorysd: 
rev 7eb0d3a32435da110dc9e6004dba8c5c9b082c35)
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3GuardTool.java


> ITestS3GuardTool failures in testDestroyNoBucket()
> --
>
> Key: HADOOP-14927
> URL: https://issues.apache.org/jira/browse/HADOOP-14927
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1, 3.0.0-alpha3, 3.1.0
>Reporter: Aaron Fabbri
>Assignee: Gabor Bota
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HADOOP-14927.001.patch, HADOOP-14927.002.patch
>
>
> Hit this when testing for the Hadoop 3.0.0-beta1 RC0.
> {noformat}
> hadoop-3.0.0-beta1-src/hadoop-tools/hadoop-aws$ mvn clean verify 
> -Dit.test="ITestS3GuardTool*" -Dtest=none -Ds3guard -Ddynamo
> ...
> Failed tests: 
>   
> ITestS3GuardToolDynamoDB>AbstractS3GuardToolTestBase.testDestroyNoBucket:228 
> Expected an exception, got 0
>   ITestS3GuardToolLocal>AbstractS3GuardToolTestBase.testDestroyNoBucket:228 
> Expected an exception, got 0
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15218) Make Hadoop compatible with Guava 22.0+

2018-11-29 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16703504#comment-16703504
 ] 

Hadoop QA commented on HADOOP-15218:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  3s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 37s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 15m 
22s{color} | {color:green} hadoop-yarn-services-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 70m 11s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-15218 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12909865/HADOOP-15218-001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 9e1c6364f7e6 4.4.0-134-generic #160~14.04.1-Ubuntu SMP Fri Aug 
17 11:07:07 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / c1d24f8 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15578/testReport/ |
| Max. process+thread count | 754 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core
 |
| Console output | 

[jira] [Updated] (HADOOP-14927) ITestS3GuardTool failures in testDestroyNoBucket()

2018-11-29 Thread Sean Mackrory (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HADOOP-14927:
---
Fix Version/s: 3.3.0

> ITestS3GuardTool failures in testDestroyNoBucket()
> --
>
> Key: HADOOP-14927
> URL: https://issues.apache.org/jira/browse/HADOOP-14927
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1, 3.0.0-alpha3, 3.1.0
>Reporter: Aaron Fabbri
>Assignee: Gabor Bota
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HADOOP-14927.001.patch, HADOOP-14927.002.patch
>
>
> Hit this when testing for the Hadoop 3.0.0-beta1 RC0.
> {noformat}
> hadoop-3.0.0-beta1-src/hadoop-tools/hadoop-aws$ mvn clean verify 
> -Dit.test="ITestS3GuardTool*" -Dtest=none -Ds3guard -Ddynamo
> ...
> Failed tests: 
>   
> ITestS3GuardToolDynamoDB>AbstractS3GuardToolTestBase.testDestroyNoBucket:228 
> Expected an exception, got 0
>   ITestS3GuardToolLocal>AbstractS3GuardToolTestBase.testDestroyNoBucket:228 
> Expected an exception, got 0
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14927) ITestS3GuardTool failures in testDestroyNoBucket()

2018-11-29 Thread Sean Mackrory (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HADOOP-14927:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> ITestS3GuardTool failures in testDestroyNoBucket()
> --
>
> Key: HADOOP-14927
> URL: https://issues.apache.org/jira/browse/HADOOP-14927
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1, 3.0.0-alpha3, 3.1.0
>Reporter: Aaron Fabbri
>Assignee: Gabor Bota
>Priority: Minor
> Attachments: HADOOP-14927.001.patch, HADOOP-14927.002.patch
>
>
> Hit this when testing for the Hadoop 3.0.0-beta1 RC0.
> {noformat}
> hadoop-3.0.0-beta1-src/hadoop-tools/hadoop-aws$ mvn clean verify 
> -Dit.test="ITestS3GuardTool*" -Dtest=none -Ds3guard -Ddynamo
> ...
> Failed tests: 
>   
> ITestS3GuardToolDynamoDB>AbstractS3GuardToolTestBase.testDestroyNoBucket:228 
> Expected an exception, got 0
>   ITestS3GuardToolLocal>AbstractS3GuardToolTestBase.testDestroyNoBucket:228 
> Expected an exception, got 0
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14927) ITestS3GuardTool failures in testDestroyNoBucket()

2018-11-29 Thread Sean Mackrory (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16703465#comment-16703465
 ] 

Sean Mackrory commented on HADOOP-14927:


After fixing a few things in my test configuration and environment, I'm down to 
a much smaller, much more stable list of failures that I can clearly see are 
not related to new patches. This patch looks good to me, doesn't introduce any 
regressions (I tested with -Ds3guard, -Ds3guard -Ddynamo, and -Ds3guard 
-Ddynamo -Dauth), and reliably fixes the test in question. Committing...

> ITestS3GuardTool failures in testDestroyNoBucket()
> --
>
> Key: HADOOP-14927
> URL: https://issues.apache.org/jira/browse/HADOOP-14927
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1, 3.0.0-alpha3, 3.1.0
>Reporter: Aaron Fabbri
>Assignee: Gabor Bota
>Priority: Minor
> Attachments: HADOOP-14927.001.patch, HADOOP-14927.002.patch
>
>
> Hit this when testing for the Hadoop 3.0.0-beta1 RC0.
> {noformat}
> hadoop-3.0.0-beta1-src/hadoop-tools/hadoop-aws$ mvn clean verify 
> -Dit.test="ITestS3GuardTool*" -Dtest=none -Ds3guard -Ddynamo
> ...
> Failed tests: 
>   
> ITestS3GuardToolDynamoDB>AbstractS3GuardToolTestBase.testDestroyNoBucket:228 
> Expected an exception, got 0
>   ITestS3GuardToolLocal>AbstractS3GuardToolTestBase.testDestroyNoBucket:228 
> Expected an exception, got 0
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15922) DelegationTokenAuthenticationFilter get wrong doAsUser since it does not decode URL

2018-11-29 Thread He Xiaoqiao (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16703412#comment-16703412
 ] 

He Xiaoqiao commented on HADOOP-15922:
--

[~daryn] Thank you for the correction. I uploaded a new v005 patch following 
your suggestions. Could you help revert the commit and review the new one? 
Thanks again.

> DelegationTokenAuthenticationFilter get wrong doAsUser since it does not 
> decode URL
> ---
>
> Key: HADOOP-15922
> URL: https://issues.apache.org/jira/browse/HADOOP-15922
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, kms
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
> Fix For: 3.3.0, 3.1.2, 3.2.1
>
> Attachments: HADOOP-15922.001.patch, HADOOP-15922.002.patch, 
> HADOOP-15922.003.patch, HADOOP-15922.004.patch, HADOOP-15922.005.patch
>
>
> DelegationTokenAuthenticationFilter get wrong doAsUser when proxy user from 
> client is complete kerberos name (e.g., user/hostn...@realm.com, actually it 
> is acceptable), because DelegationTokenAuthenticationFilter does not decode 
> DOAS parameter in URL which is encoded by {{URLEncoder}} at client.
> e.g. KMS as example:
> a. KMSClientProvider creates connection to KMS Server using 
> DelegationTokenAuthenticatedURL#openConnection.
> b. If KMSClientProvider is a doAsUser, KMSClientProvider will put {{doas}} 
> with url encoded user as one parameter of http request. 
> {code:java}
> // proxyuser
> if (doAs != null) {
>   extraParams.put(DO_AS, URLEncoder.encode(doAs, "UTF-8"));
> }
> {code}
> c. when KMS server receives the request, it does not decode the proxy user.
> As result, KMS Server will get the wrong proxy user if this proxy user is 
> complete Kerberos Name or it includes some special character. Some other 
> authentication and authorization exception will throws next to it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15922) DelegationTokenAuthenticationFilter get wrong doAsUser since it does not decode URL

2018-11-29 Thread He Xiaoqiao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

He Xiaoqiao updated HADOOP-15922:
-
Attachment: HADOOP-15922.005.patch

> DelegationTokenAuthenticationFilter get wrong doAsUser since it does not 
> decode URL
> ---
>
> Key: HADOOP-15922
> URL: https://issues.apache.org/jira/browse/HADOOP-15922
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, kms
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
> Fix For: 3.3.0, 3.1.2, 3.2.1
>
> Attachments: HADOOP-15922.001.patch, HADOOP-15922.002.patch, 
> HADOOP-15922.003.patch, HADOOP-15922.004.patch, HADOOP-15922.005.patch
>
>
> DelegationTokenAuthenticationFilter get wrong doAsUser when proxy user from 
> client is complete kerberos name (e.g., user/hostn...@realm.com, actually it 
> is acceptable), because DelegationTokenAuthenticationFilter does not decode 
> DOAS parameter in URL which is encoded by {{URLEncoder}} at client.
> e.g. KMS as example:
> a. KMSClientProvider creates connection to KMS Server using 
> DelegationTokenAuthenticatedURL#openConnection.
> b. If KMSClientProvider is a doAsUser, KMSClientProvider will put {{doas}} 
> with url encoded user as one parameter of http request. 
> {code:java}
> // proxyuser
> if (doAs != null) {
>   extraParams.put(DO_AS, URLEncoder.encode(doAs, "UTF-8"));
> }
> {code}
> c. when KMS server receives the request, it does not decode the proxy user.
> As result, KMS Server will get the wrong proxy user if this proxy user is 
> complete Kerberos Name or it includes some special character. Some other 
> authentication and authorization exception will throws next to it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15960) Update guava to 27.0-jre in hadoop-common

2018-11-29 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15960:

Status: Patch Available  (was: In Progress)

> Update guava to 27.0-jre in hadoop-common
> -
>
> Key: HADOOP-15960
> URL: https://issues.apache.org/jira/browse/HADOOP-15960
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, security
>Affects Versions: 3.1.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-15960.000.WIP.patch
>
>
> com.google.guava:guava should be upgraded to 27.0-jre due to new CVE's found 
> [CVE-2018-10237|https://nvd.nist.gov/vuln/detail/CVE-2018-10237].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15959) revert HADOOP-12751

2018-11-29 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16703373#comment-16703373
 ] 

Hadoop QA commented on HADOOP-15959:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m  5s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
40s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  0s{color} | {color:orange} hadoop-common-project: The patch generated 1 new 
+ 127 unchanged - 4 fixed = 128 total (was 131) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 56s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
52s{color} | {color:green} hadoop-auth in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m 34s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
43s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}110m 50s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ha.TestZKFailoverController |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-15959 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12950013/HADOOP-15959-001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux dcaec2c452f7 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed Oct 
31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / b71cc7f |
| 

[jira] [Updated] (HADOOP-15960) Update guava to 27.0-jre in hadoop-common

2018-11-29 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15960:

Description: com.google.guava:guava should be upgraded to 27.0-jre due to 
new CVE's found 
[CVE-2018-10237|https://nvd.nist.gov/vuln/detail/CVE-2018-10237].  (was: 
com.google.guava:guava should be upgraded to 27.0-jre due to new CVE's found 
[CVE-2018-10237|http://example.com].)

> Update guava to 27.0-jre in hadoop-common
> -
>
> Key: HADOOP-15960
> URL: https://issues.apache.org/jira/browse/HADOOP-15960
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, security
>Affects Versions: 3.1.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-15960.000.WIP.patch
>
>
> com.google.guava:guava should be upgraded to 27.0-jre due to new CVE's found 
> [CVE-2018-10237|https://nvd.nist.gov/vuln/detail/CVE-2018-10237].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15960) Update guava to 27.0-jre in hadoop-common

2018-11-29 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15960:

Description: com.google.guava:guava should be upgraded to 27.0-jre due to 
new CVE's found [CVE-2018-10237|http://example.com].  (was: 
com.google.guava:guava should be upgraded to 27.0-jre due to new CVE's found 
CVE-2018-10237.)

> Update guava to 27.0-jre in hadoop-common
> -
>
> Key: HADOOP-15960
> URL: https://issues.apache.org/jira/browse/HADOOP-15960
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, security
>Affects Versions: 3.1.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-15960.000.WIP.patch
>
>
> com.google.guava:guava should be upgraded to 27.0-jre due to new CVE's found 
> [CVE-2018-10237|http://example.com].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15960) Update guava to 27.0-jre in hadoop-common

2018-11-29 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16703365#comment-16703365
 ] 

Gabor Bota commented on HADOOP-15960:
-

I'll search for other guava updates and link them here.
Uploaded my initial patch for this. It compiles, and I've run tests for a few 
modules, but I'm not really sure whether the way I modified 
{{ensure-jars-have-correct-contents.sh}} is the right one.

This was the error message I fixed by adding {{allowed_expr+="|^afu/"}}:
{noformat}
[INFO] Artifact looks correct: 'hadoop-client-api-3.3.0-SNAPSHOT.jar'
[ERROR] Found artifact with unexpected contents: 
'hadoop-client-modules/hadoop-client-runtime/target/hadoop-client-runtime-3.3.0-SNAPSHOT.jar'
Please check the following and either correct the build or update
the allowed list with reasoning.

afu/
afu/org/
afu/org/checkerframework/
afu/org/checkerframework/checker/formatter/
afu/org/checkerframework/checker/formatter/FormatUtil$Conversion.class

afu/org/checkerframework/checker/formatter/FormatUtil$ExcessiveOrMissingFormatArgumentException.class

afu/org/checkerframework/checker/formatter/FormatUtil$IllegalFormatConversionCategoryException.class
afu/org/checkerframework/checker/formatter/FormatUtil.class
afu/org/checkerframework/checker/nullness/
afu/org/checkerframework/checker/nullness/NullnessUtils.class
afu/org/checkerframework/checker/regex/

afu/org/checkerframework/checker/regex/RegexUtil$CheckedPatternSyntaxException.class
afu/org/checkerframework/checker/regex/RegexUtil.class
afu/org/checkerframework/checker/units/
afu/org/checkerframework/checker/units/UnitsTools.class
afu/plume/RegexUtil$CheckedPatternSyntaxException.class
afu/plume/RegexUtil.class
{noformat}
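
For context, the script effectively matches every jar entry against one big 
alternation of allowed prefixes and flags anything left over; below is a rough 
Java rendering of what the extra {{afu/}} branch permits. The prefix list here 
is illustrative only, not the script's real allowed list.
{code:java}
import java.util.Arrays;
import java.util.List;
import java.util.regex.Pattern;

public class AllowedEntriesCheck {
  public static void main(String[] args) {
    // Alternation of allowed entry prefixes; "afu/" is the newly added branch.
    Pattern allowed = Pattern.compile("^(org/apache/hadoop/|META-INF/|afu/)");
    List<String> entries = Arrays.asList(
        "org/apache/hadoop/fs/FileSystem.class",
        "afu/org/checkerframework/checker/regex/RegexUtil.class",
        "com/thirdparty/Leaked.class");
    for (String entry : entries) {
      if (!allowed.matcher(entry).find()) {
        // The real script lists the offending entries and fails the build.
        System.out.println("Found artifact with unexpected contents: " + entry);
      }
    }
  }
}
{code}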

> Update guava to 27.0-jre in hadoop-common
> -
>
> Key: HADOOP-15960
> URL: https://issues.apache.org/jira/browse/HADOOP-15960
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, security
>Affects Versions: 3.1.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-15960.000.WIP.patch
>
>
> com.google.guava:guava should be upgraded to 27.0-jre due to new CVE's found 
> CVE-2018-10237.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15960) Update guava to 27.0-jre in hadoop-common

2018-11-29 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15960:

Attachment: HADOOP-15960.000.WIP.patch

> Update guava to 27.0-jre in hadoop-common
> -
>
> Key: HADOOP-15960
> URL: https://issues.apache.org/jira/browse/HADOOP-15960
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, security
>Affects Versions: 3.1.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-15960.000.WIP.patch
>
>
> com.google.guava:guava should be upgraded to 27.0-jre due to new CVE's found 
> CVE-2018-10237.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15960) Update guava to 27.0-jre in hadoop-common

2018-11-29 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16703352#comment-16703352
 ] 

Steve Loughran commented on HADOOP-15960:
-

Link to the [CVE|https://nvd.nist.gov/vuln/detail/CVE-2018-10237].

This is a DoS when deserializing Java objects.

> Update guava to 27.0-jre in hadoop-common
> -
>
> Key: HADOOP-15960
> URL: https://issues.apache.org/jira/browse/HADOOP-15960
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, security
>Affects Versions: 3.1.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
>
> com.google.guava:guava should be upgraded to 27.0-jre due to new CVE's found 
> CVE-2018-10237.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15922) DelegationTokenAuthenticationFilter get wrong doAsUser since it does not decode URL

2018-11-29 Thread Daryn Sharp (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16703329#comment-16703329
 ] 

Daryn Sharp commented on HADOOP-15922:
--

I was asked to take a look at this patch. The problem is not on the 
server-side. The query string params (before this patch) are already correctly 
decoded. The real problem is that the client is double encoding in 
{{DelegationTokenAuthenticator#doDelegationTokenOperation}}.
{code:java}
// proxyuser
if (doAsUser != null) {
  params.put(DelegationTokenAuthenticatedURL.DO_AS,
  URLEncoder.encode(doAsUser, "UTF-8"));
}
[...a few lines later...]
for (Map.Entry<String, String> entry : params.entrySet()) {
  sb.append(separator).append(entry.getKey()).append("=").
  append(URLEncoder.encode(entry.getValue(), "UTF8"));
  separator = "&";
}
{code}
Please revert and remove the encoding of the DO_AS param when inserted in the 
map.
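
For anyone skimming, here is a standalone sketch of why that double encoding 
breaks Kerberos principals; the principal below is a made-up example, not 
taken from the patch.
{code:java}
import java.net.URLDecoder;
import java.net.URLEncoder;

public class DoubleEncodeDemo {
  public static void main(String[] args) throws Exception {
    String doAs = "user/host@EXAMPLE.COM";
    String once = URLEncoder.encode(doAs, "UTF-8");
    String twice = URLEncoder.encode(once, "UTF-8");
    System.out.println(once);   // user%2Fhost%40EXAMPLE.COM
    System.out.println(twice);  // user%252Fhost%2540EXAMPLE.COM
    // The server decodes only once, so a double-encoded doAs arrives
    // still percent-encoded instead of as the original principal:
    System.out.println(URLDecoder.decode(twice, "UTF-8")); // user%2Fhost%40EXAMPLE.COM
  }
}
{code}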

> DelegationTokenAuthenticationFilter get wrong doAsUser since it does not 
> decode URL
> ---
>
> Key: HADOOP-15922
> URL: https://issues.apache.org/jira/browse/HADOOP-15922
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, kms
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
> Fix For: 3.3.0, 3.1.2, 3.2.1
>
> Attachments: HADOOP-15922.001.patch, HADOOP-15922.002.patch, 
> HADOOP-15922.003.patch, HADOOP-15922.004.patch
>
>
> DelegationTokenAuthenticationFilter get wrong doAsUser when proxy user from 
> client is complete kerberos name (e.g., user/hostn...@realm.com, actually it 
> is acceptable), because DelegationTokenAuthenticationFilter does not decode 
> DOAS parameter in URL which is encoded by {{URLEncoder}} at client.
> e.g. KMS as example:
> a. KMSClientProvider creates connection to KMS Server using 
> DelegationTokenAuthenticatedURL#openConnection.
> b. If KMSClientProvider is a doAsUser, KMSClientProvider will put {{doas}} 
> with url encoded user as one parameter of http request. 
> {code:java}
> // proxyuser
> if (doAs != null) {
>   extraParams.put(DO_AS, URLEncoder.encode(doAs, "UTF-8"));
> }
> {code}
> c. when KMS server receives the request, it does not decode the proxy user.
> As result, KMS Server will get the wrong proxy user if this proxy user is 
> complete Kerberos Name or it includes some special character. Some other 
> authentication and authorization exception will throws next to it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15960) Update guava to 27.0-jre in hadoop-common

2018-11-29 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16703348#comment-16703348
 ] 

Steve Loughran commented on HADOOP-15960:
-

If there's one thing that causes trouble, it's guava updates. 
Please search for predecessors here, link to them, and tread carefully.

> Update guava to 27.0-jre in hadoop-common
> -
>
> Key: HADOOP-15960
> URL: https://issues.apache.org/jira/browse/HADOOP-15960
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, security
>Affects Versions: 3.1.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
>
> com.google.guava:guava should be upgraded to 27.0-jre due to new CVE's found 
> CVE-2018-10237.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15229) Add FileSystem builder-based openFile() API to match createFile()

2018-11-29 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16703305#comment-16703305
 ] 

Steve Loughran commented on HADOOP-15229:
-

[~yuzhousun]

I'm ignoring the other formats in this JIRA, as that's going to take extra 
testing. What I am trying to do here is get the API right, using async file 
open and S3 Select as the validation.

w.r.t. output vs input formats, again, I've simplified my life here. There's no 
control whatsoever over what you get back, which lines code up for 
straightforward parsing.

I propose a followup patch, ideally with some stable, free-to-use test datasets 
for all supported formats. We make heavy use of the landsat.csv.gz file as it 
keeps costs low for anyone testing and there's no setup time, but it's 
unstable... once you start making queries off it, it's nice to know what's 
going to come back as valid answers.

[~owen.omalley]

Will do. Keeping impl stuff in its own package helps in Java 9+ too, doesn't 
it? Though for other FS impls, it'd still need to be public.
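
For readers following the API discussion, here is a minimal sketch of the call 
shape being validated, mirroring the createFile() builder pattern. The method 
and option names are taken from this issue's text and are assumptions, not 
necessarily the committed API.
{code:java}
import java.util.concurrent.CompletableFuture;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class OpenFileSketch {
  static FSDataInputStream openForRandomIO(FileSystem fs, Path path)
      throws Exception {
    // Builder-style open: options are hints, the open itself is async.
    CompletableFuture<FSDataInputStream> future = fs.openFile(path)
        .opt("fs.input.fadvise", "random") // columnar (ORC/Parquet) read pattern
        .build();
    return future.get(); // blocks until the (possibly lazy) open completes
  }
}
{code}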

> Add FileSystem builder-based openFile() API to match createFile()
> -
>
> Key: HADOOP-15229
> URL: https://issues.apache.org/jira/browse/HADOOP-15229
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, fs/azure, fs/s3
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-15229-001.patch, HADOOP-15229-002.patch, 
> HADOOP-15229-003.patch, HADOOP-15229-004.patch, HADOOP-15229-004.patch, 
> HADOOP-15229-005.patch, HADOOP-15229-006.patch
>
>
> Replicate HDFS-1170 and HADOOP-14365 with an API to open files.
> A key requirement of this is not HDFS, it's to put in the fadvise policy for 
> working with object stores, where getting the decision to do a full GET and 
> TCP abort on seek vs smaller GETs is fundamentally different: the wrong 
> option can cost you minutes. S3A and Azure both have adaptive policies now 
> (first backward seek), but they still don't do it that well.
> Columnar formats (ORC, Parquet) should be able to say "fs.input.fadvise" 
> "random" as an option when they open files; I can imagine other options too.
> The Builder model of [~eddyxu] is the one to mimic, method for method. 
> Ideally with as much code reuse as possible



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15960) Update guava to 27.0-jre in hadoop-common

2018-11-29 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15960:

Description: 
com.google.guava:guava should be upgraded to 27.0-jre due to new CVE's found 
CVE-2018-10237.



  was:com.google.guava:guava should be upgraded to 27.0-jre due to new CVE's 
found CVE-2018-10237


> Update guava to 27.0-jre in hadoop-common
> -
>
> Key: HADOOP-15960
> URL: https://issues.apache.org/jira/browse/HADOOP-15960
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, security
>Affects Versions: 3.1.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
>
> com.google.guava:guava should be upgraded to 27.0-jre due to new CVE's found 
> CVE-2018-10237.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15960) Update guava to 27.0-jre in hadoop-common

2018-11-29 Thread Gabor Bota (JIRA)
Gabor Bota created HADOOP-15960:
---

 Summary: Update guava to 27.0-jre in hadoop-common
 Key: HADOOP-15960
 URL: https://issues.apache.org/jira/browse/HADOOP-15960
 Project: Hadoop Common
  Issue Type: Bug
  Components: common, security
Affects Versions: 3.1.0
Reporter: Gabor Bota
Assignee: Gabor Bota


com.google.guava:guava should be upgraded to 27.0-jre due to new CVE's found 
CVE-2018-10237



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work started] (HADOOP-15960) Update guava to 27.0-jre in hadoop-common

2018-11-29 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-15960 started by Gabor Bota.
---
> Update guava to 27.0-jre in hadoop-common
> -
>
> Key: HADOOP-15960
> URL: https://issues.apache.org/jira/browse/HADOOP-15960
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, security
>Affects Versions: 3.1.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
>
> com.google.guava:guava should be upgraded to 27.0-jre due to new CVE's found 
> CVE-2018-10237.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15960) Update guava to 27.0-jre in hadoop-common

2018-11-29 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15960:

Description: com.google.guava:guava should be upgraded to 27.0-jre due to 
new CVE's found CVE-2018-10237.  (was: com.google.guava:guava should be 
upgraded to 27.0-jre due to new CVE's found CVE-2018-10237.

)

> Update guava to 27.0-jre in hadoop-common
> -
>
> Key: HADOOP-15960
> URL: https://issues.apache.org/jira/browse/HADOOP-15960
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, security
>Affects Versions: 3.1.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
>
> com.google.guava:guava should be upgraded to 27.0-jre due to new CVE's found 
> CVE-2018-10237.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15229) Add FileSystem builder-based openFile() API to match createFile()

2018-11-29 Thread Yuzhou Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16703217#comment-16703217
 ] 

Yuzhou Sun commented on HADOOP-15229:
-

Hey, just some trivial comments about the S3 Select part:
 * s3_select.md: S3 Select also supports the Parquet input format now.

 * outputSerialization (although I'm not sure whether this is the best jira to 
discuss it): would it be better to use the input serialization configuration as 
the default for output? E.g. if the input CSV field delimiter is set to `|`, 
the output will also use `|` unless specified; see the sketch after this list. 
A disadvantage is that this gets tricky if the input and output formats differ 
(e.g. input is Parquet but output is CSV).
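
A tiny sketch of the defaulting rule being suggested; the configuration key 
names below are assumptions for illustration only.
{code:java}
import org.apache.hadoop.conf.Configuration;

public class SelectDelimiterDefaults {
  // Hypothetical keys: fall back to the input delimiter when no output
  // delimiter has been explicitly configured.
  static String outputFieldDelimiter(Configuration conf) {
    String input = conf.get("fs.s3a.select.input.csv.field.delimiter", ",");
    return conf.get("fs.s3a.select.output.csv.field.delimiter", input);
  }
}
{code}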

> Add FileSystem builder-based openFile() API to match createFile()
> -
>
> Key: HADOOP-15229
> URL: https://issues.apache.org/jira/browse/HADOOP-15229
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, fs/azure, fs/s3
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-15229-001.patch, HADOOP-15229-002.patch, 
> HADOOP-15229-003.patch, HADOOP-15229-004.patch, HADOOP-15229-004.patch, 
> HADOOP-15229-005.patch, HADOOP-15229-006.patch
>
>
> Replicate HDFS-1170 and HADOOP-14365 with an API to open files.
> A key requirement of this is not HDFS, it's to put in the fadvise policy for 
> working with object stores, where getting the decision to do a full GET and 
> TCP abort on seek vs smaller GETs is fundamentally different: the wrong 
> option can cost you minutes. S3A and Azure both have adaptive policies now 
> (first backward seek), but they still don't do it that well.
> Columnar formats (ORC, Parquet) should be able to say "fs.input.fadvise" 
> "random" as an option when they open files; I can imagine other options too.
> The Builder model of [~eddyxu] is the one to mimic, method for method. 
> Ideally with as much code reuse as possible



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15959) revert HADOOP-12751

2018-11-29 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15959:

Attachment: HADOOP-15959-001.patch

> revert HADOOP-12751
> ---
>
> Key: HADOOP-15959
> URL: https://issues.apache.org/jira/browse/HADOOP-15959
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 3.2.0, 3.1.1, 2.9.2, 3.0.3, 2.7.7, 2.8.5
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-15959-001.patch
>
>
> HADOOP-12751 doesn't quite work right. Revert.
> (this patch is so jenkins can do the test runs)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15959) revert HADOOP-12751

2018-11-29 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15959:

Status: Patch Available  (was: Open)

> revert HADOOP-12751
> ---
>
> Key: HADOOP-15959
> URL: https://issues.apache.org/jira/browse/HADOOP-15959
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.5, 2.7.7, 3.0.3, 2.9.2, 3.1.1, 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-15959-001.patch
>
>
> HADOOP-12751 doesn't quite work right. Revert.
> (this patch is so jenkins can do the test runs)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-15845) s3guard init and destroy command will create/destroy tables if ddb.table & region are set

2018-11-29 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota reassigned HADOOP-15845:
---

Assignee: Gabor Bota

> s3guard init and destroy command will create/destroy tables if ddb.table & 
> region are set
> -
>
> Key: HADOOP-15845
> URL: https://issues.apache.org/jira/browse/HADOOP-15845
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.1
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Major
>
> If you have s3guard set up with a table name and a region, then s3guard init 
> will automatically create the table, without you specifying a bucket or URI.
> I had expected the command just to print out its arguments, but it actually 
> did the init with the default bucket values
> Even worse, `hadoop s3guard destroy` will destroy the table. 
> This is too dangerous to allow. The command must require either the name of a 
> bucket or an explicit ddb table URI.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15959) revert HADOOP-12751

2018-11-29 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-15959:
---

 Summary: revert HADOOP-12751
 Key: HADOOP-15959
 URL: https://issues.apache.org/jira/browse/HADOOP-15959
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.8.5, 2.7.7, 3.0.3, 2.9.2, 3.1.1, 3.2.0
Reporter: Steve Loughran
Assignee: Steve Loughran


HADOOP-12751 doesn't quite work right. Revert.

(this patch is so jenkins can do the test runs)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15957) WASB: Add asterisk wildcard support for PageBlobDirSet

2018-11-29 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16703085#comment-16703085
 ] 

Steve Loughran commented on HADOOP-15957:
-

Production code seems OK to my reading.

Test-wise, there's too much cut-and-paste of the asserts, without anything to 
help debug failures. I always prefer test cases where a failure in a jenkins 
run provides enough to track down the problem, and lines like
{code}
assertEquals(false, store.isPageBlobKey("data/dir/recovered.txt"));
assertEquals(true, store.isPageBlobKey("data/mypageblobfiles/recovered.txt"));
{code}

don't do much on their own.

How about a helper method along these lines:
{code}
private void expectPageBlobKey(boolean expected,
    AzureNativeFileSystemStore store, String path) {
  assertEquals("unexpected isPageBlobKey(" + path + ") result",
      expected, store.isPageBlobKey(path));
}

// with uses like
expectPageBlobKey(false, store, "data/dir/recovered.txt");
expectPageBlobKey(true, store, "data/mypageblobfiles/recovered.txt");

// ..etc

{code}

> WASB: Add asterisk wildcard support for PageBlobDirSet
> --
>
> Key: HADOOP-15957
> URL: https://issues.apache.org/jira/browse/HADOOP-15957
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Da Zhou
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-15957-001.patch
>
>
> In WASB, property "*fs.azure.page.blob.dir*" only support literal directory 
> name.
> We need to add support for wildcard '*' to represent for any directory name.
> For example, the following pattern should be supported:
> {code:java}
> /dir1/dir2 
> /dir1/*/dir3
> /dir1/*/*/dir4
> /dir1/*/*/file
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14556) S3A to support Delegation Tokens

2018-11-29 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16703063#comment-16703063
 ] 

Steve Loughran commented on HADOOP-14556:
-

One other comment on [~elgoiri]'s feedback


> IAMInstanceCredentialsProvider#getCredentials leaves comments behind.

I actually switched to the commented one, which wraps the exception raised by 
the IAM provider.

Why so? The default error message you get in the absence of any credentials is 
the "cannot connect to 169.xx.xx.xx" error from the IAM provider, which cannot 
talk to the IAM server. Because we have that on the default chain, and unless 
you are in an EC2 deployment (where it will never fail, as you always get the 
VM's credentials), it is guaranteed to fail. So I'm wrapping that as a 
{{NoAwsCredentialsException}}, as that's what it means. The error raising in 
{{AWSCredentialProviderList}} is tweaked to move from throwing the last 
exception to throwing the most recent exception which isn't just a 
{{NoAwsCredentialsException}}. 

That means if you have an auth chain where your DT plugin is failing for a 
complex reason, that failure gets thrown, even if you have a fallback chain of 
other things afterwards (env vars, etc.).
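
A rough illustration of the wrapping described above; this is a sketch under 
assumed names, and the exception type and constructor here are stand-ins for, 
not copies of, the patch's code.
{code:java}
import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.AWSCredentialsProvider;
import com.amazonaws.auth.EC2ContainerCredentialsProviderWrapper;

class NoAwsCredentialsException extends RuntimeException {
  NoAwsCredentialsException(String provider, String message, Throwable cause) {
    super(provider + ": " + message, cause);
  }
}

// Sketch: turn the IAM endpoint's obscure connection failure into an
// exception the fallback logic can recognise as "no credentials here".
class IAMInstanceCredentialsProvider implements AWSCredentialsProvider {
  private final AWSCredentialsProvider iam =
      new EC2ContainerCredentialsProviderWrapper();

  @Override
  public AWSCredentials getCredentials() {
    try {
      return iam.getCredentials();
    } catch (RuntimeException e) {
      // Outside EC2 the 169.254.169.254 lookup always fails; wrap it so the
      // chain can prefer a more meaningful failure from another provider.
      throw new NoAwsCredentialsException("IAMInstanceCredentials",
          e.getMessage(), e);
    }
  }

  @Override
  public void refresh() {
    iam.refresh();
  }
}
{code}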

> S3A to support Delegation Tokens
> 
>
> Key: HADOOP-14556
> URL: https://issues.apache.org/jira/browse/HADOOP-14556
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-14556-001.patch, HADOOP-14556-002.patch, 
> HADOOP-14556-003.patch, HADOOP-14556-004.patch, HADOOP-14556-005.patch, 
> HADOOP-14556-007.patch, HADOOP-14556-008.patch, HADOOP-14556-009.patch, 
> HADOOP-14556-010.patch, HADOOP-14556-010.patch, HADOOP-14556-011.patch, 
> HADOOP-14556-012.patch, HADOOP-14556-013.patch, HADOOP-14556-014.patch, 
> HADOOP-14556-015.patch, HADOOP-14556-016.patch, HADOOP-14556-017.patch, 
> HADOOP-14556-018a.patch, HADOOP-14556-019.patch, HADOOP-14556-020.patch, 
> HADOOP-14556-021.patch, HADOOP-14556-022.patch, HADOOP-14556-023.patch, 
> HADOOP-14556.oath-002.patch, HADOOP-14556.oath.patch
>
>
> S3A to support delegation tokens where
> * an authenticated client can request a token via 
> {{FileSystem.getDelegationToken()}}
> * Amazon's token service is used to request short-lived session secret & id; 
> these will be saved in the token and  marshalled with jobs
> * A new authentication provider will look for a token for the current user 
> and authenticate the user if found
> This will not support renewals; the lifespan of a token will be limited to 
> the initial duration. Also, as you can't request an STS token from a 
> temporary session, IAM instances won't be able to issue tokens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15893) fs.TrashPolicyDefault: can't create trash directory and race condition

2018-11-29 Thread sunlisheng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunlisheng updated HADOOP-15893:

Description: 
there is race condition in method moveToTrash class TrashPolicyDefault

try {
 if (!fs.mkdirs(baseTrashPath, PERMISSION))

{ // create current LOG.warn("Can't create(mkdir) trash directory: " + 
baseTrashPath); return false; }

} catch (FileAlreadyExistsException e) {
 // find the path which is not a directory, and modify baseTrashPath
 // & trashPath, then mkdirs
 Path existsFilePath = baseTrashPath;
 while (!fs.exists(existsFilePath))

{ existsFilePath = existsFilePath.getParent(); }

{color:#ff}// case{color}

{color:#ff}  other thread deletes existsFilePath here ,the results doesn't  
meet expectation{color}

{color:#ff} for example{color}

{color:#ff}   there is 
/user/u_sunlisheng/.Trash/Current/user/u_sunlisheng/b{color}

{color:#ff}   when delete /user/u_sunlisheng/b/a. if existsFilePath is 
deleted, the result is 
/user/u_sunlisheng/.Trash/Current/user/u_sunlisheng+timstamp/b/a{color}

{color:#ff}  so  when existsFilePath is deleted, don't modify 
baseTrashPath.    {color}

baseTrashPath = new Path(baseTrashPath.toString().replace(
 existsFilePath.toString(), existsFilePath.toString() + Time.now())
 );

trashPath = new Path(baseTrashPath, trashPath.getName());
 // retry, ignore current failure
 --i;
 continue;
 } catch (IOException e)

{ LOG.warn("Can't create trash directory: " + baseTrashPath, e); cause = e; 
break; }

 

  was:
there is race condition in method moveToTrash class TrashPolicyDefault

try {
 if (!fs.mkdirs(baseTrashPath, PERMISSION))

{ // create current LOG.warn("Can't create(mkdir) trash directory: " + 
baseTrashPath); return false; }

} catch (FileAlreadyExistsException e) {
 // find the path which is not a directory, and modify baseTrashPath
 // & trashPath, then mkdirs
 Path existsFilePath = baseTrashPath;
 while (!fs.exists(existsFilePath))

{ existsFilePath = existsFilePath.getParent(); }

{color:#FF}// case{color}

{color:#FF}  other thread deletes existsFilePath here ,the results doesn't  
meet expectation{color}

{color:#FF} for example{color}

{color:#FF}   there is 
/user/u_sunlisheng/.Trash/Current/user/u_sunlisheng/b{color}

{color:#FF}   when delete /user/u_sunlisheng/b/a. if existsFilePath is 
deleted, the result is 
/user/u_sunlisheng/.Trash/Current/user/u_sunlisheng+timstamp/b/a{color}

{color:#FF}  so  when existsFilePath is deleted, don't modify 
baseTrashPath.    {color}

baseTrashPath = new Path(baseTrashPath.toString().replace(
 existsFilePath.toString(), existsFilePath.toString() + Time.now())
 );

trashPath = new Path(baseTrashPath, trashPath.getName());
 // retry, ignore current failure
 --i;
 continue;
 } catch (IOException e)

{ LOG.warn("Can't create trash directory: " + baseTrashPath, e); cause = e; 
break; }

After catch FileAlreadyExistsException, the  same name file is deleted.

So don't modify baseTrashPath when existsFilePath is deleted.

But this case show hardly in untitest.


> fs.TrashPolicyDefault: can't create trash directory and race condition
> --
>
> Key: HADOOP-15893
> URL: https://issues.apache.org/jira/browse/HADOOP-15893
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: sunlisheng
>Priority: Major
> Attachments: HADOOP-15893.001.patch, HADOOP-15893.002.patch
>
>
> there is race condition in method moveToTrash class TrashPolicyDefault
> try {
>  if (!fs.mkdirs(baseTrashPath, PERMISSION))
> { // create current LOG.warn("Can't create(mkdir) trash directory: " + 
> baseTrashPath); return false; }
> } catch (FileAlreadyExistsException e) {
>  // find the path which is not a directory, and modify baseTrashPath
>  // & trashPath, then mkdirs
>  Path existsFilePath = baseTrashPath;
>  while (!fs.exists(existsFilePath))
> { existsFilePath = existsFilePath.getParent(); }
> {color:#ff}// case{color}
> {color:#ff}  other thread deletes existsFilePath here ,the results 
> doesn't  meet expectation{color}
> {color:#ff} for example{color}
> {color:#ff}   there is 
> /user/u_sunlisheng/.Trash/Current/user/u_sunlisheng/b{color}
> {color:#ff}   when delete /user/u_sunlisheng/b/a. if existsFilePath is 
> deleted, the result is 
> /user/u_sunlisheng/.Trash/Current/user/u_sunlisheng+timstamp/b/a{color}
> {color:#ff}  so  when existsFilePath is deleted, don't modify 
> baseTrashPath.    {color}
> baseTrashPath = new Path(baseTrashPath.toString().replace(
>  existsFilePath.toString(), existsFilePath.toString() + Time.now())
>  );
> trashPath = new Path(baseTrashPath, trashPath.getName());
>  // retry, ignore current failure
>  --i;
>  continue;
>  } catch (IOException e)
> { LOG.warn("Can't 

[jira] [Commented] (HADOOP-15893) fs.TrashPolicyDefault: can't create trash directory and race condition

2018-11-29 Thread sunlisheng (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16702927#comment-16702927
 ] 

sunlisheng commented on HADOOP-15893:
-

I've added a patch.

> fs.TrashPolicyDefault: can't create trash directory and race condition
> --
>
> Key: HADOOP-15893
> URL: https://issues.apache.org/jira/browse/HADOOP-15893
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: sunlisheng
>Priority: Major
> Attachments: HADOOP-15893.001.patch, HADOOP-15893.002.patch
>
>
> there is race condition in method moveToTrash class TrashPolicyDefault
> try {
>  if (!fs.mkdirs(baseTrashPath, PERMISSION))
> { // create current LOG.warn("Can't create(mkdir) trash directory: " + 
> baseTrashPath); return false; }
> } catch (FileAlreadyExistsException e) {
>  // find the path which is not a directory, and modify baseTrashPath
>  // & trashPath, then mkdirs
>  Path existsFilePath = baseTrashPath;
>  while (!fs.exists(existsFilePath))
> { existsFilePath = existsFilePath.getParent(); }
> {color:#FF}// case{color}
> {color:#FF}  other thread deletes existsFilePath here ,the results 
> doesn't  meet expectation{color}
> {color:#FF} for example{color}
> {color:#FF}   there is 
> /user/u_sunlisheng/.Trash/Current/user/u_sunlisheng/b{color}
> {color:#FF}   when delete /user/u_sunlisheng/b/a. if existsFilePath is 
> deleted, the result is 
> /user/u_sunlisheng/.Trash/Current/user/u_sunlisheng+timstamp/b/a{color}
> {color:#FF}  so  when existsFilePath is deleted, don't modify 
> baseTrashPath.    {color}
> baseTrashPath = new Path(baseTrashPath.toString().replace(
>  existsFilePath.toString(), existsFilePath.toString() + Time.now())
>  );
> trashPath = new Path(baseTrashPath, trashPath.getName());
>  // retry, ignore current failure
>  --i;
>  continue;
>  } catch (IOException e)
> { LOG.warn("Can't create trash directory: " + baseTrashPath, e); cause = e; 
> break; }
> After catch FileAlreadyExistsException, the  same name file is deleted.
> So don't modify baseTrashPath when existsFilePath is deleted.
> But this case show hardly in untitest.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15893) fs.TrashPolicyDefault: can't create trash directory and race condition

2018-11-29 Thread sunlisheng (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16702928#comment-16702928
 ] 

sunlisheng commented on HADOOP-15893:
-

{code:java}
try {
  if (!fs.mkdirs(baseTrashPath, PERMISSION)) { // create current
    LOG.warn("Can't create(mkdir) trash directory: " + baseTrashPath);
    return false;
  }
} catch (FileAlreadyExistsException e) {
  // find the path which is not a directory, and modify baseTrashPath
  // & trashPath, then mkdirs
  Path existsFilePath = baseTrashPath;
  while (!fs.exists(existsFilePath)) {
    existsFilePath = existsFilePath.getParent();
  }

  // Case: don't modify baseTrashPath when existsFilePath has been deleted
  try {
    FileStatus fileStatus = fs.getFileStatus(existsFilePath);
    if (!fileStatus.isDirectory()) {
      baseTrashPath = new Path(baseTrashPath.toString().replace(
          existsFilePath.toString(), existsFilePath.toString() + Time.now()));
      trashPath = new Path(baseTrashPath, trashPath.getName());
    }
  } catch (FileNotFoundException e1) {
    LOG.warn("The existFilePath is not found " + existsFilePath);
  }

  // retry, ignore current failure
  --i;
  continue;
} catch (IOException e) {
  LOG.warn("Can't create trash directory: " + baseTrashPath);
  cause = e;
  break;
}
{code}

> fs.TrashPolicyDefault: can't create trash directory and race condition
> --
>
> Key: HADOOP-15893
> URL: https://issues.apache.org/jira/browse/HADOOP-15893
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: sunlisheng
>Priority: Major
> Attachments: HADOOP-15893.001.patch, HADOOP-15893.002.patch
>
>
> there is race condition in method moveToTrash class TrashPolicyDefault
> try {
>  if (!fs.mkdirs(baseTrashPath, PERMISSION))
> { // create current LOG.warn("Can't create(mkdir) trash directory: " + 
> baseTrashPath); return false; }
> } catch (FileAlreadyExistsException e) {
>  // find the path which is not a directory, and modify baseTrashPath
>  // & trashPath, then mkdirs
>  Path existsFilePath = baseTrashPath;
>  while (!fs.exists(existsFilePath))
> { existsFilePath = existsFilePath.getParent(); }
> {color:#FF}// case{color}
> {color:#FF}  other thread deletes existsFilePath here ,the results 
> doesn't  meet expectation{color}
> {color:#FF} for example{color}
> {color:#FF}   there is 
> /user/u_sunlisheng/.Trash/Current/user/u_sunlisheng/b{color}
> {color:#FF}   when delete /user/u_sunlisheng/b/a. if existsFilePath is 
> deleted, the result is 
> /user/u_sunlisheng/.Trash/Current/user/u_sunlisheng+timstamp/b/a{color}
> {color:#FF}  so  when existsFilePath is deleted, don't modify 
> baseTrashPath.    {color}
> baseTrashPath = new Path(baseTrashPath.toString().replace(
>  existsFilePath.toString(), existsFilePath.toString() + Time.now())
>  );
> trashPath = new Path(baseTrashPath, trashPath.getName());
>  // retry, ignore current failure
>  --i;
>  continue;
>  } catch (IOException e)
> { LOG.warn("Can't create trash directory: " + baseTrashPath, e); cause = e; 
> break; }
> After catch FileAlreadyExistsException, the  same name file is deleted.
> So don't modify baseTrashPath when existsFilePath is deleted.
> But this case show hardly in untitest.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15893) fs.TrashPolicyDefault: can't create trash directory and race condition

2018-11-29 Thread sunlisheng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunlisheng updated HADOOP-15893:

Description: 
there is race condition in method moveToTrash class TrashPolicyDefault

try {
 if (!fs.mkdirs(baseTrashPath, PERMISSION))

{ // create current LOG.warn("Can't create(mkdir) trash directory: " + 
baseTrashPath); return false; }

} catch (FileAlreadyExistsException e) {
 // find the path which is not a directory, and modify baseTrashPath
 // & trashPath, then mkdirs
 Path existsFilePath = baseTrashPath;
 while (!fs.exists(existsFilePath))

{ existsFilePath = existsFilePath.getParent(); }

{color:#FF}// case{color}

{color:#FF}  other thread deletes existsFilePath here ,the results doesn't  
meet expectation{color}

{color:#FF} for example{color}

{color:#FF}   there is 
/user/u_sunlisheng/.Trash/Current/user/u_sunlisheng/b{color}

{color:#FF}   when delete /user/u_sunlisheng/b/a. if existsFilePath is 
deleted, the result is 
/user/u_sunlisheng/.Trash/Current/user/u_sunlisheng+timstamp/b/a{color}

{color:#FF}  so  when existsFilePath is deleted, don't modify 
baseTrashPath.    {color}

baseTrashPath = new Path(baseTrashPath.toString().replace(
 existsFilePath.toString(), existsFilePath.toString() + Time.now())
 );

trashPath = new Path(baseTrashPath, trashPath.getName());
 // retry, ignore current failure
 --i;
 continue;
 } catch (IOException e)

{ LOG.warn("Can't create trash directory: " + baseTrashPath, e); cause = e; 
break; }

After catch FileAlreadyExistsException, the  same name file is deleted.

So don't modify baseTrashPath when existsFilePath is deleted.

But this case hardly shows up in a unit test.

  was:
there is race condition in method moveToTrash class TrashPolicyDefault

try {
 if (!fs.mkdirs(baseTrashPath, PERMISSION)) { // create current
 LOG.warn("Can't create(mkdir) trash directory: " + baseTrashPath);
 return false;
 }
} catch (FileAlreadyExistsException e) {
 // find the path which is not a directory, and modify baseTrashPath
 // & trashPath, then mkdirs
 Path existsFilePath = baseTrashPath;
 while (!fs.exists(existsFilePath)) {
 existsFilePath = existsFilePath.getParent();
 }

{color:#FF}// case delete existsFilePath here {color}

baseTrashPath = new Path(baseTrashPath.toString().replace(
 existsFilePath.toString(), existsFilePath.toString() + Time.now())
 );


 trashPath = new Path(baseTrashPath, trashPath.getName());
 // retry, ignore current failure
 --i;
 continue;
} catch (IOException e) {
 LOG.warn("Can't create trash directory: " + baseTrashPath, e);
 cause = e;
 break;
}

After catch FileAlreadyExistsException, the  same name file is deleted.

So don't modify baseTrashPath when existsFilePath is deleted.

But this case show hardly in untitest.


> fs.TrashPolicyDefault: can't create trash directory and race condition
> --
>
> Key: HADOOP-15893
> URL: https://issues.apache.org/jira/browse/HADOOP-15893
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: sunlisheng
>Priority: Major
> Attachments: HADOOP-15893.001.patch, HADOOP-15893.002.patch
>
>
> There is a race condition in the moveToTrash method of class TrashPolicyDefault:
> try {
>   if (!fs.mkdirs(baseTrashPath, PERMISSION)) { // create current
>     LOG.warn("Can't create(mkdir) trash directory: " + baseTrashPath);
>     return false;
>   }
> } catch (FileAlreadyExistsException e) {
>   // find the path which is not a directory, and modify baseTrashPath
>   // & trashPath, then mkdirs
>   Path existsFilePath = baseTrashPath;
>   while (!fs.exists(existsFilePath)) {
>     existsFilePath = existsFilePath.getParent();
>   }
>   // RACE CASE: another thread may delete existsFilePath here, and then
>   // the result does not meet expectations. For example, given
>   //   /user/u_sunlisheng/.Trash/Current/user/u_sunlisheng/b
>   // when deleting /user/u_sunlisheng/b/a, if existsFilePath has been
>   // deleted the file ends up at
>   //   /user/u_sunlisheng/.Trash/Current/user/u_sunlisheng<timestamp>/b/a
>   // So when existsFilePath has been deleted, don't modify baseTrashPath.
>   baseTrashPath = new Path(baseTrashPath.toString().replace(
>       existsFilePath.toString(), existsFilePath.toString() + Time.now()));
>   trashPath = new Path(baseTrashPath, trashPath.getName());
>   // retry, ignore current failure
>   --i;
>   continue;
> } catch (IOException e) {
>   LOG.warn("Can't create trash directory: " + baseTrashPath, e);
>   cause = e;
>   break;
> }
> After FileAlreadyExistsException is caught, the file with the same name may already have been deleted by another thread.
> So don't modify baseTrashPath when existsFilePath has been deleted.
> But this case is hard to reproduce in a unit test.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

[jira] [Updated] (HADOOP-15893) fs.TrashPolicyDefault: can't create trash directory and race condition

2018-11-29 Thread sunlisheng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunlisheng updated HADOOP-15893:

Description: 
There is a race condition in the moveToTrash method of class TrashPolicyDefault:

try {
  if (!fs.mkdirs(baseTrashPath, PERMISSION)) { // create current
    LOG.warn("Can't create(mkdir) trash directory: " + baseTrashPath);
    return false;
  }
} catch (FileAlreadyExistsException e) {
  // find the path which is not a directory, and modify baseTrashPath
  // & trashPath, then mkdirs
  Path existsFilePath = baseTrashPath;
  while (!fs.exists(existsFilePath)) {
    existsFilePath = existsFilePath.getParent();
  }

  // case: another thread deletes existsFilePath here

  baseTrashPath = new Path(baseTrashPath.toString().replace(
      existsFilePath.toString(), existsFilePath.toString() + Time.now()));

  trashPath = new Path(baseTrashPath, trashPath.getName());
  // retry, ignore current failure
  --i;
  continue;
} catch (IOException e) {
  LOG.warn("Can't create trash directory: " + baseTrashPath, e);
  cause = e;
  break;
}

After FileAlreadyExistsException is caught, the file with the same name may already have been deleted by another thread.

So don't modify baseTrashPath when existsFilePath has been deleted.

But this case is hard to reproduce in a unit test.

  was:
There is a race condition in the moveToTrash method of class TrashPolicyDefault.

After FileAlreadyExistsException is caught, the file with the same name may already have been deleted by another thread.

So don't modify baseTrashPath when existsFilePath has been deleted.

But this case is hard to reproduce in a unit test.


> fs.TrashPolicyDefault: can't create trash directory and race condition
> --
>
> Key: HADOOP-15893
> URL: https://issues.apache.org/jira/browse/HADOOP-15893
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: sunlisheng
>Priority: Major
> Attachments: HADOOP-15893.001.patch, HADOOP-15893.002.patch
>
>
> There is a race condition in the moveToTrash method of class TrashPolicyDefault:
> try {
>   if (!fs.mkdirs(baseTrashPath, PERMISSION)) { // create current
>     LOG.warn("Can't create(mkdir) trash directory: " + baseTrashPath);
>     return false;
>   }
> } catch (FileAlreadyExistsException e) {
>   // find the path which is not a directory, and modify baseTrashPath
>   // & trashPath, then mkdirs
>   Path existsFilePath = baseTrashPath;
>   while (!fs.exists(existsFilePath)) {
>     existsFilePath = existsFilePath.getParent();
>   }
>   // case: another thread deletes existsFilePath here
>   baseTrashPath = new Path(baseTrashPath.toString().replace(
>       existsFilePath.toString(), existsFilePath.toString() + Time.now()));
>   trashPath = new Path(baseTrashPath, trashPath.getName());
>   // retry, ignore current failure
>   --i;
>   continue;
> } catch (IOException e) {
>   LOG.warn("Can't create trash directory: " + baseTrashPath, e);
>   cause = e;
>   break;
> }
> After FileAlreadyExistsException is caught, the file with the same name may already have been deleted by another thread.
> So don't modify baseTrashPath when existsFilePath has been deleted.
> But this case is hard to reproduce in a unit test.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15893) fs.TrashPolicyDefault: can't create trash directory and race condition

2018-11-29 Thread sunlisheng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunlisheng updated HADOOP-15893:

Description: 
There is a race condition in the moveToTrash method of class TrashPolicyDefault.

After FileAlreadyExistsException is caught, the file with the same name may already have been deleted by another thread.

So don't modify baseTrashPath when existsFilePath has been deleted.

But this case is hard to reproduce in a unit test.

  was:
After FileAlreadyExistsException is caught, the file with the same name may already have been deleted by another thread.

So don't modify baseTrashPath when existsFilePath has been deleted.

But this case is hard to reproduce in a unit test.


> fs.TrashPolicyDefault: can't create trash directory and race condition
> --
>
> Key: HADOOP-15893
> URL: https://issues.apache.org/jira/browse/HADOOP-15893
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: sunlisheng
>Priority: Major
> Attachments: HADOOP-15893.001.patch, HADOOP-15893.002.patch
>
>
> There is a race condition in the moveToTrash method of class TrashPolicyDefault.
> After FileAlreadyExistsException is caught, the file with the same name may already have been deleted by another thread.
> So don't modify baseTrashPath when existsFilePath has been deleted.
> But this case is hard to reproduce in a unit test.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org