[jira] [Comment Edited] (HADOOP-16971) testFileContextResolveAfs creates dangling link and fails for subsequent runs

2020-04-17 Thread Ctest (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17086303#comment-17086303
 ] 

Ctest edited comment on HADOOP-16971 at 4/18/20, 5:10 AM:
--

[~ayushtkn] Sorry for the confusion.

To reproduce: Run the test 
org.apache.hadoop.fs.TestFileContextResolveAfs#testFileContextResolveAfs. 

The error, in essence: the test creates a file and a symlink to the file, but 
it accidentally turns the symlink into a dangling link, and the current 
FileSystem in Hadoop cannot delete dangling links. So the test fails on the 
second run when it tries to create the same symlink, because the symlink it 
intends to create already exists (created by the first run).


was (Author: ctest.team):
[~ayushtkn] Sorry for the confusion.

To reproduce: Run the test 
org.apache.hadoop.fs.TestFileContextResolveAfs#testFileContextResolveAfs. 

The error basically is: the test creates a file and a symlink to the file, but 
it accidentally made the symlink into a dangling link. And current FileSystem 
in Hadoop is unable to delete dangling links. So, the test will fail in the 
second run trying to link the file because the symlink it intends to create has 
already existed (created in the first run).

> testFileContextResolveAfs creates dangling link and fails for subsequent runs
> -
>
> Key: HADOOP-16971
> URL: https://issues.apache.org/jira/browse/HADOOP-16971
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, fs, test
>Affects Versions: 3.2.1, 3.4.0
>Reporter: Ctest
>Priority: Minor
>  Labels: easyfix, fs, symlink, test
> Attachments: HADOOP-16971.000.patch
>
>
> In the test testFileContextResolveAfs, the symlink TestFileContextResolveAfs2 
> (linked to TestFileContextResolveAfs1) cannot be deleted when the test 
> finishes.
> This is because TestFileContextResolveAfs1 was always deleted before 
> TestFileContextResolveAfs2 when they were both passed into 
> FileSystem#deleteOnExit. This caused TestFileContextResolveAfs2 to become a 
> dangling link, which FileSystem in Hadoop currently cannot delete. (This is 
> because Files#exists will return false for dangling links.)
> As a result, the test testFileContextResolveAfs only passes on the first 
> run; later runs fail with the following exception: 
> {code:java}
> fs.FileUtil (FileUtil.java:symLink(821)) - Command 'ln -s 
> mypath/TestFileContextResolveAfs1 mypath/TestFileContextResolveAfs2' failed 1 
> with: ln: mypath/TestFileContextResolveAfs2: File exists
> java.io.IOException: Error 1 creating symlink 
> file:mypath/TestFileContextResolveAfs2 to mypath/TestFileContextResolveAfs1
> {code}
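For illustration, here is a standalone JDK sketch (not part of the patch) of why an exists()-based delete misses a dangling link; the class and temp-file names are invented for this example:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.LinkOption;
import java.nio.file.Path;

public class DanglingLinkDemo {
    // Creates a file and a symlink to it, deletes the target first (as the
    // deleteOnExit ordering does in the test), then probes the dangling link.
    // Returns {existsFollowingLinks, existsNoFollowLinks}.
    static boolean[] probeDanglingLink() throws IOException {
        Path dir = Files.createTempDirectory("afs-demo");
        Path target = dir.resolve("TestFileContextResolveAfs1");
        Path link = dir.resolve("TestFileContextResolveAfs2");
        Files.createFile(target);
        Files.createSymbolicLink(link, target);
        Files.delete(target); // the link now dangles
        boolean[] result = {
            Files.exists(link),                            // follows the link: false
            Files.exists(link, LinkOption.NOFOLLOW_LINKS)  // probes the link itself: true
        };
        Files.delete(link); // delete() operates on the link itself, so cleanup works
        Files.delete(dir);
        return result;
    }

    public static void main(String[] args) throws IOException {
        boolean[] r = probeDanglingLink();
        System.out.println("exists (default): " + r[0] + ", NOFOLLOW_LINKS: " + r[1]);
    }
}
```

An exists() check that follows links reports false for the dangling link, so delete logic guarded by it silently skips the link; probing with NOFOLLOW_LINKS (or simply attempting the delete) avoids the problem.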



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16971) testFileContextResolveAfs creates dangling link and fails for subsequent runs

2020-04-17 Thread Ctest (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17086303#comment-17086303
 ] 

Ctest commented on HADOOP-16971:


[~ayushtkn] Sorry for the confusion.

To reproduce: Run the test 
org.apache.hadoop.fs.TestFileContextResolveAfs#testFileContextResolveAfs. 

The error, in essence: the test creates a file and a symlink to the file, but 
it accidentally turns the symlink into a dangling link, and the current 
FileSystem in Hadoop cannot delete dangling links. So the test fails on the 
second run when it tries to create the same symlink, because the symlink it 
intends to create already exists (created by the first run).

> testFileContextResolveAfs creates dangling link and fails for subsequent runs
> -
>
> Key: HADOOP-16971
> URL: https://issues.apache.org/jira/browse/HADOOP-16971
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, fs, test
>Affects Versions: 3.2.1, 3.4.0
>Reporter: Ctest
>Priority: Minor
>  Labels: easyfix, fs, symlink, test
> Attachments: HADOOP-16971.000.patch
>
>
> In the test testFileContextResolveAfs, the symlink TestFileContextResolveAfs2 
> (linked to TestFileContextResolveAfs1) cannot be deleted when the test 
> finishes.
> This is because TestFileContextResolveAfs1 was always deleted before 
> TestFileContextResolveAfs2 when they were both passed into 
> FileSystem#deleteOnExit. This caused TestFileContextResolveAfs2 to become a 
> dangling link, which FileSystem in Hadoop currently cannot delete. (This is 
> because Files#exists will return false for dangling links.)
> As a result, the test testFileContextResolveAfs only passes on the first 
> run; later runs fail with the following exception: 
> {code:java}
> fs.FileUtil (FileUtil.java:symLink(821)) - Command 'ln -s 
> mypath/TestFileContextResolveAfs1 mypath/TestFileContextResolveAfs2' failed 1 
> with: ln: mypath/TestFileContextResolveAfs2: File exists
> java.io.IOException: Error 1 creating symlink 
> file:mypath/TestFileContextResolveAfs2 to mypath/TestFileContextResolveAfs1
> {code}






[GitHub] [hadoop] liuml07 commented on issue #1949: HADOOP-16964. Modify constant for AbstractFileSystem

2020-04-17 Thread GitBox
liuml07 commented on issue #1949: HADOOP-16964. Modify constant for 
AbstractFileSystem
URL: https://github.com/apache/hadoop/pull/1949#issuecomment-615554195
 
 
   @20100507 Is this PR closed? If you make the change, please also add spaces 
before and after `+`, so it would be 
   ```
   return new Path(CommonConfigurationKeys.FS_HOME_DIR_DEFAULT + "/" + username)
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16993) Hadoop 3.1.2 download link is broken

2020-04-17 Thread Mingliang Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-16993:
---
Hadoop Flags: Reviewed
  Resolution: Fixed
  Status: Resolved  (was: Patch Available)

> Hadoop 3.1.2 download link is broken
> 
>
> Key: HADOOP-16993
> URL: https://issues.apache.org/jira/browse/HADOOP-16993
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: website
>Reporter: Arpit Agarwal
>Assignee: Akira Ajisaka
>Priority: Major
>
> Remove broken Hadoop 3.1.2 download links from the website.
> https://hadoop.apache.org/releases.html






[jira] [Created] (HADOOP-16997) Provide GroupIdentityProvider and MappedUserIdentityProvider for FairCallQueue namenode RPC Queue throttling for grouping user requests

2020-04-17 Thread Vijay Singh (Jira)
Vijay Singh created HADOOP-16997:


 Summary: Provide GroupIdentityProvider and 
MappedUserIdentityProvider for FairCallQueue namenode RPC Queue throttling for 
grouping user requests
 Key: HADOOP-16997
 URL: https://issues.apache.org/jira/browse/HADOOP-16997
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.0.3
Reporter: Vijay Singh
 Fix For: 3.0.4


Currently, in a multi-tenant cluster, FairCallQueue identity is limited to 
UserIdentityProvider. Tenants tend to get past burst RPC-load throttling by 
using different service IDs. This Jira requests that GroupIdentityProvider and 
MappedUserIdentityProvider be implemented to allow a better experience for 
clusters with multiple tenants.
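As a minimal illustration of the idea, a hypothetical standalone sketch (this is not Hadoop's IdentityProvider API; the class name and the user-to-group mapping are invented for this example):

```java
import java.util.Map;

// Hypothetical sketch: derive a throttling identity from a mapping of user
// (service ID) to tenant/group, so FairCallQueue-style accounting charges the
// tenant as a whole instead of each service ID separately.
public class MappedUserIdentitySketch {
    private final Map<String, String> userToGroup;
    private final String defaultGroup;

    public MappedUserIdentitySketch(Map<String, String> userToGroup, String defaultGroup) {
        this.userToGroup = userToGroup;
        this.defaultGroup = defaultGroup;
    }

    // Analogous in spirit to IdentityProvider#makeIdentity: two service IDs
    // of the same tenant resolve to one identity, so a burst cannot be split
    // across service IDs to evade throttling.
    public String makeIdentity(String userName) {
        return userToGroup.getOrDefault(userName, defaultGroup);
    }
}
```

With "svc-a" and "svc-b" both mapped to "tenant1", requests from either service ID accrue to the same queue identity.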






[jira] [Resolved] (HADOOP-16996) ---

2020-04-17 Thread Maziar Mirzazad (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maziar Mirzazad resolved HADOOP-16996.
--
Resolution: Invalid

> ---
> ---
>
> Key: HADOOP-16996
> URL: https://issues.apache.org/jira/browse/HADOOP-16996
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Maziar Mirzazad
>Priority: Minor
>
> --
>  






[jira] [Updated] (HADOOP-16996) ---

2020-04-17 Thread Maziar Mirzazad (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maziar Mirzazad updated HADOOP-16996:
-
Summary: ---  (was: Add capability in hadoop-client to automatically login 
from a client/service keytab)

> ---
> ---
>
> Key: HADOOP-16996
> URL: https://issues.apache.org/jira/browse/HADOOP-16996
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Maziar Mirzazad
>Priority: Minor
>
> With the current hadoop client implementation, every single service needs to 
> add UGI.loginFromKeyTab() before doing HDFS or M/R API calls.
> To avoid that, we propose adding keytab-based login to the hadoop client 
> library for Kerberized clusters, with configurable default paths for keytabs.
> This improvement should avoid extra login attempts when a valid TGT is 
> already available.






[jira] [Updated] (HADOOP-16996) ---

2020-04-17 Thread Maziar Mirzazad (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maziar Mirzazad updated HADOOP-16996:
-
Description: 
--

 

  was:
With current hadoop client implementation, every single service need to add 
UGI.loginFromKeyTab() before doing HDFS or M/R API calls.

To avoid that, we are proposing adding Keytab based login to hadoop client 
library for Kerberized clusters with configurable default paths for Keytabs.

This improvement should avoid extra login tries in case a valid TGT is 
available.


> ---
> ---
>
> Key: HADOOP-16996
> URL: https://issues.apache.org/jira/browse/HADOOP-16996
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Maziar Mirzazad
>Priority: Minor
>
> --
>  






[jira] [Updated] (HADOOP-16996) Add capability in hadoop-client to automatically login from a client/service keytab

2020-04-17 Thread Maziar Mirzazad (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maziar Mirzazad updated HADOOP-16996:
-
Fix Version/s: (was: 2.9.2)

> Add capability in hadoop-client to automatically login from a client/service 
> keytab
> ---
>
> Key: HADOOP-16996
> URL: https://issues.apache.org/jira/browse/HADOOP-16996
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.0.0-alpha
>Reporter: Maziar Mirzazad
>Priority: Minor
>
> With the current hadoop client implementation, every single service needs to 
> add UGI.loginFromKeyTab() before doing HDFS or M/R API calls.
> To avoid that, we propose adding keytab-based login to the hadoop client 
> library for Kerberized clusters, with configurable default paths for keytabs.
> This improvement should avoid extra login attempts when a valid TGT is 
> already available.






[jira] [Updated] (HADOOP-16996) Add capability in hadoop-client to automatically login from a client/service keytab

2020-04-17 Thread Maziar Mirzazad (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maziar Mirzazad updated HADOOP-16996:
-
Affects Version/s: (was: 2.0.0-alpha)

> Add capability in hadoop-client to automatically login from a client/service 
> keytab
> ---
>
> Key: HADOOP-16996
> URL: https://issues.apache.org/jira/browse/HADOOP-16996
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Maziar Mirzazad
>Priority: Minor
>
> With the current hadoop client implementation, every single service needs to 
> add UGI.loginFromKeyTab() before doing HDFS or M/R API calls.
> To avoid that, we propose adding keytab-based login to the hadoop client 
> library for Kerberized clusters, with configurable default paths for keytabs.
> This improvement should avoid extra login attempts when a valid TGT is 
> already available.






[jira] [Updated] (HADOOP-16996) Add capability in hadoop-client to automatically login from a client/service keytab

2020-04-17 Thread Maziar Mirzazad (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maziar Mirzazad updated HADOOP-16996:
-
Component/s: (was: security)

> Add capability in hadoop-client to automatically login from a client/service 
> keytab
> ---
>
> Key: HADOOP-16996
> URL: https://issues.apache.org/jira/browse/HADOOP-16996
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.0.0-alpha
>Reporter: Maziar Mirzazad
>Priority: Minor
> Fix For: 2.9.2
>
>
> With the current hadoop client implementation, every single service needs to 
> add UGI.loginFromKeyTab() before doing HDFS or M/R API calls.
> To avoid that, we propose adding keytab-based login to the hadoop client 
> library for Kerberized clusters, with configurable default paths for keytabs.
> This improvement should avoid extra login attempts when a valid TGT is 
> already available.






[jira] [Updated] (HADOOP-16996) Add capability in hadoop-client to automatically login from a client/service keytab

2020-04-17 Thread Maziar Mirzazad (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maziar Mirzazad updated HADOOP-16996:
-
Description: 
With current hadoop client implementation, every single service need to add 
UGI.loginFromKeyTab() before doing HDFS or M/R API calls.

To avoid that, we are proposing adding Keytab based login to hadoop client 
library for Kerberized clusters with configurable default paths for Keytabs.

This improvement should avoid extra login tries in case a valid TGT is 
available.

  was:
At Twitter we are planning to Kerberize our hadoop infrastructure, and we have 
many services that are going to use those clusters.

With current hadoop client implementation, every single service need to change 
the application and add UGI.loginFromKeyTab() before doing HDFS or M/R API 
calls.

To avoid that, we are proposing adding Keytab based login to hadoop client 
library for Kerberized clusters with configurable default paths for Keytabs.

This improvement should avoid extra login tries in case a valid TGT is 
available.


> Add capability in hadoop-client to automatically login from a client/service 
> keytab
> ---
>
> Key: HADOOP-16996
> URL: https://issues.apache.org/jira/browse/HADOOP-16996
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.0.0-alpha
>Reporter: Maziar Mirzazad
>Priority: Minor
> Fix For: 2.9.2
>
>
> With the current hadoop client implementation, every single service needs to 
> add UGI.loginFromKeyTab() before doing HDFS or M/R API calls.
> To avoid that, we propose adding keytab-based login to the hadoop client 
> library for Kerberized clusters, with configurable default paths for keytabs.
> This improvement should avoid extra login attempts when a valid TGT is 
> already available.






[jira] [Updated] (HADOOP-16996) Add capability in hadoop-client to automatically login from a client/service keytab

2020-04-17 Thread Maziar Mirzazad (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maziar Mirzazad updated HADOOP-16996:
-
Description: 
At Twitter we are planning to Kerberize our hadoop infrastructure, and we have 
many services that are going to use those clusters.

With current hadoop client implementation, every single service need to change 
the application and add UGI.loginFromKeyTab() before doing HDFS or M/R API 
calls.

To avoid that, we are proposing adding Keytab based login to hadoop client 
library for Kerberized clusters with configurable default paths for Keytabs.

This improvement should avoid extra login tries in case a valid TGT is 
available.

  was:Services using a kerberos keytab needs to do UGI.loginFromKeyTab() before 
doing any HDFS or M/R API calls. Instead of every service doing this, we can 
add keytab based login to hadoop-client library.


> Add capability in hadoop-client to automatically login from a client/service 
> keytab
> ---
>
> Key: HADOOP-16996
> URL: https://issues.apache.org/jira/browse/HADOOP-16996
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.0.0-alpha
>Reporter: Maziar Mirzazad
>Priority: Minor
> Fix For: 2.9.2
>
>
> At Twitter we are planning to Kerberize our hadoop infrastructure, and we 
> have many services that are going to use those clusters.
> With the current hadoop client implementation, every single service needs to 
> change its application code and add UGI.loginFromKeyTab() before doing HDFS 
> or M/R API calls.
> To avoid that, we propose adding keytab-based login to the hadoop client 
> library for Kerberized clusters, with configurable default paths for keytabs.
> This improvement should avoid extra login attempts when a valid TGT is 
> already available.






[jira] [Created] (HADOOP-16996) Add capability in hadoop-client to automatically login from a client/service keytab

2020-04-17 Thread Maziar Mirzazad (Jira)
Maziar Mirzazad created HADOOP-16996:


 Summary: Add capability in hadoop-client to automatically login 
from a client/service keytab
 Key: HADOOP-16996
 URL: https://issues.apache.org/jira/browse/HADOOP-16996
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.0.0-alpha
Reporter: Maziar Mirzazad
 Fix For: 2.9.2


Services using a Kerberos keytab need to call UGI.loginFromKeyTab() before doing 
any HDFS or M/R API calls. Instead of every service doing this, we can add 
keytab-based login to the hadoop-client library.
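A rough sketch of the plumbing such an auto-login could use (a hypothetical helper: the default path layout and the decision logic are assumptions for illustration, not existing Hadoop behavior; the actual login call would be the real UserGroupInformation.loginUserFromKeytab):

```java
import java.nio.file.Files;
import java.nio.file.Paths;

// Hypothetical sketch of keytab-based auto-login plumbing: resolve a default
// keytab path for a service user and decide whether a login attempt is needed.
public class KeytabAutoLoginSketch {
    // e.g. /etc/security/keytabs/<user>.keytab -- an assumed convention, which
    // the proposal suggests making configurable
    static String defaultKeytabPath(String baseDir, String user) {
        return Paths.get(baseDir, user + ".keytab").toString();
    }

    // Skip the login when a valid TGT is already available (avoiding the
    // extra login attempts the proposal mentions), or when no keytab exists
    // at the configured default path.
    static boolean shouldLogin(boolean hasValidTgt, String keytabPath) {
        return !hasValidTgt && Files.exists(Paths.get(keytabPath));
    }
}
```

The client library would call shouldLogin() once at startup and, when it returns true, invoke the existing keytab login API on the resolved path.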






[jira] [Commented] (HADOOP-16972) Ignore AuthenticationFilterInitializer for KMSWebServer

2020-04-17 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17086097#comment-17086097
 ] 

Hudson commented on HADOOP-16972:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18157 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18157/])
HADOOP-16972. Ignore AuthenticationFilterInitializer for KMSWebServer. (github: 
rev ac40daece17e9a6339927dbcadab76034bd7882c)
* (edit) 
hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSWebServer.java
* (edit) 
hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMS.java


> Ignore AuthenticationFilterInitializer for KMSWebServer
> ---
>
> Key: HADOOP-16972
> URL: https://issues.apache.org/jira/browse/HADOOP-16972
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Affects Versions: 3.3.0
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Blocker
> Fix For: 3.3.0
>
>
> KMS does not work if hadoop.http.filter.initializers is set to 
> AuthenticationFilterInitializer since KMS uses its own authentication filter. 
> This is problematic when KMS is on the same node with other Hadoop services 
> and shares core-site.xml with them. The filter initializers configuration 
> should be tweaked as done for httpfs in HDFS-14845.
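The kind of tweak described can be sketched as follows (a standalone illustration, not the actual KMSWebServer code; the helper class name and the example initializer "com.example.OtherInit" are invented, though the AuthenticationFilterInitializer class name is real):

```java
import java.util.Arrays;
import java.util.stream.Collectors;

// Sketch: strip AuthenticationFilterInitializer from a shared
// hadoop.http.filter.initializers value so a service with its own
// authentication filter (like KMS) does not install a second one.
public class FilterInitializerTweakSketch {
    static final String AUTH_INITIALIZER =
        "org.apache.hadoop.security.AuthenticationFilterInitializer";

    static String withoutAuthInitializer(String initializers) {
        if (initializers == null || initializers.isEmpty()) {
            return initializers;
        }
        return Arrays.stream(initializers.split(","))
            .map(String::trim)
            .filter(name -> !name.isEmpty() && !name.equals(AUTH_INITIALIZER))
            .collect(Collectors.joining(","));
    }
}
```

Applied to a core-site.xml shared with other services, only the offending initializer is dropped from the comma-separated list; any other configured initializers survive.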






[jira] [Updated] (HADOOP-16972) Ignore AuthenticationFilterInitializer for KMSWebServer

2020-04-17 Thread Masatake Iwasaki (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HADOOP-16972:
--
   Fix Version/s: 3.3.0
Hadoop Flags: Reviewed
Target Version/s: 3.3.0
  Resolution: Fixed
  Status: Resolved  (was: Patch Available)

Thanks, [~eyang]. I committed this to trunk and branch-3.3.

> Ignore AuthenticationFilterInitializer for KMSWebServer
> ---
>
> Key: HADOOP-16972
> URL: https://issues.apache.org/jira/browse/HADOOP-16972
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Affects Versions: 3.3.0
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Blocker
> Fix For: 3.3.0
>
>
> KMS does not work if hadoop.http.filter.initializers is set to 
> AuthenticationFilterInitializer since KMS uses its own authentication filter. 
> This is problematic when KMS is on the same node with other Hadoop services 
> and shares core-site.xml with them. The filter initializers configuration 
> should be tweaked as done for httpfs in HDFS-14845.






[GitHub] [hadoop] iwasakims merged pull request #1961: HADOOP-16972. Ignore AuthenticationFilterInitializer for KMSWebServer.

2020-04-17 Thread GitBox
iwasakims merged pull request #1961: HADOOP-16972. Ignore 
AuthenticationFilterInitializer for KMSWebServer.
URL: https://github.com/apache/hadoop/pull/1961
 
 
   





[jira] [Updated] (HADOOP-16972) Ignore AuthenticationFilterInitializer for KMSWebServer

2020-04-17 Thread Eric Yang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated HADOOP-16972:
---
Priority: Blocker  (was: Major)

> Ignore AuthenticationFilterInitializer for KMSWebServer
> ---
>
> Key: HADOOP-16972
> URL: https://issues.apache.org/jira/browse/HADOOP-16972
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Affects Versions: 3.3.0
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Blocker
>
> KMS does not work if hadoop.http.filter.initializers is set to 
> AuthenticationFilterInitializer since KMS uses its own authentication filter. 
> This is problematic when KMS is on the same node with other Hadoop services 
> and shares core-site.xml with them. The filter initializers configuration 
> should be tweaked as done for httpfs in HDFS-14845.






[jira] [Commented] (HADOOP-16972) Ignore AuthenticationFilterInitializer for KMSWebServer

2020-04-17 Thread Eric Yang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17086064#comment-17086064
 ] 

Eric Yang commented on HADOOP-16972:


[~iwasakims] Thank you for the pointer that kms-dt is a different token type 
from hdfs-dt. Your patch is the right way to address this problem in the short 
term.

It is not a good idea to make separate token issuers common practice unless 
there are good reasons. Session synchronization becomes a problem when token 
expirations drift out of sync because API calls happen at different times. 
HttpFS works without contacting the namenode, so it is somewhat acceptable for 
HttpFS to manage a separate token set for that specific use case.

In theory, KMS security does not benefit from having a separate token kind. 
This implementation exists mainly for performance, to reduce round trips to 
the namenode for user-credential validation. However, it has more 
disadvantages than advantages, such as unsynchronized sessions and the 
additional logic/payload needed to deliver different token types to the right 
places. The Hadoop community has already done some of the hard work of solving 
these problems, if only superficially, so this patch is a good stop-gap 
solution; longer term I would prefer to fix KMS to use the global 
AuthenticationFilter, to avoid session problems and reduce configuration 
logistics. Those changes are beyond my participation in the KMS code and the 
scope of this issue.

+1 for fixing this in 3.3.0 to prevent regression.

> Ignore AuthenticationFilterInitializer for KMSWebServer
> ---
>
> Key: HADOOP-16972
> URL: https://issues.apache.org/jira/browse/HADOOP-16972
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Affects Versions: 3.3.0
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Major
>
> KMS does not work if hadoop.http.filter.initializers is set to 
> AuthenticationFilterInitializer since KMS uses its own authentication filter. 
> This is problematic when KMS is on the same node with other Hadoop services 
> and shares core-site.xml with them. The filter initializers configuration 
> should be tweaked as done for httpfs in HDFS-14845.






[jira] [Commented] (HADOOP-16971) testFileContextResolveAfs creates dangling link and fails for subsequent runs

2020-04-17 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17086046#comment-17086046
 ] 

Ayush Saxena commented on HADOOP-16971:
---

I got a bit confused by the description.
Can you help with the steps to reproduce this, i.e. in what sequence to run 
the tests so as to trigger the failure?

> testFileContextResolveAfs creates dangling link and fails for subsequent runs
> -
>
> Key: HADOOP-16971
> URL: https://issues.apache.org/jira/browse/HADOOP-16971
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, fs, test
>Affects Versions: 3.2.1, 3.4.0
>Reporter: Ctest
>Priority: Minor
>  Labels: easyfix, fs, symlink, test
> Attachments: HADOOP-16971.000.patch
>
>
> In the test testFileContextResolveAfs, the symlink TestFileContextResolveAfs2 
> (linked to TestFileContextResolveAfs1) cannot be deleted when the test 
> finishes.
> This is because TestFileContextResolveAfs1 was always deleted before 
> TestFileContextResolveAfs2 when they were both passed into 
> FileSystem#deleteOnExit. This caused TestFileContextResolveAfs2 to become a 
> dangling link, which FileSystem in Hadoop currently cannot delete. (This is 
> because Files#exists will return false for dangling links.)
> As a result, the test testFileContextResolveAfs only passes on the first 
> run; later runs fail with the following exception: 
> {code:java}
> fs.FileUtil (FileUtil.java:symLink(821)) - Command 'ln -s 
> mypath/TestFileContextResolveAfs1 mypath/TestFileContextResolveAfs2' failed 1 
> with: ln: mypath/TestFileContextResolveAfs2: File exists
> java.io.IOException: Error 1 creating symlink 
> file:mypath/TestFileContextResolveAfs2 to mypath/TestFileContextResolveAfs1
> {code}






[GitHub] [hadoop] hadoop-yetus commented on issue #1963: HADOOP-16798. S3A Committer thread pool shutdown problems.

2020-04-17 Thread GitBox
hadoop-yetus commented on issue #1963: HADOOP-16798. S3A Committer thread pool 
shutdown problems.
URL: https://github.com/apache/hadoop/pull/1963#issuecomment-615403186
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 39s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  19m 53s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 34s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 26s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 40s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m  5s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 29s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   1m  0s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 57s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 32s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 28s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 19s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 33s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  13m 51s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   1m  3s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 27s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 32s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  59m 24s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.8 Server=19.03.8 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1963/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1963 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux f3bf80f4b706 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 3601054 |
   | Default Java | 1.8.0_242 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1963/1/testReport/ |
   | Max. process+thread count | 415 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1963/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services




[jira] [Commented] (HADOOP-16971) testFileContextResolveAfs creates dangling link and fails for subsequent runs

2020-04-17 Thread Ctest (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17085958#comment-17085958
 ] 

Ctest commented on HADOOP-16971:


Hi, any update on this?

> testFileContextResolveAfs creates dangling link and fails for subsequent runs
> -
>
> Key: HADOOP-16971
> URL: https://issues.apache.org/jira/browse/HADOOP-16971
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, fs, test
>Affects Versions: 3.2.1, 3.4.0
>Reporter: Ctest
>Priority: Minor
>  Labels: easyfix, fs, symlink, test
> Attachments: HADOOP-16971.000.patch
>
>
> In the test testFileContextResolveAfs, the symlink TestFileContextResolveAfs2 
> (linked to TestFileContextResolveAfs1) cannot be deleted when the test 
> finishes.
> This is because TestFileContextResolveAfs1 was always deleted before 
> TestFileContextResolveAfs2 when they were both passed into 
> FileSystem#deleteOnExit. This caused TestFileContextResolveAfs2 to become a 
> dangling link, which FileSystem in Hadoop currently cannot delete. (This is 
> because Files#exists will return false for dangling links.)
> As a result, the test `testFileContextResolveAfs` only passed for the first 
> run. And for later runs of this test, it will fail by throwing the following 
> exception: 
> {code:java}
> fs.FileUtil (FileUtil.java:symLink(821)) - Command 'ln -s 
> mypath/TestFileContextResolveAfs1 mypath/TestFileContextResolveAfs2' failed 1 
> with: ln: mypath/TestFileContextResolveAfs2: File exists
> java.io.IOException: Error 1 creating symlink 
> file:mypath/TestFileContextResolveAfs2 to mypath/TestFileContextResolveAfs1
> {code}
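The dangling-link behaviour described in this report can be reproduced with plain `java.nio.file` calls. This is a rough standalone sketch, not Hadoop code, and it assumes a POSIX filesystem where the process may create symlinks; the class name is illustrative, while the file names mirror the ones in the test above.

```java
import java.nio.file.Files;
import java.nio.file.LinkOption;
import java.nio.file.Path;

/** Standalone illustration (not Hadoop code) of the dangling-link problem. */
public class DanglingLinkDemo {

  /** Create a file plus a symlink to it, then delete the file first,
   *  mirroring the deleteOnExit ordering described above. */
  public static Path makeDanglingLink() throws Exception {
    Path dir = Files.createTempDirectory("afs");
    Path target = dir.resolve("TestFileContextResolveAfs1");
    Path link = dir.resolve("TestFileContextResolveAfs2");
    Files.createFile(target);
    Files.createSymbolicLink(link, target);
    Files.delete(target); // the link now dangles
    return link;
  }

  public static void main(String[] args) throws Exception {
    Path link = makeDanglingLink();
    // Files.exists follows symlinks by default, so a dangling link
    // reports false and an existence-guarded delete becomes a no-op,
    // even though the link itself is still on disk.
    System.out.println(Files.exists(link));                            // false
    System.out.println(Files.exists(link, LinkOption.NOFOLLOW_LINKS)); // true
    Files.delete(link); // deleting by the link path itself still works
  }
}
```

Because the default `exists` check follows the link, any cleanup code guarded by it skips the dangling link, which then collides with the `ln -s` on the next run.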



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[GitHub] [hadoop] steveloughran commented on issue #1963: HADOOP-16798. S3A Committer thread pool shutdown problems.

2020-04-17 Thread GitBox
steveloughran commented on issue #1963: HADOOP-16798. S3A Committer thread pool 
shutdown problems.
URL: https://github.com/apache/hadoop/pull/1963#issuecomment-615377723
 
 
   Test run against london with exactly the same flags which triggered the
failure - you have to have an overloaded test system to cause the execution
delays which trigger speculative task execution and then this failure. All
good, at least on this run.
   
   ```
   -Dparallel-tests -DtestsThreadCount=8 -Dscale -Dmarkers=keep -Dauth
   ```
   
   Rerunning with 12 threads to see if that can trigger it.
   





[GitHub] [hadoop] steveloughran opened a new pull request #1963: HADOOP-16798. S3A Committer thread pool shutdown problems.

2020-04-17 Thread GitBox
steveloughran opened a new pull request #1963: HADOOP-16798. S3A Committer 
thread pool shutdown problems.
URL: https://github.com/apache/hadoop/pull/1963
 
 
   
   Contributed by Steve Loughran.
   
   Fixes a condition which can cause job commit to fail if a task was
   aborted < 60s before the job commit commenced: the task abort
   will shut down the thread pool with a hard exit after 60s; the
   job commit POST requests would be scheduled through the same pool,
   so be interrupted and fail. At present the access is synchronized,
   but presumably the executor shutdown code is calling wait() and releasing
   locks.
   
   Task abort is triggered from the AM when task attempts succeed but
   there are still active speculative task attempts running. Thus it
   only surfaces when speculation is enabled and the final tasks are
   speculating, which, given they are the stragglers, is not
   unheard of.
   
   The fix copies and clears the threadPool field in a synchronized block,
   then shuts it down; job commit will encounter the empty field and
   demand-create a new one. As would a sequence of task aborts.
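The copy-and-clear pattern described above can be sketched roughly as follows. This is an illustrative placeholder, not the actual S3A committer code: the class name, field name, and pool sizing are assumptions made only for the example.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

/** Rough sketch of the copy-and-clear shutdown pattern; illustrative only. */
public class PoolHolder {
  private ExecutorService threadPool;

  /** Demand-create the pool: a fresh pool appears after any shutdown. */
  public synchronized ExecutorService getThreadPool() {
    if (threadPool == null) {
      threadPool = Executors.newFixedThreadPool(4);
    }
    return threadPool;
  }

  /**
   * Copy and clear the field inside the lock, then shut the copy down
   * outside it, so a concurrent job commit never submits work to a
   * pool that is already terminating.
   */
  public void destroyThreadPool() throws InterruptedException {
    ExecutorService pool;
    synchronized (this) {
      pool = threadPool;
      threadPool = null; // later callers will demand-create a new pool
    }
    if (pool != null) {
      pool.shutdown();
      pool.awaitTermination(60, TimeUnit.SECONDS);
    }
  }
}
```

A sequence of task aborts then simply cycles through fresh pools, and a job commit that races with an abort submits to a new pool rather than to one with a hard 60s shutdown pending.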
   





[GitHub] [hadoop] mukund-thakur commented on issue #1955: HADOOP-13873 log DNS addresses on s3a init

2020-04-17 Thread GitBox
mukund-thakur commented on issue #1955: HADOOP-13873 log DNS addresses on s3a 
init
URL: https://github.com/apache/hadoop/pull/1955#issuecomment-615360512
 
 
   Thanks





[jira] [Commented] (HADOOP-16959) Resolve hadoop-cos dependency conflict

2020-04-17 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17085909#comment-17085909
 ] 

Hadoop QA commented on HADOOP-16959:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
44s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-3.3 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  9m  
6s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
51s{color} | {color:green} branch-3.3 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
54s{color} | {color:green} branch-3.3 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
36s{color} | {color:green} branch-3.3 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
20s{color} | {color:green} branch-3.3 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
19m 15s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-cloud-storage-project/hadoop-cloud-storage {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
38s{color} | {color:red} hadoop-cloud-storage-project/hadoop-cos in branch-3.3 
has 5 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
18s{color} | {color:green} branch-3.3 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
5s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 20s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-cloud-storage-project/hadoop-cloud-storage {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
43s{color} | {color:green} hadoop-cloud-storage-project/hadoop-cos generated 0 
new + 1 unchanged - 4 fixed = 1 total (was 5) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
22s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
26s{color} | {color:green} hadoop-cos in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
24s{color} | {color:green} hadoop-cloud-storage in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
46s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}108m 25s{color} 

[jira] [Commented] (HADOOP-16995) ITestS3AConfiguration proxy tests fail when bucket probes == 0

2020-04-17 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17085882#comment-17085882
 ] 

Steve Loughran commented on HADOOP-16995:
-

also this looks probe related
{code}
[INFO] Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 160.941 
s - in org.apache.hadoop.fs.s3a.ITestS3ATemporaryCredentials
[INFO] Running org.apache.hadoop.fs.s3a.ITestS3GuardCreate
[ERROR] Tests run: 4, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 18.184 
s <<< FAILURE! - in org.apache.hadoop.fs.s3a.ITestS3ABucketExistence
[ERROR] testNoBucketProbing(org.apache.hadoop.fs.s3a.ITestS3ABucketExistence)  
Time elapsed: 2.259 s  <<< FAILURE!
java.lang.AssertionError: Expected a 
org.apache.hadoop.fs.s3a.UnknownStoreException to be thrown, but got the 
result: : false
at 
org.apache.hadoop.test.LambdaTestUtils.intercept(LambdaTestUtils.java:499)
at 
org.apache.hadoop.test.LambdaTestUtils.intercept(LambdaTestUtils.java:384)
at 
org.apache.hadoop.fs.s3a.ITestS3ABucketExistence.expectUnknownStore(ITestS3ABucketExistence.java:98)
at 
org.apache.hadoop.fs.s3a.ITestS3ABucketExistence.testNoBucketProbing(ITestS3ABucketExistence.java:78)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:748)
{code}



> ITestS3AConfiguration proxy tests fail when bucket probes == 0
> --
>
> Key: HADOOP-16995
> URL: https://issues.apache.org/jira/browse/HADOOP-16995
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Mukund Thakur
>Priority: Minor
>
> when bucket probes are disabled, proxy config tests in ITestS3AConfiguration 
> fail because the probes aren't being attempted in initialize()
> {code}
> <property>
>   <name>fs.s3a.bucket.probe</name>
>   <value>0</value>
> </property>
> {code}
> Cause: HADOOP-16711
> Fix: call unsetBaseAndBucketOverrides for bucket probe in test conf, then set 
> the probe value to 2 just to be resilient to future default changes.






[GitHub] [hadoop] steveloughran commented on issue #1861: HADOOP-13230. Optionally retain directory markers

2020-04-17 Thread GitBox
steveloughran commented on issue #1861: HADOOP-13230. Optionally retain 
directory markers
URL: https://github.com/apache/hadoop/pull/1861#issuecomment-615315234
 
 
   failure in restricted permission test as ls empty dir now succeeds in list 
(no HEAD, see)
   
   ```
   [INFO] Running org.apache.hadoop.fs.s3a.auth.delegation.ITestSessionDelegationTokens
   [INFO] Running org.apache.hadoop.fs.s3a.auth.delegation.ITestSessionDelegationInFileystem
   [ERROR] Tests run: 3, Failures: 1, Errors: 0, Skipped: 2, Time elapsed: 31.512 s <<< FAILURE! - in org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess
   [ERROR] testNoReadAccess[raw](org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess)  Time elapsed: 26.071 s  <<< FAILURE!
   java.lang.AssertionError: Expected a java.nio.file.AccessDeniedException to be thrown, but got the result: : [Lorg.apache.hadoop.fs.s3a.S3AFileStatus;@53d4d5a
at 
org.apache.hadoop.test.LambdaTestUtils.intercept(LambdaTestUtils.java:499)
at 
org.apache.hadoop.test.LambdaTestUtils.intercept(LambdaTestUtils.java:384)
at 
org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess.accessDeniedIf(ITestRestrictedReadAccess.java:697)
at 
org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess.checkBasicFileOperations(ITestRestrictedReadAccess.java:413)
at 
org.apache.hadoop.fs.s3a.auth.ITestRestrictedReadAccess.testNoReadAccess(ITestRestrictedReadAccess.java:298)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:748)
   ```





[jira] [Commented] (HADOOP-16798) job commit failure in S3A MR magic committer test

2020-04-17 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17085875#comment-17085875
 ] 

Steve Loughran commented on HADOOP-16798:
-

cause is thread pool cleanup code in HADOOP-16570

> job commit failure in S3A MR magic committer test
> -
>
> Key: HADOOP-16798
> URL: https://issues.apache.org/jira/browse/HADOOP-16798
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: stdout
>
>
> failure in 
> {code}
> ITestS3ACommitterMRJob.test_200_execute:304->Assert.fail:88 Job 
> job_1578669113137_0003 failed in state FAILED with cause Job commit failed: 
> java.util.concurrent.RejectedExecutionException: Task 
> java.util.concurrent.FutureTask@6e894de2 rejected from 
> org.apache.hadoop.util.concurrent.HadoopThreadPoolExecutor@225eed53[Terminated,
>  pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0]
> {code}
> Stack implies thread pool rejected it, but toString says "Terminated". Race 
> condition?
> *update 2020-04-22*: it's caused when a task is aborted in the AM -the 
> threadpool is disposed of, and while that is shutting down in one thread, 
> task commit is initiated using the same thread pool. When the task 
> committer's destroy operation times out, it kills all the active uploads.
> Proposed: destroyThreadPool immediately copies reference to current thread 
> pool and nullifies it, so that any new operation needing a thread pool will 
> create a new one






[GitHub] [hadoop] hadoop-yetus commented on issue #1948: HADOOP-16986. s3a to not need wildfly on the classpath

2020-04-17 Thread GitBox
hadoop-yetus commented on issue #1948: HADOOP-16986. s3a to not need wildfly on 
the classpath
URL: https://github.com/apache/hadoop/pull/1948#issuecomment-615311403
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 32s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 50s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  19m 10s |  trunk passed  |
   | +1 :green_heart: |  compile  |  16m 53s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   2m 36s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 20s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 23s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 46s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   1m 10s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 17s |  trunk passed  |
   | -0 :warning: |  patch  |   1m 34s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 23s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 26s |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 12s |  the patch passed  |
   | +1 :green_heart: |  javac  |  17m 12s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   2m 35s |  root: The patch generated 1 new 
+ 1 unchanged - 0 fixed = 2 total (was 1)  |
   | +1 :green_heart: |  mvnsite  |   2m 19s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  14m 20s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 43s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   3m 31s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   8m 38s |  hadoop-common in the patch failed.  |
   | -1 :x: |  unit  |   0m 53s |  hadoop-aws in the patch failed.  |
   | +0 :ok: |  asflicense  |   0m 37s |  ASF License check generated no 
output?  |
   |  |   | 121m 45s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ha.TestZKFailoverController |
   |   | hadoop.ha.TestZKFailoverControllerStress |
   |   | hadoop.crypto.key.TestValueQueue |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.8 Server=19.03.8 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1948/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1948 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle markdownlint |
   | uname | Linux afeae74f7b15 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 2fe122e |
   | Default Java | 1.8.0_242 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1948/5/artifact/out/diff-checkstyle-root.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1948/5/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1948/5/artifact/out/patch-unit-hadoop-tools_hadoop-aws.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1948/5/testReport/ |
   | Max. process+thread count | 2837 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1948/5/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



[jira] [Commented] (HADOOP-16951) Tidy Up Text and ByteWritables Classes

2020-04-17 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17085871#comment-17085871
 ] 

Hudson commented on HADOOP-16951:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18155 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18155/])
HADOOP-16951: Tidy Up Text and ByteWritables Classes. (github: rev 
eca05917d60f8a06f2a04815db818a7d3afbd2ce)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/BytesWritable.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/Text.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestText.java


> Tidy Up Text and ByteWritables Classes
> --
>
> Key: HADOOP-16951
> URL: https://issues.apache.org/jira/browse/HADOOP-16951
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
> Fix For: 3.4.0
>
>
> # Remove superfluous code
>  # Remove superfluous comments
>  # Checkstyle fixes
>  # Remove methods that simply call {{super}}.method()
>  # Use Java 8 facilities to streamline code where applicable
>  # Simplify and unify some of the constructs between the two classes
>  
> The one meaningful change is that I am suggesting that the expanding of the 
> arrays be 1.5x instead of 2x per expansion.  I pulled this idea from open JDK.
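The 1.5x growth policy from the last point can be sketched as a small helper; the class and method names here are illustrative, not the patched Text/BytesWritable code. OpenJDK's ArrayList uses the same `old + (old >> 1)` step, trading slightly more expansions for less over-allocated memory than doubling.

```java
/** Illustrative 1.5x capacity-growth helper; not the patched Hadoop code. */
public class Growth {

  /** Smallest 1.5x-grown capacity at least {@code needed},
   *  starting from {@code current}. */
  public static int grow(int current, int needed) {
    int cap = Math.max(current, 1);
    while (cap < needed) {
      // grow by half each round; the max() guard keeps tiny capacities moving
      cap = Math.max(cap + (cap >> 1), cap + 1);
    }
    return cap;
  }
}
```

For example, a buffer of capacity 10 that needs 11 bytes grows to 15 rather than 20.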






[jira] [Updated] (HADOOP-16798) job commit failure in S3A MR magic committer test

2020-04-17 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16798:

Description: 
failure in 
{code}
ITestS3ACommitterMRJob.test_200_execute:304->Assert.fail:88 Job 
job_1578669113137_0003 failed in state FAILED with cause Job commit failed: 
java.util.concurrent.RejectedExecutionException: Task 
java.util.concurrent.FutureTask@6e894de2 rejected from 
org.apache.hadoop.util.concurrent.HadoopThreadPoolExecutor@225eed53[Terminated, 
pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0]
{code}

Stack implies thread pool rejected it, but toString says "Terminated". Race 
condition?

*update 2020-04-22*: it's caused when a task is aborted in the AM - the
thread pool is disposed of, and while that is shutting down in one thread, task
commit is initiated using the same thread pool. When the task committer's
destroy operation times out, it kills all the active uploads.

Proposed: destroyThreadPool immediately copies reference to current thread pool 
and nullifies it, so that any new operation needing a thread pool will create a 
new one

  was:
failure in 
{code}
ITestS3ACommitterMRJob.test_200_execute:304->Assert.fail:88 Job 
job_1578669113137_0003 failed in state FAILED with cause Job commit failed: 
java.util.concurrent.RejectedExecutionException: Task 
java.util.concurrent.FutureTask@6e894de2 rejected from 
org.apache.hadoop.util.concurrent.HadoopThreadPoolExecutor@225eed53[Terminated, 
pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0]
{code}

Stack implies thread pool rejected it, but toString says "Terminated". Race 
condition?


> job commit failure in S3A MR magic committer test
> -
>
> Key: HADOOP-16798
> URL: https://issues.apache.org/jira/browse/HADOOP-16798
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: stdout
>
>
> failure in 
> {code}
> ITestS3ACommitterMRJob.test_200_execute:304->Assert.fail:88 Job 
> job_1578669113137_0003 failed in state FAILED with cause Job commit failed: 
> java.util.concurrent.RejectedExecutionException: Task 
> java.util.concurrent.FutureTask@6e894de2 rejected from 
> org.apache.hadoop.util.concurrent.HadoopThreadPoolExecutor@225eed53[Terminated,
>  pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0]
> {code}
> Stack implies thread pool rejected it, but toString says "Terminated". Race 
> condition?
> *update 2020-04-22*: it's caused when a task is aborted in the AM -the 
> threadpool is disposed of, and while that is shutting down in one thread, 
> task commit is initiated using the same thread pool. When the task 
> committer's destroy operation times out, it kills all the active uploads.
> Proposed: destroyThreadPool immediately copies reference to current thread 
> pool and nullifies it, so that any new operation needing a thread pool will 
> create a new one






[jira] [Updated] (HADOOP-16798) job commit failure in S3A MR magic committer test

2020-04-17 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16798:

Summary: job commit failure in S3A MR magic committer test  (was: job 
commit failure in S3A MR test, executor rejected submission)

> job commit failure in S3A MR magic committer test
> -
>
> Key: HADOOP-16798
> URL: https://issues.apache.org/jira/browse/HADOOP-16798
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: stdout
>
>
> failure in 
> {code}
> ITestS3ACommitterMRJob.test_200_execute:304->Assert.fail:88 Job 
> job_1578669113137_0003 failed in state FAILED with cause Job commit failed: 
> java.util.concurrent.RejectedExecutionException: Task 
> java.util.concurrent.FutureTask@6e894de2 rejected from 
> org.apache.hadoop.util.concurrent.HadoopThreadPoolExecutor@225eed53[Terminated,
>  pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0]
> {code}
> Stack implies thread pool rejected it, but toString says "Terminated". Race 
> condition?






[jira] [Commented] (HADOOP-16951) Tidy Up Text and ByteWritables Classes

2020-04-17 Thread Jira


[ 
https://issues.apache.org/jira/browse/HADOOP-16951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17085864#comment-17085864
 ] 

Íñigo Goiri commented on HADOOP-16951:
--

Here is the commit:
https://github.com/apache/hadoop/commit/eca05917d60f8a06f2a04815db818a7d3afbd2ce
I added the description to the PR and now it should show in the history as 
contributed by you and all.

> Tidy Up Text and ByteWritables Classes
> --
>
> Key: HADOOP-16951
> URL: https://issues.apache.org/jira/browse/HADOOP-16951
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
> Fix For: 3.4.0
>
>
> # Remove superfluous code
>  # Remove superfluous comments
>  # Checkstyle fixes
>  # Remove methods that simply call {{super}}.method()
>  # Use Java 8 facilities to streamline code where applicable
>  # Simplify and unify some of the constructs between the two classes
>  
> The one meaningful change is that I am suggesting that the expanding of the 
> arrays be 1.5x instead of 2x per expansion.  I pulled this idea from open JDK.






[jira] [Resolved] (HADOOP-16951) Tidy Up Text and ByteWritables Classes

2020-04-17 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HADOOP-16951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri resolved HADOOP-16951.
--
Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

> Tidy Up Text and ByteWritables Classes
> --
>
> Key: HADOOP-16951
> URL: https://issues.apache.org/jira/browse/HADOOP-16951
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
> Fix For: 3.4.0
>
>
> # Remove superfluous code
>  # Remove superfluous comments
>  # Checkstyle fixes
>  # Remove methods that simply call {{super}}.method()
>  # Use Java 8 facilities to streamline code where applicable
>  # Simplify and unify some of the constructs between the two classes
>  
> The one meaningful change is that I am suggesting that the expanding of the 
> arrays be 1.5x instead of 2x per expansion.  I pulled this idea from open JDK.






[GitHub] [hadoop] goiri merged pull request #1932: HADOOP-16951: Tidy Up Text and ByteWritables Classes

2020-04-17 Thread GitBox
goiri merged pull request #1932: HADOOP-16951: Tidy Up Text and ByteWritables 
Classes
URL: https://github.com/apache/hadoop/pull/1932
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services




[jira] [Commented] (HADOOP-16951) Tidy Up Text and ByteWritables Classes

2020-04-17 Thread Jira


[ 
https://issues.apache.org/jira/browse/HADOOP-16951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17085862#comment-17085862
 ] 

Íñigo Goiri commented on HADOOP-16951:
--

The latest code in the PR LGTM.
I'll approve there and merge the PR.
That makes the process cleaner, except for cherry-picking, which I haven't 
found a way to do in the web UI.

> Tidy Up Text and ByteWritables Classes
> --
>
> Key: HADOOP-16951
> URL: https://issues.apache.org/jira/browse/HADOOP-16951
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
>
> # Remove superfluous code
>  # Remove superfluous comments
>  # Checkstyle fixes
>  # Remove methods that simply call {{super}}.method()
>  # Use Java 8 facilities to streamline code where applicable
>  # Simplify and unify some of the constructs between the two classes
>  
> The one meaningful change is that I am suggesting that the expanding of the 
> arrays be 1.5x instead of 2x per expansion.  I pulled this idea from open JDK.






[GitHub] [hadoop] steveloughran commented on issue #1948: HADOOP-16986. s3a to not need wildfly on the classpath

2020-04-17 Thread GitBox
steveloughran commented on issue #1948: HADOOP-16986. s3a to not need wildfly 
on the classpath
URL: https://github.com/apache/hadoop/pull/1948#issuecomment-615297603
 
 
   last test run (-s3guard, auth scale) with S3 Ireland: the proxy config tests 
failed (JIRA filed), and we got a failure of a job commit in an MR job. Got 
more logs there and hope to make progress next week: 
[HADOOP-16798](https://issues.apache.org/jira/browse/HADOOP-16798)





[jira] [Commented] (HADOOP-16798) job commit failure in S3A MR test, executor rejected submission

2020-04-17 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17085851#comment-17085851
 ] 

Steve Loughran commented on HADOOP-16798:
-

seen again, got more trace. Hypothesis: the AM is killing one task attempt, and 
that is triggering the thread pool shutdown. If time-to-commit > 60s, we get a 
timeout and the thread pool is interrupted

```
2020-04-17 15:16:27,330 [AsyncDispatcher event handler] INFO  impl.JobImpl 
(JobImpl.java:transition(1979)) - Num completed Tasks: 9
2020-04-17 15:16:27,380 [RMCommunicator Allocator] INFO  
rm.RMContainerAllocator (RMContainerAllocator.java:log(1626)) - Before 
Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:5 
AssignedReds:0 CompletedMaps:9 CompletedReds:0 ContAlloc:13 ContRel:2 
HostLocal:11 RackLocal:0
2020-04-17 15:16:27,382 [RMCommunicator Allocator] INFO  
rm.RMContainerAllocator 
(RMContainerAllocator.java:processFinishedContainer(897)) - Received completed 
container container_1587132747259_0003_01_09
2020-04-17 15:16:27,383 [RMCommunicator Allocator] INFO  
rm.RMContainerAllocator 
(RMContainerAllocator.java:processFinishedContainer(897)) - Received completed 
container container_1587132747259_0003_01_10
2020-04-17 15:16:27,383 [AsyncDispatcher event handler] INFO  
impl.TaskAttemptImpl (TaskAttemptImpl.java:transition(2524)) - Diagnostics 
report from attempt_1587132747259_0003_m_07_0: 
2020-04-17 15:16:27,383 [RMCommunicator Allocator] INFO  
rm.RMContainerAllocator (RMContainerAllocator.java:log(1626)) - After 
Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:3 
AssignedReds:0 CompletedMaps:9 CompletedReds:0 ContAlloc:13 ContRel:2 
HostLocal:11 RackLocal:0
2020-04-17 15:16:27,383 [AsyncDispatcher event handler] INFO  
impl.TaskAttemptImpl (TaskAttemptImpl.java:handle(1391)) - 
attempt_1587132747259_0003_m_07_0 TaskAttempt Transitioned from 
SUCCESS_FINISHING_CONTAINER to SUCCEEDED
2020-04-17 15:16:27,383 [AsyncDispatcher event handler] INFO  
impl.TaskAttemptImpl (TaskAttemptImpl.java:transition(2524)) - Diagnostics 
report from attempt_1587132747259_0003_m_08_0: 
2020-04-17 15:16:27,384 [AsyncDispatcher event handler] INFO  
impl.TaskAttemptImpl (TaskAttemptImpl.java:handle(1391)) - 
attempt_1587132747259_0003_m_08_0 TaskAttempt Transitioned from 
SUCCESS_FINISHING_CONTAINER to SUCCEEDED
2020-04-17 15:16:27,384 [ContainerLauncher #7] INFO  
launcher.ContainerLauncherImpl (ContainerLauncherImpl.java:run(382)) - 
Processing the event EventType: CONTAINER_COMPLETED for container 
container_1587132747259_0003_01_09 taskAttempt 
attempt_1587132747259_0003_m_07_0
2020-04-17 15:16:27,384 [ContainerLauncher #8] INFO  
launcher.ContainerLauncherImpl (ContainerLauncherImpl.java:run(382)) - 
Processing the event EventType: CONTAINER_COMPLETED for container 
container_1587132747259_0003_01_10 taskAttempt 
attempt_1587132747259_0003_m_08_0
2020-04-17 15:16:27,437 [IPC Server handler 13 on default port 49732] INFO  
mapred.TaskAttemptListenerImpl (TaskAttemptListenerImpl.java:resetLog(685)) - 
Progress of TaskAttempt attempt_1587132747259_0003_m_06_0 is : 1.0
2020-04-17 15:16:27,441 [IPC Server handler 12 on default port 49732] INFO  
mapred.TaskAttemptListenerImpl (TaskAttemptListenerImpl.java:done(283)) - Done 
acknowledgment from attempt_1587132747259_0003_m_06_0
2020-04-17 15:16:27,441 [AsyncDispatcher event handler] INFO  
impl.TaskAttemptImpl (TaskAttemptImpl.java:handle(1391)) - 
attempt_1587132747259_0003_m_06_0 TaskAttempt Transitioned from 
COMMIT_PENDING to SUCCESS_FINISHING_CONTAINER
2020-04-17 15:16:27,442 [AsyncDispatcher event handler] INFO  impl.TaskImpl 
(TaskImpl.java:sendTaskSucceededEvents(759)) - Task succeeded with attempt 
attempt_1587132747259_0003_m_06_0
2020-04-17 15:16:27,443 [AsyncDispatcher event handler] INFO  impl.TaskImpl 
(TaskImpl.java:transition(963)) - Issuing kill to other attempt 
attempt_1587132747259_0003_m_06_1
2020-04-17 15:16:27,444 [AsyncDispatcher event handler] INFO  impl.TaskImpl 
(TaskImpl.java:handle(665)) - task_1587132747259_0003_m_06 Task 
Transitioned from RUNNING to SUCCEEDED
2020-04-17 15:16:27,444 [AsyncDispatcher event handler] INFO  impl.JobImpl 
(JobImpl.java:transition(1979)) - Num completed Tasks: 10
2020-04-17 15:16:27,445 [AsyncDispatcher event handler] INFO  impl.JobImpl 
(JobImpl.java:handle(1020)) - job_1587132747259_0003Job Transitioned from 
RUNNING to COMMITTING
2020-04-17 15:16:27,446 [AsyncDispatcher event handler] INFO  
impl.TaskAttemptImpl (TaskAttemptImpl.java:handle(1391)) - 
attempt_1587132747259_0003_m_06_1 TaskAttempt Transitioned from RUNNING to 
KILL_CONTAINER_CLEANUP
2020-04-17 15:16:27,446 [ContainerLauncher #4] INFO  
launcher.ContainerLauncherImpl (ContainerLauncherImpl.java:run(382)) - 
Processing the event EventType: CONTAINER_REMOTE_CLEANUP for container 

[jira] [Updated] (HADOOP-16798) job commit failure in S3A MR test, executor rejected submission

2020-04-17 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16798:

Attachment: stdout

> job commit failure in S3A MR test, executor rejected submission
> ---
>
> Key: HADOOP-16798
> URL: https://issues.apache.org/jira/browse/HADOOP-16798
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: stdout
>
>
> failure in 
> {code}
> ITestS3ACommitterMRJob.test_200_execute:304->Assert.fail:88 Job 
> job_1578669113137_0003 failed in state FAILED with cause Job commit failed: 
> java.util.concurrent.RejectedExecutionException: Task 
> java.util.concurrent.FutureTask@6e894de2 rejected from 
> org.apache.hadoop.util.concurrent.HadoopThreadPoolExecutor@225eed53[Terminated,
>  pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0]
> {code}
> Stack implies thread pool rejected it, but toString says "Terminated". Race 
> condition?






[jira] [Assigned] (HADOOP-1540) Support file exclusion list in distcp

2020-04-17 Thread Steven Rand (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-1540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Rand reassigned HADOOP-1540:
---

Assignee: (was: Steven Rand)

> Support file exclusion list in distcp
> -
>
> Key: HADOOP-1540
> URL: https://issues.apache.org/jira/browse/HADOOP-1540
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Affects Versions: 2.6.0
>Reporter: Senthil Subramanian
>Priority: Minor
>  Labels: BB2015-05-TBR, distcp
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HADOOP-1540.009.patch
>
>
> There should be a way to ignore specific paths (eg: those that have already 
> been copied over under the current srcPath). 






[jira] [Updated] (HADOOP-16977) in javaApi, UGI params should be overidden through FileSystem conf

2020-04-17 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16977:

Resolution: Won't Fix
Status: Resolved  (was: Patch Available)

> in javaApi, UGI params should be overidden through FileSystem conf
> --
>
> Key: HADOOP-16977
> URL: https://issues.apache.org/jira/browse/HADOOP-16977
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.2, 3.2.0
>Reporter: Hongbing Wang
>Priority: Major
> Attachments: HADOOP-16977.001.patch, HADOOP-16977.002.patch
>
>
> org.apache.hadoop.security.UserGroupInformation#ensureInitialized will always 
> get the configuration from the configuration files, as below:
> {code:java}
> private static void ensureInitialized() {
>   if (conf == null) {
> synchronized(UserGroupInformation.class) {
>   if (conf == null) { // someone might have beat us
> initialize(new Configuration(), false);
>   }
> }
>   }
> }{code}
> As a result, if a FileSystem is created through FileSystem#get or 
> FileSystem#newInstance with a conf, conf values that differ from the 
> configuration files will not take effect in UserGroupInformation. E.g.:
> {code:java}
> Configuration conf = new Configuration();
> conf.set("k1","v1");
> conf.set("k2","v2");
> FileSystem fs = FileSystem.get(uri, conf);{code}
> "k1" or "k2" will not work in UserGroupInformation.






[jira] [Commented] (HADOOP-16977) in javaApi, UGI params should be overidden through FileSystem conf

2020-04-17 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17085828#comment-17085828
 ] 

Steve Loughran commented on HADOOP-16977:
-

# There's some deep complexity here related to UGI launch, dynamically loaded 
resources (HdfsConfiguration), YarnConfiguration, ...
# UGI and Kerberos are among the most sensitive parts of the system: they are 
low-level, foundational security code, they use Java APIs of which we are 
probably the heaviest users, and they integrate with native OS libraries, 
configs and remote services.
# UGI static fields are shared across all threads and all UGI instances in a 
process, including multitenant processes (hive, ...)
# We are scared of UGI and of changes to it.


We can't change the config on a thread because that would affect every other 
thread. If you look at its uses today, other than in tests, we only use it at 
process launch, before any attempt to talk to remote services or to start 
offering services is kicked off.


The best practice for creating accounts for a user is to create a new UGI, then 
do ugi.doAs() { FileSystem.get(URI, conf) }

So, WONTFIX I'm afraid. Sorry.
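The recommended doAs pattern can be sketched like this. It is a minimal illustration, assuming a simple remote (non-Kerberos) user; the user name and URI are placeholders, and real deployments may need `createProxyUser` or keytab-based login instead:

```java
import java.net.URI;
import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.security.UserGroupInformation;

public class DoAsExample {
    // Build a UGI for the given user and resolve the FileSystem inside
    // doAs, so the filesystem instance is bound to that user's context
    // rather than to the process-wide static UGI configuration.
    public static FileSystem openAs(String user, URI uri, Configuration conf)
            throws Exception {
        UserGroupInformation ugi = UserGroupInformation.createRemoteUser(user);
        return ugi.doAs(
            (PrivilegedExceptionAction<FileSystem>) () ->
                FileSystem.get(uri, conf));
    }
}
```

Because the per-user state lives in the UGI instance rather than in the shared static configuration, this avoids mutating state that every other thread in a multitenant process depends on.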



> in javaApi, UGI params should be overidden through FileSystem conf
> --
>
> Key: HADOOP-16977
> URL: https://issues.apache.org/jira/browse/HADOOP-16977
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.2, 3.2.0
>Reporter: Hongbing Wang
>Priority: Major
> Attachments: HADOOP-16977.001.patch, HADOOP-16977.002.patch
>
>
> org.apache.hadoop.security.UserGroupInformation#ensureInitialized will always 
> get the configuration from the configuration files, as below:
> {code:java}
> private static void ensureInitialized() {
>   if (conf == null) {
> synchronized(UserGroupInformation.class) {
>   if (conf == null) { // someone might have beat us
> initialize(new Configuration(), false);
>   }
> }
>   }
> }{code}
> As a result, if a FileSystem is created through FileSystem#get or 
> FileSystem#newInstance with a conf, conf values that differ from the 
> configuration files will not take effect in UserGroupInformation. E.g.:
> {code:java}
> Configuration conf = new Configuration();
> conf.set("k1","v1");
> conf.set("k2","v2");
> FileSystem fs = FileSystem.get(uri, conf);{code}
> "k1" or "k2" will not work in UserGroupInformation.






[jira] [Updated] (HADOOP-16977) in javaApi, UGI params should be overidden through FileSystem conf

2020-04-17 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16977:

Component/s: (was: common)
 security

> in javaApi, UGI params should be overidden through FileSystem conf
> --
>
> Key: HADOOP-16977
> URL: https://issues.apache.org/jira/browse/HADOOP-16977
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.2, 3.2.0
>Reporter: Hongbing Wang
>Priority: Major
> Attachments: HADOOP-16977.001.patch, HADOOP-16977.002.patch
>
>
> org.apache.hadoop.security.UserGroupInformation#ensureInitialized will always 
> get the configuration from the configuration files, as below:
> {code:java}
> private static void ensureInitialized() {
>   if (conf == null) {
> synchronized(UserGroupInformation.class) {
>   if (conf == null) { // someone might have beat us
> initialize(new Configuration(), false);
>   }
> }
>   }
> }{code}
> As a result, if a FileSystem is created through FileSystem#get or 
> FileSystem#newInstance with a conf, conf values that differ from the 
> configuration files will not take effect in UserGroupInformation. E.g.:
> {code:java}
> Configuration conf = new Configuration();
> conf.set("k1","v1");
> conf.set("k2","v2");
> FileSystem fs = FileSystem.get(uri, conf);{code}
> "k1" or "k2" will not work in UserGroupInformation.






[jira] [Updated] (HADOOP-16959) Resolve hadoop-cos dependency conflict

2020-04-17 Thread YangY (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

YangY updated HADOOP-16959:
---
Attachment: HADOOP-16959-branch-3.3.004.patch

> Resolve hadoop-cos dependency conflict
> --
>
> Key: HADOOP-16959
> URL: https://issues.apache.org/jira/browse/HADOOP-16959
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, fs/cos
>Reporter: YangY
>Assignee: YangY
>Priority: Major
> Attachments: HADOOP-16959-branch-3.3.001.patch, 
> HADOOP-16959-branch-3.3.002.patch, HADOOP-16959-branch-3.3.003.patch, 
> HADOOP-16959-branch-3.3.004.patch
>
>
> There are some dependency conflicts between hadoop-common and hadoop-cos, 
> for example the Joda-Time and HTTP client libraries.






[jira] [Resolved] (HADOOP-16994) hadoop output to ftp gives rename error on FileOutputCommitter.mergePaths

2020-04-17 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-16994.
-
Resolution: Won't Fix

> hadoop output to ftp gives rename error on FileOutputCommitter.mergePaths
> -
>
> Key: HADOOP-16994
> URL: https://issues.apache.org/jira/browse/HADOOP-16994
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Talha Azaz
>Priority: Major
>
> I'm using Spark in Kubernetes cluster mode, trying to read data from a DB 
> and write it in Parquet format to an FTP server. I'm using the Hadoop FTP 
> filesystem for writing. When the task completes, it tries to rename 
> /sensor_values/158535360/_temporary/0/_temporary/attempt_20200414075519__m_21_21/part-00021-d7cef14e-151b-4c3b-a8d8-4e9ab33e80f9-c000.snappy.parquet
> to 
> /sensor_values/158535360/part-00021-d7cef14e-151b-4c3b-a8d8-4e9ab33e80f9-c000.snappy.parquet
> But the problem is it gives the following error:
> ```
> Lost task 21.0 in stage 0.0 (TID 21, 10.233.90.137, executor 3): 
> org.apache.spark.SparkException: Task failed while writing rows.
>  at 
> org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:257)
>  at 
> org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:170)
>  at 
> org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:169)
>  at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
>  at org.apache.spark.scheduler.Task.run(Task.scala:123)
>  at 
> org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
>  at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
>  at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748)
> Caused by: java.io.IOException: Cannot rename source: 
> ftp://user:pass@host/sensor_values/158535360/_temporary/0/_temporary/attempt_20200414075519__m_21_21/part-00021-d7cef14e-151b-4c3b-a8d8-4e9ab33e80f9-c000.snappy.parquet
>  to 
> ftp://user:pass@host/sensor_values/158535360/part-00021-d7cef14e-151b-4c3b-a8d8-4e9ab33e80f9-c000.snappy.parquet
>  -only same directory renames are supported
>  at org.apache.hadoop.fs.ftp.FTPFileSystem.rename(FTPFileSystem.java:674)
>  at org.apache.hadoop.fs.ftp.FTPFileSystem.rename(FTPFileSystem.java:613)
>  at 
> org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.mergePaths(FileOutputCommitter.java:472)
>  at 
> org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.mergePaths(FileOutputCommitter.java:486)
>  at 
> org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.commitTask(FileOutputCommitter.java:597)
>  at 
> org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.commitTask(FileOutputCommitter.java:560)
>  at 
> org.apache.spark.mapred.SparkHadoopMapRedUtil$.performCommit$1(SparkHadoopMapRedUtil.scala:50)
>  at 
> org.apache.spark.mapred.SparkHadoopMapRedUtil$.commitTask(SparkHadoopMapRedUtil.scala:77)
>  at 
> org.apache.spark.internal.io.HadoopMapReduceCommitProtocol.commitTask(HadoopMapReduceCommitProtocol.scala:225)
>  at 
> org.apache.spark.sql.execution.datasources.FileFormatDataWriter.commit(FileFormatDataWriter.scala:78)
>  at 
> org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:247)
>  at 
> org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:242)
>  at 
> org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1394)
>  at 
> org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:248)
>  ... 10 more
> ```
> I have done the same thing on the Azure filesystem using the same Spark and 
> Hadoop implementation. 
> Is there any configuration in Hadoop or Spark that needs to be changed, or is 
> it just not supported in the Hadoop FTP filesystem?
> Thanks a lot!!






[jira] [Commented] (HADOOP-16994) hadoop output to ftp gives rename error on FileOutputCommitter.mergePaths

2020-04-17 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17085810#comment-17085810
 ] 

Steve Loughran commented on HADOOP-16994:
-

The FileOutputCommitter uses renames to move task attempt output into a 
completed task attempt dir, then, when the job is committed, into the final 
destination. We need to do this for resilience. All filesystems for which it 
works must support atomic directory renames.

HDFS, abfs: requirements are met.
FTP doesn't support cross-directory rename: it fails.
S3A doesn't have the atomicity, and rename is O(data): we have a special 
committer for it which uses multipart upload.

Afraid FTP is not going to work. Either you implement your own committer using 
whatever limited operations FTP offers (hard-to-impossible), or use a different 
FS (NFS? some shared local mountpoint?).

Closing as wontfix.
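The constraint described above can be shown with a toy model. This assumes nothing about the real FTPFileSystem internals beyond the error message in the stack trace ("only same directory renames are supported"); the class and method names are illustrative. A commit promotes a task attempt file out of the `_temporary` tree into the destination directory, which is inherently a cross-directory rename, so a store with this restriction cannot support the committer:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class RenameCommitSketch {
    // Mimics the restriction reported by FTPFileSystem.rename:
    // only renames within a single directory are allowed.
    static void restrictedRename(Path src, Path dst) throws IOException {
        if (!src.toAbsolutePath().getParent()
                .equals(dst.toAbsolutePath().getParent())) {
            throw new IOException(
                "only same directory renames are supported");
        }
        Files.move(src, dst);
    }

    // A commit moves _temporary output into the final destination
    // directory, i.e. across directories, so it must fail here.
    public static void commit(Path attemptFile, Path destDir)
            throws IOException {
        restrictedRename(attemptFile,
            destDir.resolve(attemptFile.getFileName()));
    }
}
```

This is why the failure surfaces inside FileOutputCommitter.mergePaths: the commit protocol itself is rename-based, independent of the output format.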

> hadoop output to ftp gives rename error on FileOutputCommitter.mergePaths
> -
>
> Key: HADOOP-16994
> URL: https://issues.apache.org/jira/browse/HADOOP-16994
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Talha Azaz
>Priority: Major
>
> I'm using Spark in Kubernetes cluster mode, trying to read data from a DB 
> and write it in Parquet format to an FTP server. I'm using the Hadoop FTP 
> filesystem for writing. When the task completes, it tries to rename 
> /sensor_values/158535360/_temporary/0/_temporary/attempt_20200414075519__m_21_21/part-00021-d7cef14e-151b-4c3b-a8d8-4e9ab33e80f9-c000.snappy.parquet
> to 
> /sensor_values/158535360/part-00021-d7cef14e-151b-4c3b-a8d8-4e9ab33e80f9-c000.snappy.parquet
> But the problem is it gives the following error:
> ```
> Lost task 21.0 in stage 0.0 (TID 21, 10.233.90.137, executor 3): 
> org.apache.spark.SparkException: Task failed while writing rows.
>  at 
> org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:257)
>  at 
> org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:170)
>  at 
> org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:169)
>  at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
>  at org.apache.spark.scheduler.Task.run(Task.scala:123)
>  at 
> org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
>  at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
>  at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748)
> Caused by: java.io.IOException: Cannot rename source: 
> ftp://user:pass@host/sensor_values/158535360/_temporary/0/_temporary/attempt_20200414075519__m_21_21/part-00021-d7cef14e-151b-4c3b-a8d8-4e9ab33e80f9-c000.snappy.parquet
>  to 
> ftp://user:pass@host/sensor_values/158535360/part-00021-d7cef14e-151b-4c3b-a8d8-4e9ab33e80f9-c000.snappy.parquet
>  -only same directory renames are supported
>  at org.apache.hadoop.fs.ftp.FTPFileSystem.rename(FTPFileSystem.java:674)
>  at org.apache.hadoop.fs.ftp.FTPFileSystem.rename(FTPFileSystem.java:613)
>  at 
> org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.mergePaths(FileOutputCommitter.java:472)
>  at 
> org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.mergePaths(FileOutputCommitter.java:486)
>  at 
> org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.commitTask(FileOutputCommitter.java:597)
>  at 
> org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.commitTask(FileOutputCommitter.java:560)
>  at 
> org.apache.spark.mapred.SparkHadoopMapRedUtil$.performCommit$1(SparkHadoopMapRedUtil.scala:50)
>  at 
> org.apache.spark.mapred.SparkHadoopMapRedUtil$.commitTask(SparkHadoopMapRedUtil.scala:77)
>  at 
> org.apache.spark.internal.io.HadoopMapReduceCommitProtocol.commitTask(HadoopMapReduceCommitProtocol.scala:225)
>  at 
> org.apache.spark.sql.execution.datasources.FileFormatDataWriter.commit(FileFormatDataWriter.scala:78)
>  at 
> org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:247)
>  at 
> org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:242)
>  at 
> org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1394)
>  at 
> 

[jira] [Updated] (HADOOP-16959) Resolve hadoop-cos dependency conflict

2020-04-17 Thread YangY (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

YangY updated HADOOP-16959:
---
Attachment: (was: HADOOP-16959-branch-3.3.004.patch)

> Resolve hadoop-cos dependency conflict
> --
>
> Key: HADOOP-16959
> URL: https://issues.apache.org/jira/browse/HADOOP-16959
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, fs/cos
>Reporter: YangY
>Assignee: YangY
>Priority: Major
> Attachments: HADOOP-16959-branch-3.3.001.patch, 
> HADOOP-16959-branch-3.3.002.patch, HADOOP-16959-branch-3.3.003.patch
>
>
> There are some dependency conflicts between hadoop-common and hadoop-cos, 
> for example the Joda-Time and HTTP client libraries.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16959) Resolve hadoop-cos dependency conflict

2020-04-17 Thread YangY (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

YangY updated HADOOP-16959:
---
Attachment: HADOOP-16959-branch-3.3.004.patch

> Resolve hadoop-cos dependency conflict
> --
>
> Key: HADOOP-16959
> URL: https://issues.apache.org/jira/browse/HADOOP-16959
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, fs/cos
>Reporter: YangY
>Assignee: YangY
>Priority: Major
> Attachments: HADOOP-16959-branch-3.3.001.patch, 
> HADOOP-16959-branch-3.3.002.patch, HADOOP-16959-branch-3.3.003.patch, 
> HADOOP-16959-branch-3.3.004.patch
>
>
> There are some dependency conflicts between hadoop-common and hadoop-cos, 
> for example the Joda-Time and HTTP client libraries.






[jira] [Updated] (HADOOP-16951) Tidy Up Text and ByteWritables Classes

2020-04-17 Thread David Mollitor (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Mollitor updated HADOOP-16951:

Description: 
# Remove superfluous code
 # Remove superfluous comments
 # Checkstyle fixes
 # Remove methods that simply call {{super}}.method()
 # Use Java 8 facilities to streamline code where applicable
 # Simplify and unify some of the constructs between the two classes

 

The one meaningful change is that I am suggesting the arrays expand by 1.5x 
instead of 2x per expansion. I pulled this idea from OpenJDK.

  was:
# Remove superfluous code
 # Remove superfluous comments
 # Checkstyle fixes
 # Remove methods that simply call {{super}}.method()
 # Use Java 8 facilities to streamline code where applicable
 # Simplify and unify some of the constructs between the two classes


> Tidy Up Text and ByteWritables Classes
> --
>
> Key: HADOOP-16951
> URL: https://issues.apache.org/jira/browse/HADOOP-16951
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
>
> # Remove superfluous code
>  # Remove superfluous comments
>  # Checkstyle fixes
>  # Remove methods that simply call {{super}}.method()
>  # Use Java 8 facilities to streamline code where applicable
>  # Simplify and unify some of the constructs between the two classes
>  
> The one meaningful change is that I am suggesting the arrays expand by 1.5x 
> instead of 2x per expansion. I pulled this idea from OpenJDK.
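As a rough illustration of the proposed growth policy (a hypothetical sketch, not the actual patch), 1.5x growth can be computed with a shift, the way OpenJDK's ArrayList grows its backing array:

```java
// Hypothetical sketch of 1.5x capacity growth (names are illustrative,
// not Hadoop's actual code). OpenJDK's ArrayList grows the same way:
// newCapacity = oldCapacity + (oldCapacity >> 1).
public class GrowthSketch {
    static int grow(int capacity, int required) {
        int next = capacity + (capacity >> 1); // 1.5x instead of 2x
        return Math.max(next, required);       // never below what is needed
    }
    public static void main(String[] args) {
        System.out.println(grow(16, 17)); // prints 24
        System.out.println(grow(24, 40)); // prints 40
    }
}
```

The 1.5x factor wastes less memory per expansion than doubling while still giving amortized O(1) appends.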






[jira] [Commented] (HADOOP-16951) Tidy Up Text and ByteWritables Classes

2020-04-17 Thread David Mollitor (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17085772#comment-17085772
 ] 

David Mollitor commented on HADOOP-16951:
-

[~elgoiri] Hello old friend :)

 

I was recently added to the Hadoop committers.  Thank you so much for all your 
help.

I created this Jira as a bit of tidying up a core class, but also as a first 
push for me.  Are you able to review?

 

Also, what is the process for taking a GitHub PR and merging it into trunk?

> Tidy Up Text and ByteWritables Classes
> --
>
> Key: HADOOP-16951
> URL: https://issues.apache.org/jira/browse/HADOOP-16951
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
>
> # Remove superfluous code
>  # Remove superfluous comments
>  # Checkstyle fixes
>  # Remove methods that simply call {{super}}.method()
>  # Use Java 8 facilities to streamline code where applicable
>  # Simplify and unify some of the constructs between the two classes






[jira] [Commented] (HADOOP-13873) log DNS addresses on s3a init

2020-04-17 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-13873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17085757#comment-17085757
 ] 

Hudson commented on HADOOP-13873:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18154 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18154/])
HADOOP-13873. log DNS addresses on s3a initialization. (stevel: rev 
56350664a76b1ea8e1a942a251880ae3fab12f0c)
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/NetworkBinding.java


> log DNS addresses on s3a init
> -
>
> Key: HADOOP-13873
> URL: https://issues.apache.org/jira/browse/HADOOP-13873
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Mukund Thakur
>Priority: Minor
> Fix For: 3.3.0
>
>
> HADOOP-13871 has shown that network problems can kill perf, and that it's v. 
> hard to track down, even if you turn up the logging in hadoop.fs.s3a and 
> com.amazon layers to debug.
> we could maybe improve things by printing out the IPAddress of the s3 
> endpoint, as that could help with the network tracing. Printing from within 
> hadoop shows the one given to S3a, not a different one returned by any load 
> balancer.
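The idea can be sketched as follows (illustrative only, not the committed change; "localhost" stands in for the real S3 endpoint hostname):

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

// Illustrative sketch: resolve and print every address a hostname maps to,
// the kind of information useful when tracing endpoint/network problems.
public class EndpointAddresses {
    public static void main(String[] args) throws UnknownHostException {
        for (InetAddress addr : InetAddress.getAllByName("localhost")) {
            System.out.println(addr.getHostAddress());
        }
    }
}
```

Note this shows the address handed to the client, not whatever a load balancer resolves to behind the scenes.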






[GitHub] [hadoop] steveloughran commented on issue #1875: HADOOP-16794. S3A reverts KMS encryption to the bucket's default KMS …

2020-04-17 Thread GitBox
steveloughran commented on issue #1875: HADOOP-16794. S3A reverts KMS 
encryption to the bucket's default KMS …
URL: https://github.com/apache/hadoop/pull/1875#issuecomment-615243321
 
 
   yeah, #1861 is big. I am going to see if I can get the changed getFileStatus 
code into hadoop 3.3.0 before it is frozen, so I don't have to worry about 
cross-version compatibility there. It will still be deleting directory markers 
as today, simply doing a LIST for marker & children and skipping HEAD + /


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-13873) log DNS addresses on s3a init

2020-04-17 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-13873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-13873.
-
Fix Version/s: 3.3.0
   Resolution: Fixed

> log DNS addresses on s3a init
> -
>
> Key: HADOOP-13873
> URL: https://issues.apache.org/jira/browse/HADOOP-13873
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Mukund Thakur
>Priority: Minor
> Fix For: 3.3.0
>
>
> HADOOP-13871 has shown that network problems can kill perf, and that it's v. 
> hard to track down, even if you turn up the logging in hadoop.fs.s3a and 
> com.amazon layers to debug.
> we could maybe improve things by printing out the IPAddress of the s3 
> endpoint, as that could help with the network tracing. Printing from within 
> hadoop shows the one given to S3a, not a different one returned by any load 
> balancer.






[GitHub] [hadoop] steveloughran commented on issue #1955: HADOOP-13873 log DNS addresses on s3a init

2020-04-17 Thread GitBox
steveloughran commented on issue #1955: HADOOP-13873 log DNS addresses on s3a 
init
URL: https://github.com/apache/hadoop/pull/1955#issuecomment-615240621
 
 
   merged to trunk & branch 3.3





[GitHub] [hadoop] steveloughran closed pull request #1955: HADOOP-13873 log DNS addresses on s3a init

2020-04-17 Thread GitBox
steveloughran closed pull request #1955: HADOOP-13873 log DNS addresses on s3a 
init
URL: https://github.com/apache/hadoop/pull/1955
 
 
   





[jira] [Commented] (HADOOP-16955) Umbrella Jira for improving the Hadoop-cos support in Hadoop

2020-04-17 Thread Sammi Chen (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17085734#comment-17085734
 ] 

Sammi Chen commented on HADOOP-16955:
-

[~weichiu], sure, I will try my best to work with [~yuyang733] to make sure the 
feature matures.

> Umbrella Jira for improving the Hadoop-cos support in Hadoop
> 
>
> Key: HADOOP-16955
> URL: https://issues.apache.org/jira/browse/HADOOP-16955
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/cos
>Reporter: YangY
>Assignee: YangY
>Priority: Major
> Attachments: HADOOP-16955-branch-3.3.001.patch
>
>   Original Estimate: 48h
>  Time Spent: 4h
>  Remaining Estimate: 44h
>
> This umbrella Jira focuses on fixing some known bugs and adding some important 
> features.
>  
> bugfix:
>  # resolve the dependency conflicts;
>  # fix the upload buffer failing to be returned when exceptions occur;
>  # fix the issue that a single-file upload cannot be retried;
>  # fix the bug of checking whether a file exists by listing it frequently.
> features:
>  # support SessionCredentialsProvider and InstanceCredentialsProvider, which 
> allow users to specify credentials in the URI or obtain them from the CVM 
> (Tencent Cloud Virtual Machine) bound to a CAM role that can access the COS 
> bucket;
>  # support server-side encryption based on SSE-COS and SSE-C;
>  # support HTTP proxy settings;
>  # support storage class settings;
>  # support the CRC64 checksum.






[GitHub] [hadoop] steveloughran commented on issue #1955: HADOOP-13873 log DNS addresses on s3a init

2020-04-17 Thread GitBox
steveloughran commented on issue #1955: HADOOP-13873 log DNS addresses on s3a 
init
URL: https://github.com/apache/hadoop/pull/1955#issuecomment-615237502
 
 
   LGTM; +1
   Two unused imports; I'll fix those before I apply the patch
   ```
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java:117:import
 org.apache.hadoop.net.NetUtils;:8: Unused import - 
org.apache.hadoop.net.NetUtils. [UnusedImports]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/NetworkBinding.java:24:import
 java.net.InetAddress;:8: Unused import - java.net.InetAddress. [UnusedImports]
   ```





[jira] [Commented] (HADOOP-16995) ITestS3AConfiguration proxy tests fail when bucket probes == 0

2020-04-17 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17085729#comment-17085729
 ] 

Steve Loughran commented on HADOOP-16995:
-

[~mukund-thakur] afraid this is related to the bucket probe changes, so it's 
your homework.

> ITestS3AConfiguration proxy tests fail when bucket probes == 0
> --
>
> Key: HADOOP-16995
> URL: https://issues.apache.org/jira/browse/HADOOP-16995
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Mukund Thakur
>Priority: Minor
>
> when bucket probes are disabled, proxy config tests in ITestS3AConfiguration 
> fail because the probes aren't being attempted in initialize()
> {code}
> <property>
>   <name>fs.s3a.bucket.probe</name>
>   <value>0</value>
> </property>
> {code}
> Cause: HADOOP-16711
> Fix: call unsetBaseAndBucketOverrides for bucket probe in test conf, then set 
> the probe value to 2 just to be resilient to future default changes.






[jira] [Created] (HADOOP-16995) ITestS3AConfiguration proxy tests fail when bucket probes == 0

2020-04-17 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-16995:
---

 Summary: ITestS3AConfiguration proxy tests fail when bucket probes 
== 0
 Key: HADOOP-16995
 URL: https://issues.apache.org/jira/browse/HADOOP-16995
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3, test
Affects Versions: 3.3.0
Reporter: Steve Loughran
Assignee: Mukund Thakur



when bucket probes are disabled, proxy config tests in ITestS3AConfiguration 
fail because the probes aren't being attempted in initialize()

{code}
<property>
  <name>fs.s3a.bucket.probe</name>
  <value>0</value>
</property>
{code}

Cause: HADOOP-16711
Fix: call unsetBaseAndBucketOverrides for bucket probe in test conf, then set 
the probe value to 2 just to be resilient to future default changes.







[GitHub] [hadoop] steveloughran commented on issue #1962: HADOOP-16953. tuning s3guard disabled warnings

2020-04-17 Thread GitBox
steveloughran commented on issue #1962: HADOOP-16953. tuning s3guard disabled 
warnings
URL: https://github.com/apache/hadoop/pull/1962#issuecomment-615233118
 
 
   Tests against London are all good, apart from a failure in 
`ITestS3AConfiguration` where an invalid port config isn't failing initialize; 
the cause is me disabling bucket existence checks. Will file a separate JIRA.
   
   ```
   mvit -Dparallel-tests -DtestsThreadCount=8 -Dscale -Dmarkers=keep -Dauth
   ```





[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1962: HADOOP-16953. tuning s3guard disabled warnings

2020-04-17 Thread GitBox
hadoop-yetus removed a comment on issue #1962: HADOOP-16953. tuning s3guard 
disabled warnings
URL: https://github.com/apache/hadoop/pull/1962#issuecomment-614923205
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 48s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  23m 11s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 36s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 28s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 39s |  trunk passed  |
   | -1 :x: |  shadedclient  |  17m  1s |  branch has errors when building and 
testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 26s |  hadoop-aws in trunk failed.  |
   | +0 :ok: |  spotbugs  |  17m 57s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | -1 :x: |  findbugs  |   0m 27s |  hadoop-aws in trunk failed.  |
   ||| _ Patch Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 24s |  hadoop-aws in the patch failed.  |
   | -1 :x: |  compile  |   0m 21s |  hadoop-aws in the patch failed.  |
   | -1 :x: |  javac  |   0m 21s |  hadoop-aws in the patch failed.  |
   | -0 :warning: |  checkstyle  |   0m 20s |  The patch fails to run 
checkstyle in hadoop-aws  |
   | -1 :x: |  mvnsite  |   0m 22s |  hadoop-aws in the patch failed.  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch has 1 line(s) that end in 
whitespace. Use git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  shadedclient  |   0m 20s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 22s |  hadoop-aws in the patch failed.  |
   | -1 :x: |  findbugs  |   0m 22s |  hadoop-aws in the patch failed.  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   0m 22s |  hadoop-aws in the patch failed.  |
   | +0 :ok: |  asflicense  |   0m 22s |  ASF License check generated no 
output?  |
   |  |   |  48m 36s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.8 Server=19.03.8 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1962/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1962 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle markdownlint |
   | uname | Linux 807e12fd984b 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 8505840 |
   | Default Java | 1.8.0_242 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1962/1/artifact/out/branch-javadoc-hadoop-tools_hadoop-aws.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1962/1/artifact/out/branch-findbugs-hadoop-tools_hadoop-aws.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1962/1/artifact/out/patch-mvninstall-hadoop-tools_hadoop-aws.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1962/1/artifact/out/patch-compile-hadoop-tools_hadoop-aws.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1962/1/artifact/out/patch-compile-hadoop-tools_hadoop-aws.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1962/1/artifact/out//home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-1962/out/maven-patch-checkstyle-hadoop-tools_hadoop-aws.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1962/1/artifact/out/patch-mvnsite-hadoop-tools_hadoop-aws.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1962/1/artifact/out/whitespace-eol.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1962/1/artifact/out/patch-javadoc-hadoop-tools_hadoop-aws.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1962/1/artifact/out/patch-findbugs-hadoop-tools_hadoop-aws.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1962/1/artifact/out/patch-unit-hadoop-tools_hadoop-aws.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1962/1/testReport/ |
   | Max. process+thread count | 94 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | 

[GitHub] [hadoop] steveloughran commented on issue #1962: HADOOP-16953. tuning s3guard disabled warnings

2020-04-17 Thread GitBox
steveloughran commented on issue #1962: HADOOP-16953. tuning s3guard disabled 
warnings
URL: https://github.com/apache/hadoop/pull/1962#issuecomment-615230083
 
 
   next iteration has storediag fixed to report ExitExceptions without the 
stack, and S3AInstrumentation to check for null metrics in shutdown
   
   ```
   2020-04-17 13:43:39,020 [main] ERROR s3a.S3AFileSystem 
(S3Guard.java:logS3GuardDisabled(1100)) - S3Guard is disabled on this bucket: 
stevel-london
   2020-04-17 13:43:39,021 [main] DEBUG s3a.S3AFileSystem 
(HadoopExecutors.java:shutdown(118)) - Gracefully shutting down executor 
service. Waiting max 30 SECONDS
   2020-04-17 13:43:39,021 [main] DEBUG s3a.S3AFileSystem 
(HadoopExecutors.java:shutdown(129)) - Succesfully shutdown executor service
   2020-04-17 13:43:39,021 [main] DEBUG s3a.S3AFileSystem 
(HadoopExecutors.java:shutdown(118)) - Gracefully shutting down executor 
service. Waiting max 30 SECONDS
   2020-04-17 13:43:39,021 [main] DEBUG s3a.S3AFileSystem 
(HadoopExecutors.java:shutdown(129)) - Succesfully shutdown executor service
   2020-04-17 13:43:39,021 [main] DEBUG s3a.S3AInstrumentation 
(S3AInstrumentation.java:close(628)) - Shutting down metrics publisher
   2020-04-17 13:43:39,021 [main] DEBUG delegation.S3ADelegationTokens 
(S3ADelegationTokens.java:serviceStop(225)) - Stopping delegation tokens
   2020-04-17 13:43:39,023 [main] DEBUG auth.SignerManager 
(SignerManager.java:close(140)) - Unregistering fs from 0 initializers
   2020-04-17 13:43:39,023 [main] DEBUG s3a.S3AFileSystem 
(S3AUtils.java:closeAutocloseables(1666)) - Closing 
AWSCredentialProviderList[refcount= 1: [TemporaryAWSCredentialsProvider, 
SimpleAWSCredentialsProvider, EnvironmentVariableCredentialsProvider, 
org.apache.hadoop.fs.s3a.auth.IAMInstanceCredentialsProvider@3d246ea3]
   2020-04-17 13:43:39,023 [main] DEBUG s3a.AWSCredentialProviderList 
(AWSCredentialProviderList.java:close(315)) - Closing 
AWSCredentialProviderList[refcount= 0: [TemporaryAWSCredentialsProvider, 
SimpleAWSCredentialsProvider, EnvironmentVariableCredentialsProvider, 
org.apache.hadoop.fs.s3a.auth.IAMInstanceCredentialsProvider@3d246ea3]
   2020-04-17 13:43:39,023 [main] WARN  fs.FileSystem 
(FileSystem.java:createFileSystem(3418)) - Failed to initialize fileystem 
s3a://stevel-london/: 46: S3Guard is disabled on this bucket: stevel-london
   2020-04-17 13:43:39,024 [main] DEBUG s3a.S3AFileSystem 
(S3AFileSystem.java:close(3269)) - Filesystem s3a://stevel-london is closed
   2020-04-17 13:43:39,024 [main] INFO  diag.StoreDiag 
(DurationInfo.java:close(100)) - Creating filesystem s3a://stevel-london/: 
duration 0:00:914
   2020-04-17 13:43:39,025 [main] INFO  util.ExitUtil 
(ExitUtil.java:terminate(210)) - Exiting with status 46: S3Guard is disabled on 
this bucket: stevel-london
   ```
   
   No more stack traces





[jira] [Created] (HADOOP-16994) hadoop output to ftp gives rename error on FileOutputCommitter.mergePaths

2020-04-17 Thread Talha Azaz (Jira)
Talha Azaz created HADOOP-16994:
---

 Summary: hadoop output to ftp gives rename error on 
FileOutputCommitter.mergePaths
 Key: HADOOP-16994
 URL: https://issues.apache.org/jira/browse/HADOOP-16994
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Reporter: Talha Azaz


I'm using Spark in Kubernetes cluster mode, reading data from a DB and writing 
it in Parquet format to an FTP server. I'm using the Hadoop FTP filesystem for 
writing. When the task completes, it tries to rename 
/sensor_values/158535360/_temporary/0/_temporary/attempt_20200414075519__m_21_21/part-00021-d7cef14e-151b-4c3b-a8d8-4e9ab33e80f9-c000.snappy.parquet
to 
/sensor_values/158535360/part-00021-d7cef14e-151b-4c3b-a8d8-4e9ab33e80f9-c000.snappy.parquet

But the problem is it gives the following error:

```
Lost task 21.0 in stage 0.0 (TID 21, 10.233.90.137, executor 3): 
org.apache.spark.SparkException: Task failed while writing rows.
 at 
org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:257)
 at 
org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:170)
 at 
org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:169)
 at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
 at org.apache.spark.scheduler.Task.run(Task.scala:123)
 at 
org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
 at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
 at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
 at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: Cannot rename source: 
ftp://user:pass@host/sensor_values/158535360/_temporary/0/_temporary/attempt_20200414075519__m_21_21/part-00021-d7cef14e-151b-4c3b-a8d8-4e9ab33e80f9-c000.snappy.parquet
 to 
ftp://user:pass@host/sensor_values/158535360/part-00021-d7cef14e-151b-4c3b-a8d8-4e9ab33e80f9-c000.snappy.parquet
 -only same directory renames are supported
 at org.apache.hadoop.fs.ftp.FTPFileSystem.rename(FTPFileSystem.java:674)
 at org.apache.hadoop.fs.ftp.FTPFileSystem.rename(FTPFileSystem.java:613)
 at 
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.mergePaths(FileOutputCommitter.java:472)
 at 
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.mergePaths(FileOutputCommitter.java:486)
 at 
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.commitTask(FileOutputCommitter.java:597)
 at 
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.commitTask(FileOutputCommitter.java:560)
 at 
org.apache.spark.mapred.SparkHadoopMapRedUtil$.performCommit$1(SparkHadoopMapRedUtil.scala:50)
 at 
org.apache.spark.mapred.SparkHadoopMapRedUtil$.commitTask(SparkHadoopMapRedUtil.scala:77)
 at 
org.apache.spark.internal.io.HadoopMapReduceCommitProtocol.commitTask(HadoopMapReduceCommitProtocol.scala:225)
 at 
org.apache.spark.sql.execution.datasources.FileFormatDataWriter.commit(FileFormatDataWriter.scala:78)
 at 
org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:247)
 at 
org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:242)
 at 
org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1394)
 at 
org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:248)
 ... 10 more
```

I have done the same thing on the Azure filesystem using the same Spark and 
Hadoop implementation. 
Is there any configuration in Hadoop or Spark that needs to be changed, or is 
it just not supported in the Hadoop FTP filesystem?
Thanks a lot!
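For context, the restriction in the trace above can be sketched as follows (a simplified, hypothetical check, not FTPFileSystem's actual code; the paths are shortened for illustration):

```java
import java.nio.file.Paths;

// Simplified sketch of the restriction behind "only same directory renames
// are supported": the FTP filesystem renames only within one parent
// directory, but the commit step renames a file across directories.
public class SameDirRename {
    static boolean sameDirectory(String src, String dst) {
        return Paths.get(src).getParent().equals(Paths.get(dst).getParent());
    }
    public static void main(String[] args) {
        String src = "/sensor_values/158535360/_temporary/0/_temporary"
                + "/attempt_20200414075519__m_21_21/part-00021.snappy.parquet";
        String dst = "/sensor_values/158535360/part-00021.snappy.parquet";
        System.out.println(sameDirectory(src, dst)); // prints false
    }
}
```

Since the parents differ, the commit-time rename is rejected, which is why the same job succeeds on filesystems that support cross-directory renames.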






[jira] [Commented] (HADOOP-16959) Resolve hadoop-cos dependency conflict

2020-04-17 Thread YangY (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17085664#comment-17085664
 ] 

YangY commented on HADOOP-16959:


The findbugs hint may be wrong.

This error does not exist in the patch.

> Resolve hadoop-cos dependency conflict
> --
>
> Key: HADOOP-16959
> URL: https://issues.apache.org/jira/browse/HADOOP-16959
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, fs/cos
>Reporter: YangY
>Assignee: YangY
>Priority: Major
> Attachments: HADOOP-16959-branch-3.3.001.patch, 
> HADOOP-16959-branch-3.3.002.patch, HADOOP-16959-branch-3.3.003.patch
>
>
> There are some dependency conflicts between hadoop-common and hadoop-cos, 
> for example the Joda-Time and HTTP client libraries.






[GitHub] [hadoop] steveloughran commented on issue #1962: HADOOP-16953. tuning s3guard disabled warnings

2020-04-17 Thread GitBox
steveloughran commented on issue #1962: HADOOP-16953. tuning s3guard disabled 
warnings
URL: https://github.com/apache/hadoop/pull/1962#issuecomment-615194730
 
 
   did a full release and storediag test run with the -D fail option; triggers 
an NPE in instrumentation shut down. Will fix
   
   ```
   2020-04-17 12:27:09,475 [main] ERROR s3a.S3AFileSystem 
(S3Guard.java:logS3GuardDisabled(1100)) - S3Guard is disabled on this bucket: 
stevel-london
   2020-04-17 12:27:09,476 [main] DEBUG s3a.S3AFileSystem 
(HadoopExecutors.java:shutdown(118)) - Gracefully shutting down executor 
service. Waiting max 30 SECONDS
   2020-04-17 12:27:09,476 [main] DEBUG s3a.S3AFileSystem 
(HadoopExecutors.java:shutdown(129)) - Succesfully shutdown executor service
   2020-04-17 12:27:09,476 [main] DEBUG s3a.S3AFileSystem 
(HadoopExecutors.java:shutdown(118)) - Gracefully shutting down executor 
service. Waiting max 30 SECONDS
   2020-04-17 12:27:09,476 [main] DEBUG s3a.S3AFileSystem 
(HadoopExecutors.java:shutdown(129)) - Succesfully shutdown executor service
   2020-04-17 12:27:09,476 [main] DEBUG s3a.S3AInstrumentation 
(S3AInstrumentation.java:close(627)) - Shutting down metrics publisher
   2020-04-17 12:27:09,477 [main] DEBUG delegation.S3ADelegationTokens 
(S3ADelegationTokens.java:serviceStop(225)) - Stopping delegation tokens
   2020-04-17 12:27:09,480 [main] DEBUG auth.SignerManager 
(SignerManager.java:close(140)) - Unregistering fs from 0 initializers
   2020-04-17 12:27:09,480 [main] DEBUG s3a.S3AFileSystem 
(S3AUtils.java:closeAutocloseables(1666)) - Closing 
AWSCredentialProviderList[refcount= 1: [TemporaryAWSCredentialsProvider, 
SimpleAWSCredentialsProvider, EnvironmentVariableCredentialsProvider, 
org.apache.hadoop.fs.s3a.auth.IAMInstanceCredentialsProvider@3d246ea3]
   2020-04-17 12:27:09,480 [main] DEBUG s3a.AWSCredentialProviderList 
(AWSCredentialProviderList.java:close(315)) - Closing 
AWSCredentialProviderList[refcount= 0: [TemporaryAWSCredentialsProvider, 
SimpleAWSCredentialsProvider, EnvironmentVariableCredentialsProvider, 
org.apache.hadoop.fs.s3a.auth.IAMInstanceCredentialsProvider@3d246ea3]
   2020-04-17 12:27:09,481 [main] WARN  fs.FileSystem 
(FileSystem.java:createFileSystem(3418)) - Failed to initialize fileystem 
s3a://stevel-london/: 49: S3Guard is disabled on this bucket: stevel-london
   2020-04-17 12:27:09,481 [main] DEBUG s3a.S3AFileSystem 
(S3AFileSystem.java:close(3269)) - Filesystem s3a://stevel-london is closed
   2020-04-17 12:27:09,482 [main] DEBUG s3a.S3AFileSystem 
(IOUtils.java:cleanupWithLogger(283)) - Exception in closing 
org.apache.hadoop.fs.s3a.S3AInstrumentation@f415a95
   java.lang.NullPointerException
at 
org.apache.hadoop.fs.s3a.S3AInstrumentation.close(S3AInstrumentation.java:624)
at org.apache.hadoop.io.IOUtils.cleanupWithLogger(IOUtils.java:280)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.stopAllServices(S3AFileSystem.java:3299)
at org.apache.hadoop.fs.s3a.S3AFileSystem.close(S3AFileSystem.java:3273)
at org.apache.hadoop.io.IOUtils.cleanupWithLogger(IOUtils.java:280)
at 
org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3423)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:158)
at 
org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3474)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3442)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:524)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:365)
at 
org.apache.hadoop.fs.store.diag.StoreDiag.executeFileSystemOperations(StoreDiag.java:860)
at org.apache.hadoop.fs.store.diag.StoreDiag.run(StoreDiag.java:409)
at org.apache.hadoop.fs.store.diag.StoreDiag.run(StoreDiag.java:353)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
at org.apache.hadoop.fs.store.diag.StoreDiag.exec(StoreDiag.java:1166)
at org.apache.hadoop.fs.store.diag.StoreDiag.main(StoreDiag.java:1175)
at storediag.main(storediag.java:25)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:323)
at org.apache.hadoop.util.RunJar.main(RunJar.java:236)
   2020-04-17 12:27:09,484 [main] INFO  diag.StoreDiag 
(DurationInfo.java:close(100)) - Creating filesystem s3a://stevel-london/: 
duration 0:01:715
   49: S3Guard is disabled on this bucket: stevel-london
at 
org.apache.hadoop.fs.s3a.s3guard.S3Guard.logS3GuardDisabled(S3Guard.java:1101)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:446)
at 

[jira] [Commented] (HADOOP-16959) Resolve hadoop-cos dependency conflict

2020-04-17 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17085653#comment-17085653
 ] 

Hadoop QA commented on HADOOP-16959:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
42s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-3.3 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
9s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
56s{color} | {color:green} branch-3.3 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
59s{color} | {color:green} branch-3.3 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
34s{color} | {color:green} branch-3.3 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
20s{color} | {color:green} branch-3.3 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
19m  2s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-cloud-storage-project/hadoop-cloud-storage {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
38s{color} | {color:red} hadoop-cloud-storage-project/hadoop-cos in branch-3.3 
has 5 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
17s{color} | {color:green} branch-3.3 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
5s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 31s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-cloud-storage-project/hadoop-cloud-storage {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
45s{color} | {color:green} hadoop-cloud-storage-project/hadoop-cos generated 0 
new + 1 unchanged - 4 fixed = 1 total (was 5) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
23s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
29s{color} | {color:green} hadoop-cos in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
23s{color} | {color:green} hadoop-cloud-storage in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
45s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}100m 30s{color} 

[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1679: HDFS-13934. Multipart uploaders to be created through FileSystem/FileContext.

2020-04-17 Thread GitBox
hadoop-yetus removed a comment on issue #1679: HDFS-13934. Multipart uploaders 
to be created through FileSystem/FileContext.
URL: https://github.com/apache/hadoop/pull/1679#issuecomment-593170640
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 13s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
4 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   1m 19s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  21m 40s |  trunk passed  |
   | +1 :green_heart: |  compile  |  17m 44s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   2m 56s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m  5s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m 48s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m  4s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   1m  6s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   5m 36s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 19s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  9s |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m  8s |  the patch passed  |
   | +1 :green_heart: |  javac  |  17m  8s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   2m 51s |  root: The patch generated 31 new 
+ 308 unchanged - 3 fixed = 339 total (was 311)  |
   | +1 :green_heart: |  mvnsite  |   3m  5s |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch has 6 line(s) that end in 
whitespace. Use git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  shadedclient  |  15m 29s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 27s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   6m  3s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   9m 18s |  hadoop-common in the patch passed.  |
   | +1 :green_heart: |  unit  |   2m  9s |  hadoop-hdfs-client in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   1m 33s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 46s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 140m 37s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.fs.TestFilterFs |
   |   | hadoop.security.TestFixKerberosTicketOrder |
   |   | hadoop.fs.TestHarFileSystem |
   |   | hadoop.fs.TestFilterFileSystem |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.6 Server=19.03.6 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1679/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1679 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle markdownlint |
   | uname | Linux deb975b2d88c 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 1a636da |
   | Default Java | 1.8.0_242 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1679/4/artifact/out/diff-checkstyle-root.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1679/4/artifact/out/whitespace-eol.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1679/4/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1679/4/testReport/ |
   | Max. process+thread count | 1341 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common 
hadoop-hdfs-project/hadoop-hdfs-client hadoop-tools/hadoop-aws U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1679/4/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

[jira] [Commented] (HADOOP-16944) Use Yetus 0.12.0-SNAPSHOT for precommit jobs

2020-04-17 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17085594#comment-17085594
 ] 

Akira Ajisaka commented on HADOOP-16944:


Yetus 0.12.0 has been released.
Hi [~ayushtkn], would you check the PR?

> Use Yetus 0.12.0-SNAPSHOT for precommit jobs
> 
>
> Key: HADOOP-16944
> URL: https://issues.apache.org/jira/browse/HADOOP-16944
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>
> HADOOP-16054 wants to upgrade the ubuntu version of the docker image from 
> 16.04 to 18.04. However, ubuntu 18.04 brings maven 3.6.0 by default and the 
> pre-commit jobs fail to add comments to GitHub and JIRA. The issue was fixed 
> by YETUS-957 and upgrading the Yetus version to 0.12.0-SNAPSHOT (or 0.12.0, 
> if released) will fix the problem.
> How to upgrade Yetus version in the pre-commit jobs:
> * GitHub PR (hadoop-multibranch): Upgrade Jenkinsfile
> * JIRA (PreCommit--Build): Manually update the config in builds.apache.org
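For the GitHub PR side, the Jenkinsfile change would look roughly like this (a hypothetical fragment; the real pipeline's variable names and pinned release tag may differ):

```groovy
// Hypothetical Jenkinsfile fragment: pin the Yetus release used by precommit
environment {
    YETUS_VERSION = 'rel/0.12.0'  // previously 0.12.0-SNAPSHOT
}
```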






[jira] [Updated] (HADOOP-16959) Resolve hadoop-cos dependency conflict

2020-04-17 Thread YangY (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

YangY updated HADOOP-16959:
---
Attachment: HADOOP-16959-branch-3.3.003.patch

> Resolve hadoop-cos dependency conflict
> --
>
> Key: HADOOP-16959
> URL: https://issues.apache.org/jira/browse/HADOOP-16959
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, fs/cos
>Reporter: YangY
>Assignee: YangY
>Priority: Major
> Attachments: HADOOP-16959-branch-3.3.001.patch, 
> HADOOP-16959-branch-3.3.002.patch, HADOOP-16959-branch-3.3.003.patch
>
>
> There are some dependency conflicts between hadoop-common and hadoop-cos, for 
> example the Joda-Time and HTTP client libraries.






[GitHub] [hadoop] hadoop-yetus commented on issue #1917: HADOOP-16944. Use Yetus 0.12.0 in GitHub PR

2020-04-17 Thread GitBox
hadoop-yetus commented on issue #1917: HADOOP-16944. Use Yetus 0.12.0 in GitHub 
PR
URL: https://github.com/apache/hadoop/pull/1917#issuecomment-615114835
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  24m 40s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +0 :ok: |  shelldocs  |   0m  1s |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  shadedclient  |  13m 58s |  branch has no errors when 
building and testing our client artifacts.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  shellcheck  |   0m  0s |  There were no new shellcheck 
issues.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  14m  1s |  patch has no errors when 
building and testing our client artifacts.  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 35s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  55m  5s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1917/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1917 |
   | Optional Tests | dupname asflicense shellcheck shelldocs |
   | uname | Linux 6677063af3a5 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 3481895 |
   | Max. process+thread count | 476 (vs. ulimit of 5500) |
   | modules | C: . U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1917/3/console |
   | versions | git=2.7.4 maven=3.3.9 shellcheck=0.3.7 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Assigned] (HADOOP-16473) S3Guard prune to only remove auth dir marker if files (not tombstones) are removed

2020-04-17 Thread Gabor Bota (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota reassigned HADOOP-16473:
---

Assignee: (was: Gabor Bota)

> S3Guard prune to only remove auth dir marker if files (not tombstones) are 
> removed
> --
>
> Key: HADOOP-16473
> URL: https://issues.apache.org/jira/browse/HADOOP-16473
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Priority: Minor
>
> the {{s3guard prune}} command marks all dirs as non-auth if an entry was 
> deleted. This makes sense from a performance perspective. But if only 
> tombstones are being purged, it doesn't. All it does is hurt the performance 
> of future scans






[jira] [Work started] (HADOOP-16961) ABFS: Adding metrics to AbfsInputStream (AbfsInputStreamStatistics)

2020-04-17 Thread Gabor Bota (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-16961 started by Gabor Bota.
---
> ABFS: Adding metrics to AbfsInputStream (AbfsInputStreamStatistics)
> ---
>
> Key: HADOOP-16961
> URL: https://issues.apache.org/jira/browse/HADOOP-16961
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
>
> Adding metrics to AbfsInputStream (AbfsInputStreamStatistics) can improve the 
> testing and diagnostics of the connector.
> Also adding some logging.
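The kind of statistics proposed here can be sketched as a simple counter object the stream updates on each operation (hypothetical names; the real AbfsInputStreamStatistics interface may look different):

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical statistics holder an input stream would update, so tests
// and logs can observe read behaviour.
class InputStreamStatistics {
    private final AtomicLong bytesRead = new AtomicLong();
    private final AtomicLong seekOperations = new AtomicLong();

    void bytesRead(long n) { bytesRead.addAndGet(n); }
    void seek() { seekOperations.incrementAndGet(); }

    long getBytesRead() { return bytesRead.get(); }
    long getSeekOperations() { return seekOperations.get(); }

    @Override
    public String toString() {
        return "bytesRead=" + bytesRead + ", seeks=" + seekOperations;
    }
}

public class StatsDemo {
    public static void main(String[] args) {
        InputStreamStatistics stats = new InputStreamStatistics();
        stats.bytesRead(4096);
        stats.bytesRead(1024);
        stats.seek();
        System.out.println(stats); // prints "bytesRead=5120, seeks=1"
    }
}
```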






[jira] [Assigned] (HADOOP-15489) S3Guard to self update on directory listings of S3

2020-04-17 Thread Gabor Bota (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota reassigned HADOOP-15489:
---

Assignee: (was: Gabor Bota)

> S3Guard to self update on directory listings of S3
> --
>
> Key: HADOOP-15489
> URL: https://issues.apache.org/jira/browse/HADOOP-15489
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
> Environment: s3guard
>Reporter: Steve Loughran
>Priority: Minor
>
> S3Guard updates its table on a getFileStatus call, but not on a directory 
> listing.
> While this makes directory listings faster (no need to push out an update), 
> it slows down subsequent queries of the files, such as a sequence of:
> {code}
> statuses = s3a.listFiles(dir)
> for (status: statuses) {
>   if (status.isFile) {
>     try (is = s3a.open(status.getPath())) {
>       ... do something
>     }
>   }
> }
> {code}
> This is because open() performs its own getFileStatus check, even after the 
> listing.
> Updating the DDB tables after a listing would give those reads a speedup, 
> albeit at the expense of initiating a (bulk) update in the list call. Of 
> course, we could consider making that async, though that design (essentially 
> a write-buffer) would require the buffer to be checked in the reads too. 
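The proposed self-update can be illustrated with a toy metadata store: after a listing, entries are pushed into the store in bulk, so a later per-file lookup is a cache hit instead of another remote call (a sketch only; real S3Guard stores FileStatus entries in DynamoDB):

```java
import java.util.HashMap;
import java.util.Map;

// Toy metadata store: maps path -> file length.
class MetadataStore {
    private final Map<String, Long> entries = new HashMap<>();

    // Bulk update after a directory listing, as the issue proposes.
    void putAll(Map<String, Long> listing) { entries.putAll(listing); }

    // A later getFileStatus-style lookup can be served from the store.
    Long lookup(String path) { return entries.get(path); }
}

public class ListingUpdateDemo {
    public static void main(String[] args) {
        MetadataStore store = new MetadataStore();

        // Pretend this came back from s3a.listFiles(dir).
        Map<String, Long> listing = new HashMap<>();
        listing.put("s3a://bucket/dir/a.txt", 10L);
        listing.put("s3a://bucket/dir/b.txt", 20L);

        store.putAll(listing); // self-update on listing

        // open() would now find the status without another S3 HEAD request.
        System.out.println(store.lookup("s3a://bucket/dir/a.txt")); // prints "10"
    }
}
```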






[jira] [Assigned] (HADOOP-16063) Docker based pseudo-cluster definitions and test scripts for Hdfs/Yarn

2020-04-17 Thread Gabor Bota (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota reassigned HADOOP-16063:
---

Assignee: (was: Gabor Bota)

> Docker based pseudo-cluster definitions and test scripts for Hdfs/Yarn
> --
>
> Key: HADOOP-16063
> URL: https://issues.apache.org/jira/browse/HADOOP-16063
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Marton Elek
>Priority: Major
>
> During the recent releases of Apache Hadoop Ozone we had multiple experiments 
> using docker/docker-compose to support the development of ozone.
> As of now the hadoop-ozone distribution contains two directories in addition 
> to the regular hadoop directories (bin, share/lib, etc.):
> h3. compose
> The ./compose directory of the distribution contains different types of 
> pseudo-cluster definitions. Starting an ozone cluster is as easy as "cd 
> compose/ozone && docker-compose up -d"
> The clusters can also be scaled up and down (docker-compose scale 
> datanode=3)
> There are multiple cluster definitions for different use cases (for example 
> ozone+s3 or hdfs+ozone).
> The docker-compose files are based on the apache/hadoop-runner image, which is 
> an "empty" image: it doesn't contain any hadoop distribution. Instead the 
> current hadoop build is used (../.. is mapped as a volume at /opt/hadoop)
> With this approach it's very easy to 1) start a cluster from the distribution 
> 2) test any patch from the dev tree, as after any build a new cluster can be 
> started easily (with multiple nodes and datanodes)
> h3. smoketest
> We also started to use a simple robotframework based test suite. (see 
> ./smoketest directory). It's a high level test definition very similar to the 
> smoketests which are executed manually by the contributors during a release 
> vote.
> But it's a formal definition to start clusters from different docker-compose 
> definitions and execute simple shell scripts (and compare the output).
>  
> I believe that both approaches helped a lot during the development of ozone 
> and I propose to do the same improvements on the main hadoop distribution.
> I propose to provide docker-compose based example cluster definitions for 
> yarn/hdfs and for different use cases (simple hdfs, router based federation, 
> etc.)
> It can help to understand the different configuration and try out new 
> features with predefined config set.
> Long term we can also add robottests to help the release votes (basic 
> wordcount/mr tests could be scripted)
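Transferred to plain HDFS, such a definition could look like this (a hypothetical sketch; the image name, commands, and ports are illustrative, not an existing compose file in the distribution):

```yaml
# Hypothetical docker-compose.yaml for a pseudo HDFS cluster
version: "3"
services:
  namenode:
    image: apache/hadoop-runner
    command: ["hdfs", "namenode"]
    ports:
      - "9870:9870"
    volumes:
      - ../..:/opt/hadoop   # use the local build, as the ozone compose files do
  datanode:
    image: apache/hadoop-runner
    command: ["hdfs", "datanode"]
    volumes:
      - ../..:/opt/hadoop
```

With a file like this, "docker-compose up -d" starts the cluster and "docker-compose scale datanode=3" grows it, mirroring the ozone workflow described above.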






[jira] [Assigned] (HADOOP-13980) S3Guard CLI: Add fsck check and fix commands

2020-04-17 Thread Gabor Bota (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-13980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota reassigned HADOOP-13980:
---

Assignee: (was: Gabor Bota)

> S3Guard CLI: Add fsck check and fix commands
> 
>
> Key: HADOOP-13980
> URL: https://issues.apache.org/jira/browse/HADOOP-13980
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Aaron Fabbri
>Priority: Major
>
> As discussed in HADOOP-13650, we want to add an S3Guard CLI command which 
> compares S3 with MetadataStore, and returns a failure status if any 
> invariants are violated.
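The core of such a check is a two-way comparison between the S3 listing and the MetadataStore contents. Below is a toy sketch with maps standing in for both sides (the real fsck operates on FileStatus and DynamoDB entries):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class FsckDemo {
    // Return human-readable violations; an empty list means invariants hold.
    static List<String> compare(Map<String, Long> s3, Map<String, Long> store) {
        List<String> violations = new ArrayList<>();
        // Every S3 object must exist in the store with a matching length.
        for (Map.Entry<String, Long> e : s3.entrySet()) {
            Long len = store.get(e.getKey());
            if (len == null) {
                violations.add("missing in MetadataStore: " + e.getKey());
            } else if (!len.equals(e.getValue())) {
                violations.add("length mismatch: " + e.getKey());
            }
        }
        // Every store entry must still exist in S3.
        for (String path : store.keySet()) {
            if (!s3.containsKey(path)) {
                violations.add("orphan MetadataStore entry: " + path);
            }
        }
        return violations;
    }

    public static void main(String[] args) {
        Map<String, Long> s3 = new TreeMap<>();
        s3.put("/a", 1L);
        s3.put("/b", 2L);
        Map<String, Long> store = new TreeMap<>();
        store.put("/a", 1L);
        store.put("/c", 3L);

        List<String> violations = compare(s3, store);
        // A non-empty list is what would drive the CLI's failure exit status.
        System.out.println(violations);
    }
}
```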






[GitHub] [hadoop] 20100507 closed pull request #1949: HADOOP-16964. Modify constant for AbstractFileSystem

2020-04-17 Thread GitBox
20100507 closed pull request #1949: HADOOP-16964. Modify constant for 
AbstractFileSystem
URL: https://github.com/apache/hadoop/pull/1949
 
 
   

