RE: [VOTE] Release Apache Hadoop 3.2.1 - RC0

2019-09-13 Thread Thomas Marquardt
I built release-3.2.1-RC0 and verified that the hadoop-azure tests are passing, 
so WASB and ABFS look great.  +1

-Original Message-
From: runlin zhang  
Sent: Thursday, September 12, 2019 1:06 AM
To: Rohith Sharma K S 
Cc: Hdfs-dev ; yarn-dev 
; mapreduce-dev ; 
Hadoop Common 
Subject: Re: [VOTE] Release Apache Hadoop 3.2.1 - RC0

+1

> On 11 Sep 2019, at 3:26 PM, Rohith Sharma K S wrote:
> 
> Hi folks,
> 
> I have put together a release candidate (RC0) for Apache Hadoop 3.2.1.
> 
> The RC is available at:
> http://home.apache.org/~rohithsharmaks/hadoop-3.2.1-RC0/
> 
> The RC tag in git is release-3.2.1-RC0:
> https://github.com/apache/hadoop/tree/release-3.2.1-RC0
> 
> 
> The maven artifacts are staged at
> https://repository.apache.org/content/repositories/orgapachehadoop-1226/
> 
> You can find my public key at:
> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
> 
> This vote will run for 7 days (5 weekdays), ending on 18th Sept at 
> 11:59 pm PST.
> 
> I have done testing with a pseudo cluster and distributed shell job. 
> My +1 to start.
> 
> Thanks & Regards
> Rohith Sharma K S


-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org





[jira] [Resolved] (HADOOP-16555) Update commons-compress to 1.19

2019-09-13 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang resolved HADOOP-16555.
--
Resolution: Fixed

There was a trivial conflict cherry-picking the commits into lower branches. 
Uploaded [^HADOOP-16555.branch-3.2.patch] for future reference. 

We may have to cherry pick this into branch-2.

> Update commons-compress to 1.19
> ---
>
> Key: HADOOP-16555
> URL: https://issues.apache.org/jira/browse/HADOOP-16555
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Wei-Chiu Chuang
>Assignee: YiSheng Lien
>Priority: Major
>  Labels: release-blocker
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HADOOP-16555.branch-3.2.patch
>
>
> We depend on commons-compress 1.18. The 1.19 release just went out. I think 
> we should update it.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2019-09-13 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1258/

[Sep 12, 2019 2:39:57 AM] (abmodi) YARN-9819. Make 
TestOpportunisticContainerAllocatorAMService more
[Sep 12, 2019 3:41:57 AM] (aajisaka) HDFS-14840. Use Java Conccurent Instead of 
Synchronization in
[Sep 12, 2019 7:20:10 AM] (abmodi) YARN-9816. 
EntityGroupFSTimelineStore#scanActiveLogs fails when
[Sep 12, 2019 11:12:46 AM] (github) HADOOP-16423. S3Guard fsck: Check metadata 
consistency between S3 and
[Sep 12, 2019 11:17:54 AM] (github) HADOOP-16562. [pb-upgrade] Update docker 
image to have 3.7.1 protoc
[Sep 12, 2019 1:41:50 PM] (surendralilhore) HDFS-14699. Erasure Coding: Storage 
not considered in live replica when
[Sep 12, 2019 2:13:18 PM] (surendralilhore) HDFS-14798. Synchronize 
invalidateBlocks in DatanodeDescriptor.
[Sep 12, 2019 3:48:14 PM] (nanda) HDDS-2076. Read fails because the block 
cannot be located in the
[Sep 12, 2019 5:04:57 PM] (github) HADOOP-16566. S3Guard fsck: Use 
org.apache.hadoop.util.StopWatch instead
[Sep 12, 2019 6:47:13 PM] (surendralilhore) HDFS-14754. Erasure Coding : The 
number of Under-Replicated Blocks never




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
 
   Class org.apache.hadoop.applications.mawo.server.common.TaskStatus 
implements Cloneable but does not define or use clone method At 
TaskStatus.java:does not define or use clone method At TaskStatus.java:[lines 
39-346] 
   Equals method for 
org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument 
is of type WorkerId At WorkerId.java:the argument is of type WorkerId At 
WorkerId.java:[line 114] 
   
org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does 
not check for null argument At WorkerId.java:null argument At 
WorkerId.java:[lines 114-115] 
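The two WorkerId findings above boil down to an equals method that casts its argument without a null or type check. A minimal sketch of the defensive pattern FindBugs expects (the class and field here are illustrative stand-ins, not the actual MaWo code):

```java
// Illustrative sketch only: WorkerIdExample and its "id" field are
// hypothetical, standing in for the real WorkerId class.
public class WorkerIdExample {
    private final String id;

    public WorkerIdExample(String id) {
        this.id = id;
    }

    @Override
    public boolean equals(Object obj) {
        if (this == obj) {
            return true;
        }
        // Null check and type check before casting, so a null or
        // foreign-type argument returns false instead of throwing.
        if (!(obj instanceof WorkerIdExample)) {
            return false;
        }
        WorkerIdExample other = (WorkerIdExample) obj;
        return id.equals(other.id);
    }

    @Override
    public int hashCode() {
        return id.hashCode();
    }
}
```

The `instanceof` test handles both findings at once: it is false for null and false for any argument of another type.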

Failed CTEST tests :

   test_test_libhdfs_ops_hdfs_static 
   test_test_libhdfs_threaded_hdfs_static 
   test_test_libhdfs_zerocopy_hdfs_static 
   test_test_native_mini_dfs 
   test_libhdfs_threaded_hdfspp_test_shim_static 
   test_hdfspp_mini_dfs_smoke_hdfspp_test_shim_static 
   libhdfs_mini_stress_valgrind_hdfspp_test_static 
   memcheck_libhdfs_mini_stress_valgrind_hdfspp_test_static 
   test_libhdfs_mini_stress_hdfspp_test_shim_static 
   test_hdfs_ext_hdfspp_test_shim_static 

Failed junit tests :

   hadoop.hdfs.TestMultipleNNPortQOP 
   hadoop.hdfs.server.balancer.TestBalancer 
   hadoop.hdfs.tools.TestDFSZKFailoverController 
   hadoop.hdfs.server.federation.router.TestRouterWithSecureStartup 
   hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.fs.adl.live.TestAdlSdkConfiguration 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1258/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1258/artifact/out/diff-compile-javac-root.txt
  [332K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1258/artifact/out/diff-checkstyle-root.txt
  [17M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1258/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1258/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1258/artifact/out/diff-patch-pylint.txt
  [220K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1258/artifact/out/diff-patch-shellcheck.txt
  [24K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1258/artifact/out/diff-patch-shelldocs.txt
  [44K]

   whitespace:

   

[jira] [Reopened] (HADOOP-16565) Fix "com.amazonaws.SdkClientException: Unable to find a region via the region provider chain."

2019-09-13 Thread Gabor Bota (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota reopened HADOOP-16565:
-

Reopened to add a message.

> Fix "com.amazonaws.SdkClientException: Unable to find a region via the region 
> provider chain."
> --
>
> Key: HADOOP-16565
> URL: https://issues.apache.org/jira/browse/HADOOP-16565
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
>
> The error found during testing in the following tests:
> {noformat}
> [ERROR]   ITestS3ATemporaryCredentials.testInvalidSTSBinding:257 ? SdkClient 
> Unable to f...
> [ERROR]   ITestS3ATemporaryCredentials.testSTS:130 ? SdkClient Unable to find 
> a region v...
> [ERROR]   
> ITestS3ATemporaryCredentials.testSessionRequestExceptionTranslation:441->lambda$testSessionRequestExceptionTranslation$5:442
>  ? SdkClient
> [ERROR]   ITestS3ATemporaryCredentials.testSessionTokenExpiry:222 ? SdkClient 
> Unable to ...
> [ERROR]   ITestS3ATemporaryCredentials.testSessionTokenPropagation:193 ? 
> SdkClient Unabl...
> [ERROR]   ITestDelegatedMRJob.testJobSubmissionCollectsTokens:286 ? SdkClient 
> Unable to ...
> [ERROR]   ITestSessionDelegationInFileystem.testAddTokensFromFileSystem:235 ? 
> SdkClient ...
> [ERROR]   
> ITestSessionDelegationInFileystem.testCanRetrieveTokenFromCurrentUserCreds:260->createDelegationTokens:292->AbstractDelegationIT.mkTokens:88
>  ? SdkClient
> [ERROR]   
> ITestSessionDelegationInFileystem.testDTCredentialProviderFromCurrentUserCreds:278->createDelegationTokens:292->AbstractDelegationIT.mkTokens:88
>  ? SdkClient
> [ERROR]   
> ITestSessionDelegationInFileystem.testDelegatedFileSystem:308->createDelegationTokens:292->AbstractDelegationIT.mkTokens:88
>  ? SdkClient
> [ERROR]   
> ITestSessionDelegationInFileystem.testDelegationBindingMismatch1:432->createDelegationTokens:292->AbstractDelegationIT.mkTokens:88
>  ? SdkClient
> [ERROR]   ITestSessionDelegationInFileystem.testFileSystemBoundToCreator:681 
> ? SdkClient
> [ERROR]   ITestSessionDelegationInFileystem.testGetDTfromFileSystem:212 ? 
> SdkClient Unab...
> [ERROR]   
> ITestSessionDelegationInFileystem.testHDFSFetchDTCommand:606->lambda$testHDFSFetchDTCommand$3:607
>  ? SdkClient
> [ERROR]   ITestSessionDelegationInFileystem.testYarnCredentialPickup:576 ? 
> SdkClient Una...
> [ERROR]   ITestSessionDelegationTokens.testCreateAndUseDT:176 ? SdkClient 
> Unable to find...
> [ERROR]   ITestSessionDelegationTokens.testSaveLoadTokens:121 ? SdkClient 
> Unable to find...
> {noformat}
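The "Unable to find a region via the region provider chain" error means that none of the SDK's region sources (environment variable, system property, config file, instance metadata) returned a value. A toy sketch of how such a chain resolves, purely to illustrate the lookup order (this is not the AWS SDK's implementation):

```java
import java.util.Optional;
import java.util.function.Supplier;

// Toy illustration of a region provider chain: each provider is consulted
// in order, and the chain fails only when every provider returns empty --
// which is the situation the SdkClientException above reports.
public class RegionChainSketch {
    @SafeVarargs
    public static Optional<String> resolve(Supplier<Optional<String>>... providers) {
        for (Supplier<Optional<String>> p : providers) {
            Optional<String> region = p.get();
            if (region.isPresent()) {
                return region;  // first provider with a value wins
            }
        }
        return Optional.empty();  // no source supplied a region
    }
}
```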



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16576) ITestS3GuardDDBRootOperations. test_100_FilesystemPrune failure

2019-09-13 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-16576:
---

 Summary: ITestS3GuardDDBRootOperations. test_100_FilesystemPrune 
failure
 Key: HADOOP-16576
 URL: https://issues.apache.org/jira/browse/HADOOP-16576
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3, test
Affects Versions: 3.3.0
Reporter: Steve Loughran


Failure in
test_100_FilesystemPrune(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardDDBRootOperations):
No DynamoDB table name configured.

fs.s3a.s3guard.ddb.region = eu-west-1; no region is defined.

This is surfacing on a branch which doesn't have my pending prune init code, so 
even though an FS was passed in, it wasn't used for init. Assumption: this will 
go away for good once that patch is in.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-16565) Fix "com.amazonaws.SdkClientException: Unable to find a region via the region provider chain."

2019-09-13 Thread Gabor Bota (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota resolved HADOOP-16565.
-
Resolution: Workaround

> Fix "com.amazonaws.SdkClientException: Unable to find a region via the region 
> provider chain."
> --
>
> Key: HADOOP-16565
> URL: https://issues.apache.org/jira/browse/HADOOP-16565
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
>
> The error found during testing in the following tests:
> {noformat}
> [ERROR]   ITestS3ATemporaryCredentials.testInvalidSTSBinding:257 ? SdkClient 
> Unable to f...
> [ERROR]   ITestS3ATemporaryCredentials.testSTS:130 ? SdkClient Unable to find 
> a region v...
> [ERROR]   
> ITestS3ATemporaryCredentials.testSessionRequestExceptionTranslation:441->lambda$testSessionRequestExceptionTranslation$5:442
>  ? SdkClient
> [ERROR]   ITestS3ATemporaryCredentials.testSessionTokenExpiry:222 ? SdkClient 
> Unable to ...
> [ERROR]   ITestS3ATemporaryCredentials.testSessionTokenPropagation:193 ? 
> SdkClient Unabl...
> [ERROR]   ITestDelegatedMRJob.testJobSubmissionCollectsTokens:286 ? SdkClient 
> Unable to ...
> [ERROR]   ITestSessionDelegationInFileystem.testAddTokensFromFileSystem:235 ? 
> SdkClient ...
> [ERROR]   
> ITestSessionDelegationInFileystem.testCanRetrieveTokenFromCurrentUserCreds:260->createDelegationTokens:292->AbstractDelegationIT.mkTokens:88
>  ? SdkClient
> [ERROR]   
> ITestSessionDelegationInFileystem.testDTCredentialProviderFromCurrentUserCreds:278->createDelegationTokens:292->AbstractDelegationIT.mkTokens:88
>  ? SdkClient
> [ERROR]   
> ITestSessionDelegationInFileystem.testDelegatedFileSystem:308->createDelegationTokens:292->AbstractDelegationIT.mkTokens:88
>  ? SdkClient
> [ERROR]   
> ITestSessionDelegationInFileystem.testDelegationBindingMismatch1:432->createDelegationTokens:292->AbstractDelegationIT.mkTokens:88
>  ? SdkClient
> [ERROR]   ITestSessionDelegationInFileystem.testFileSystemBoundToCreator:681 
> ? SdkClient
> [ERROR]   ITestSessionDelegationInFileystem.testGetDTfromFileSystem:212 ? 
> SdkClient Unab...
> [ERROR]   
> ITestSessionDelegationInFileystem.testHDFSFetchDTCommand:606->lambda$testHDFSFetchDTCommand$3:607
>  ? SdkClient
> [ERROR]   ITestSessionDelegationInFileystem.testYarnCredentialPickup:576 ? 
> SdkClient Una...
> [ERROR]   ITestSessionDelegationTokens.testCreateAndUseDT:176 ? SdkClient 
> Unable to find...
> [ERROR]   ITestSessionDelegationTokens.testSaveLoadTokens:121 ? SdkClient 
> Unable to find...
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-16421) ITestS3GuardOutOfBandOperations.deleteAfterTombstoneExpiryOobCreate failure

2019-09-13 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-16421.
-
Resolution: Cannot Reproduce

> ITestS3GuardOutOfBandOperations.deleteAfterTombstoneExpiryOobCreate failure
> ---
>
> Key: HADOOP-16421
> URL: https://issues.apache.org/jira/browse/HADOOP-16421
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.0
> Environment: AWS Ireland with a versioned object store set to delete 
> old entries after 24h.
>Reporter: Steve Loughran
>Priority: Minor
>
> Saw a failure of 
> ITestS3GuardOutOfBandOperations.deleteAfterTombstoneExpiryOobCreate
> {code}
> java.lang.AssertionError: This file should throw FNFE when reading through 
> the raw fs, and the guarded fs deleted the file.: 
> {code}
> Hypothesis: because the store is versioned, the old file was still there. 
> This doesn't explain why I've not seen it before, though.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-16429) DynamoDBMetaStore deleteSubtree to delete leaf nodes first

2019-09-13 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-16429.
-
Resolution: Done

Seems to be done. As well as deleteSubtree handling leaf nodes first, 
HADOOP-16430 is moving to listing all children and deleting them, which is now 
linear rather than a tree walk: leaf nodes are all you get.

> DynamoDBMetaStore deleteSubtree to delete leaf nodes first
> --
>
> Key: HADOOP-16429
> URL: https://issues.apache.org/jira/browse/HADOOP-16429
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Priority: Major
>
> In {{deleteSubtree(path)}}, the DynamoDB metastore walks down the tree, 
> returning elements to delete. But it will delete parent entries before 
> children, so if an operation fails partway through, there will be orphans.
> Better: have DescendantsIterator return all the leaf nodes before their 
> parents so the deletion is done bottom-up.
> Also: push the deletions off into their own async queue/pool so that they 
> don't become a bottleneck on the process.
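The leaf-nodes-first ordering described above is post-order traversal: every child is emitted before its parent, so a partial failure can never leave a child without its parent entry. A small sketch of that ordering (the Map-based "tree" is illustrative, not the metastore API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Illustrative bottom-up (post-order) deletion ordering: children are
// emitted before their parent, so a failure partway through the returned
// list leaves no orphaned children behind.
public class BottomUpDelete {
    public static List<String> deletionOrder(Map<String, List<String>> children,
                                             String root) {
        List<String> order = new ArrayList<>();
        collect(children, root, order);
        return order;
    }

    private static void collect(Map<String, List<String>> children,
                                String node, List<String> order) {
        for (String child : children.getOrDefault(node, List.of())) {
            collect(children, child, order);  // descend first
        }
        order.add(node);  // a parent is listed only after all its descendants
    }
}
```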



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-16392) S3Guard Diff tool to list+ compare the etag and version fields

2019-09-13 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-16392.
-
Resolution: Duplicate

> S3Guard Diff tool to list+ compare the etag and version fields
> --
>
> Key: HADOOP-16392
> URL: https://issues.apache.org/jira/browse/HADOOP-16392
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Priority: Minor
>
> Now that S3Guard supports etags and version IDs, the diff command should list 
> and compare them.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-16326) S3Guard: Remove LocalMetadataStore

2019-09-13 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-16326.
-
Resolution: Won't Fix

> S3Guard: Remove LocalMetadataStore
> --
>
> Key: HADOOP-16326
> URL: https://issues.apache.org/jira/browse/HADOOP-16326
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Gabor Bota
>Priority: Minor
>
> We use the LocalMetadataStore MetadataStore implementation in S3Guard only 
> for testing. 
> Internally it uses Guava's cache to store metadata. We try to mimic how 
> DynamoDB should work under the hood, but with every new feature or API 
> modification we make at the MetadataStore interface level it gets more and 
> more complicated to implement the same feature with different behavior.
> I want to start a debate on whether we should remove it, or why we want to 
> keep it. 
> I could rant about how annoying it is to have this implementation when we 
> need to get some things right to work with DynamoDB, and then make a totally 
> different set of modifications in LocalMetadataStore to get the same outcome, 
> and also add more tests just for the thing we use for testing. 
> There are also areas in our ever-growing testing matrix that need more 
> attention instead of fixing tests for our test implementation. But on the 
> other hand, it is good that we have another implementation of the API which 
> we can use for drafting new ideas.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-16274) transient failure of ITestS3GuardToolDynamoDB.testDestroyUnknownTable

2019-09-13 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-16274.
-
Resolution: Cannot Reproduce

> transient failure of ITestS3GuardToolDynamoDB.testDestroyUnknownTable
> -
>
> Key: HADOOP-16274
> URL: https://issues.apache.org/jira/browse/HADOOP-16274
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Priority: Minor
>
> Experienced a transient failure of a test
> {code}
> [ERROR] 
> testDestroyUnknownTable(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB)
>   Time elapsed: 143.671 s  <<< ERROR!
> java.lang.IllegalArgumentException: Table ireland-team is not deleted.
> {code}
> * The test run blocked for a while; I'd assumed network problems, but maybe 
> it was retrying
> * Verified on the AWS console that the table was gone
> * Not surfaced on reruns
> I'm assuming this was transient, but anything that goes near creating tables 
> runs the risk of running up bills. We need to move to on-demand table 
> creation as soon as we upgrade the SDK.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-16280) S3Guard: Retry failed read with backoff in Authoritative mode when file can be opened

2019-09-13 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-16280.
-
Fix Version/s: 3.3.0
   Resolution: Duplicate

HADOOP-16490 added exactly this retry logic

> S3Guard: Retry failed read with backoff in Authoritative mode when file can 
> be opened
> -
>
> Key: HADOOP-16280
> URL: https://issues.apache.org/jira/browse/HADOOP-16280
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Gabor Bota
>Priority: Major
> Fix For: 3.3.0
>
>
> When using S3Guard in authoritative mode, AWS S3 can report a file as 
> missing, as described in the following exception:
> {noformat}
> java.io.FileNotFoundException: re-open 
> s3a://cloudera-dev-gabor-ireland/test/TMCDOR-021df1ad-633f-47b8-97f5-6cd93f0b82d0
>  at 0 on 
> s3a://cloudera-dev-gabor-ireland/test/TMCDOR-021df1ad-633f-47b8-97f5-6cd93f0b82d0:
>  
> com.amazonaws.services.s3.model.AmazonS3Exception: The specified key does not 
> exist. (Service: Amazon S3; Status Code: 404; Error 
> Code: NoSuchKey; Request ID: E1FF9EA9B5DBBD7E; S3 Extended Request ID: 
> NzNIL4+dyA89WTnfbcwuYQK+hCfx51TfavwgC3oEvQI0IQ9M/zAspbXOfBIis8/nTolc4tRB9ik=),
>  S3 Extended Request ID: 
> NzNIL4+dyA89WTnfbcwuYQK+hCfx51TfavwgC3oEvQI0IQ9M/zAspbXOfBIis8/nTolc4tRB9ik=:NoSuchKey
> {noformat}
> But the metadata in S3Guard (e.g. DynamoDB) is there, so the file can be 
> opened. The operation will not fail when it's opened; it will fail when we 
> try to read it, so the call
> {noformat}
> FSDataInputStream is = guardedFs.open(testFilePath);{noformat}
> won't fail, but the next call
> {noformat}
> byte[] firstRead = new byte[text.length()];
> is.read(firstRead, 0, firstRead.length);
> {noformat}
> will fail with the exception message like what's above.
> Once authoritative mode is on, we assume that there are no out-of-band 
> operations, so the file will appear eventually. We should retry in this case.
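The retry described above can be sketched as a wrapper around the read: if the object is not yet visible (FileNotFoundException), wait with backoff and try again a bounded number of times. The names and limits here are illustrative, not S3A's actual retry policy:

```java
import java.io.FileNotFoundException;
import java.util.concurrent.Callable;

// Illustrative retry-with-backoff wrapper for the read described above.
// Attempt count and backoff are hypothetical parameters, not S3A defaults.
public class RetryingRead {
    public static <T> T withRetries(Callable<T> read, int attempts,
                                    long backoffMillis) throws Exception {
        for (int i = 1; ; i++) {
            try {
                return read.call();
            } catch (FileNotFoundException e) {
                if (i >= attempts) {
                    throw e;  // retries exhausted: surface the failure
                }
                Thread.sleep(backoffMillis * i);  // linear backoff
            }
        }
    }
}
```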



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-15800) ITestS3GuardListConsistency#testConsistentListAfterDelete fails when running with dynamo

2019-09-13 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-15800.
-
Resolution: Cannot Reproduce

> ITestS3GuardListConsistency#testConsistentListAfterDelete fails when running 
> with dynamo
> 
>
> Key: HADOOP-15800
> URL: https://issues.apache.org/jira/browse/HADOOP-15800
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.1
>Reporter: Gabor Bota
>Priority: Major
>
> I've seen a new failure when running verify for HADOOP-15621. First I thought 
> it was my new patch, but it happens on trunk. This is a major issue; it could 
> be due to an implementation issue in DynamoDB.
> {noformat}
> [ERROR] 
> testConsistentListAfterDelete(org.apache.hadoop.fs.s3a.ITestS3GuardListConsistency)
>  Time elapsed: 2.212 s <<< FAILURE!
> java.lang.AssertionError
> at org.junit.Assert.fail(Assert.java:86)
> at org.junit.Assert.assertTrue(Assert.java:41)
> at org.junit.Assert.assertFalse(Assert.java:64)
> at org.junit.Assert.assertFalse(Assert.java:74)
> at 
> org.apache.hadoop.fs.s3a.ITestS3GuardListConsistency.testConsistentListAfterDelete(ITestS3GuardListConsistency.java:205)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74){noformat}
> Tested against {{us-west-1}}; I was in the same region. 



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16575) ITestS3ARemoteFileChanged tests fail if you set the bucket to versionid tracking

2019-09-13 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-16575:
---

 Summary: ITestS3ARemoteFileChanged tests fail if you set the 
bucket to versionid tracking
 Key: HADOOP-16575
 URL: https://issues.apache.org/jira/browse/HADOOP-16575
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3, test
Affects Versions: 3.3.0
Reporter: Steve Loughran


If you enable versionid tracking for a bucket, those tests in 
ITestS3ARemoteFileChanged which try to generate failures from etag conflicts 
all fail.

Fix: clear the relevant bucket option before the test run.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16574) ITestS3AAWSCredentialsProvider tests fail if a bucket has DTs enabled

2019-09-13 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-16574:
---

 Summary: ITestS3AAWSCredentialsProvider tests fail if a bucket has 
DTs enabled
 Key: HADOOP-16574
 URL: https://issues.apache.org/jira/browse/HADOOP-16574
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3, test
Affects Versions: 3.3.0
Reporter: Steve Loughran


If you enable DTs on a bucket, then those tests which force failures from bad 
credential providers fail: the IOE they look for is wrapped in a 
ServiceStateException.

Proposed: catch those and rethrow the nested IOE.
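The proposed catch-and-rethrow can be sketched as follows. The wrapper type here is a plain RuntimeException stand-in, not Hadoop's actual ServiceStateException, and the helper name is hypothetical:

```java
import java.io.IOException;
import java.util.concurrent.Callable;

// Sketch of the proposed fix: when the expected IOException arrives wrapped
// in a runtime exception (as ServiceStateException wraps it), unwrap and
// rethrow the nested IOE so the test's expected-exception check matches.
public class UnwrapIoe {
    public static <T> T unwrapping(Callable<T> action) throws Exception {
        try {
            return action.call();
        } catch (RuntimeException e) {
            if (e.getCause() instanceof IOException) {
                throw (IOException) e.getCause();  // rethrow the nested IOE
            }
            throw e;  // not a wrapped IOE: propagate unchanged
        }
    }
}
```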



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2019-09-13 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/443/

No changes




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:[line 335] 

Failed junit tests :

   hadoop.util.TestReadWriteDiskValidator 
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.hdfs.server.datanode.TestDirectoryScanner 
   hadoop.registry.secure.TestSecureLogins 
   hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/443/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/443/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt
  [328K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/443/artifact/out/diff-compile-cc-root-jdk1.8.0_222.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/443/artifact/out/diff-compile-javac-root-jdk1.8.0_222.txt
  [308K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/443/artifact/out/diff-checkstyle-root.txt
  [16M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/443/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/443/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/443/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/443/artifact/out/diff-patch-shellcheck.txt
  [72K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/443/artifact/out/diff-patch-shelldocs.txt
  [8.0K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/443/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/443/artifact/out/whitespace-tabs.txt
  [1.3M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/443/artifact/out/xml.txt
  [12K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/443/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-client-warnings.html
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/443/artifact/out/diff-javadoc-javadoc-root-jdk1.7.0_95.txt
  [16K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/443/artifact/out/diff-javadoc-javadoc-root-jdk1.8.0_222.txt
  [1.1M]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/443/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [160K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/443/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [232K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/443/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-registry.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/443/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
  [20K]
   

[jira] [Created] (HADOOP-16573) IAM role created by S3A DT doesn't include DynamoDB scan

2019-09-13 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-16573:
---

 Summary: IAM role created by S3A DT doesn't include DynamoDB scan
 Key: HADOOP-16573
 URL: https://issues.apache.org/jira/browse/HADOOP-16573
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.3.0
Reporter: Steve Loughran


You can't run {{s3guard prune}} with role DTs, as we don't create the role with 
permissions to do so.

I think it may actually be useful to have an option where we don't restrict the 
role. This doesn't just help with debugging; it would also let things like SQS 
integration pick up the creds from S3A.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16572) S3A DT support to warn when loading expired token

2019-09-13 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-16572:
---

 Summary: S3A DT support to warn when loading expired token
 Key: HADOOP-16572
 URL: https://issues.apache.org/jira/browse/HADOOP-16572
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.3.0
 Environment: CLI with HADOOP_TOKEN_PATH pointing at a file from the 
day before, containing an assumed Role DT, which was being loaded ahead of any 
setting in the XML file
Reporter: Steve Loughran
Assignee: Steve Loughran


(This just cost me half an hour: a working CLI command had somehow stopped 
working since the day before, and I'd been playing with endpoints and signing 
before I realised it.)

_If the DT provider code loads a token from a file, it doesn't check or warn 
for an expired token - all you get is a 400 Bad Request failure_

This is not at all obvious.

Proposed:
* WARN if now > expiry
* extra entry in the troubleshooting docs for 400
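The proposed check is simple enough to sketch. This is purely illustrative - the class and method names are hypothetical, not the actual S3A delegation token API:

```java
import java.time.Instant;

// Hypothetical sketch of the proposed expiry check; not real S3A code.
public class TokenExpiryCheck {

    // Returns a WARN line when the loaded token is already expired,
    // so the CLI user sees the cause before the 400 Bad Request does.
    static String checkExpiry(long expiryEpochSeconds, long nowEpochSeconds) {
        if (nowEpochSeconds > expiryEpochSeconds) {
            return "WARN: loaded delegation token expired at "
                + Instant.ofEpochSecond(expiryEpochSeconds);
        }
        return "token valid";
    }

    public static void main(String[] args) {
        // Token expired at t=1000, current time t=2000 -> warn.
        System.out.println(checkExpiry(1000L, 2000L));
    }
}
```

The check would run when the provider deserializes a token from the file named by HADOOP_TOKEN_PATH, before any request is signed with it.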






[jira] [Created] (HADOOP-16571) s3a to improve diags on s3a bad request message

2019-09-13 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-16571:
---

 Summary: s3a to improve diags on s3a bad request message
 Key: HADOOP-16571
 URL: https://issues.apache.org/jira/browse/HADOOP-16571
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Reporter: Steve Loughran


If you have the endpoint or signing algorithm wrong, your first sign of trouble 
on an s3a FS operation is a 400 Bad Request during init.

Proposed: include those configuration properties in the failure message.
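A minimal sketch of what the enriched diagnostics might look like. The helper is hypothetical; only the two property names (`fs.s3a.endpoint`, `fs.s3a.signing-algorithm`) come from the real S3A configuration:

```java
// Hypothetical helper: append the config values most likely to cause a
// 400 Bad Request to the error text, so users can spot a bad endpoint
// or signer without enabling debug logging.
public class BadRequestDiag {

    static String enrich(String message, String endpoint, String signingAlgorithm) {
        return message
            + "; fs.s3a.endpoint=" + endpoint
            + "; fs.s3a.signing-algorithm=" + signingAlgorithm;
    }

    public static void main(String[] args) {
        System.out.println(
            enrich("400 Bad Request", "s3.example.com", "S3SignerType"));
    }
}
```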






[jira] [Created] (HADOOP-16570) S3A committers leak threads on job commit

2019-09-13 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-16570:
---

 Summary: S3A committers leak threads on job commit
 Key: HADOOP-16570
 URL: https://issues.apache.org/jira/browse/HADOOP-16570
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Affects Versions: 3.1.2, 3.2.0
Reporter: Steve Loughran
Assignee: Steve Loughran


The fixed-size thread pool created in AbstractS3ACommitter doesn't get cleaned 
up at end of life; as a result you leak the number of threads set in 
"fs.s3a.committer.threads".

Not visible in MR/distcp jobs, but it ultimately causes OOM on Spark.
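The fix amounts to shutting the pool down when the committer finishes. A standalone sketch (the class is illustrative, not the actual AbstractS3ACommitter code):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Illustrative sketch: a fixed pool that is explicitly shut down in a
// cleanup step, so its threads do not outlive the job commit.
public class CommitterPoolSketch {

    private final ExecutorService pool = Executors.newFixedThreadPool(4);

    // Called once the commit completes (or aborts).
    void cleanup() throws InterruptedException {
        pool.shutdown();                                  // stop accepting tasks
        if (!pool.awaitTermination(30, TimeUnit.SECONDS)) {
            pool.shutdownNow();                           // interrupt stragglers
        }
    }

    boolean isDown() {
        return pool.isShutdown();
    }

    public static void main(String[] args) throws InterruptedException {
        CommitterPoolSketch c = new CommitterPoolSketch();
        c.cleanup();
        System.out.println(c.isDown());
    }
}
```

In long-lived Spark drivers the leak matters because each job commit creates a fresh pool; without an explicit shutdown, idle pool threads accumulate until the JVM runs out of memory.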






[jira] [Resolved] (HADOOP-16262) Add some optional modules instructions in BUILDING.txt

2019-09-13 Thread Wanqiang Ji (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wanqiang Ji resolved HADOOP-16262.
--
Resolution: Not A Problem

> Add some optional modules instructions in BUILDING.txt
> --
>
> Key: HADOOP-16262
> URL: https://issues.apache.org/jira/browse/HADOOP-16262
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Wanqiang Ji
>Assignee: Wanqiang Ji
>Priority: Major
> Attachments: HADOOP-16262.001.patch
>
>
> Three optional project modules are missing from pom.xml:
> {code:xml}
> hadoop-hdds
> hadoop-ozone
> hadoop-submarine
> {code}


