[jira] [Commented] (HADOOP-16547) s3guard prune command doesn't get AWS auth chain from FS

2019-09-18 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16932766#comment-16932766
 ] 

Hudson commented on HADOOP-16547:
-

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17326 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17326/])
HADOOP-16547. make sure that s3guard prune sets up the FS (#1402). (gabor.bota: 
rev 5db32b8ced8dc7533737caab88b97e151d2b223f)
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/AbstractS3GuardToolTestBase.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3GuardTool.java


> s3guard prune command doesn't get AWS auth chain from FS
> 
>
> Key: HADOOP-16547
> URL: https://issues.apache.org/jira/browse/HADOOP-16547
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> s3guard prune command doesn't get AWS auth chain from any FS, so it just 
> drives the DDB store from the conf settings. If S3A is set up to use 
> Delegation tokens then the DTs/custom AWS auth sequence is not picked up, so 
> you get an auth failure.
> Fix:
> # instantiate the FS before calling initMetadataStore
> # review other commands to make sure problem isn't replicated
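For reference, a minimal, hypothetical sketch of the shape of that fix (illustrative names only, not the actual S3GuardTool code): bind the S3A filesystem, and with it the AWS credential/delegation-token chain, before the metadata store is initialized.

{code}
// Hypothetical, simplified sketch of the ordering change; not the real
// S3GuardTool.Prune implementation.
import java.net.URI;

public class PruneOrderingSketch {

  private Object boundFilesystem;

  /** Stand-in for binding the S3A FS, which picks up its full AWS auth chain (DTs included). */
  private void maybeInitFilesystem(URI fsUri) {
    // In the real tool this would be along the lines of FileSystem.newInstance(fsUri, conf).
    boundFilesystem = "s3a-binding-for-" + fsUri;
  }

  /** Stand-in for initializing the DynamoDB metadata store. */
  private void initMetadataStore() {
    if (boundFilesystem == null) {
      // Pre-patch behaviour: the store is driven purely from conf settings,
      // so the DT/custom auth sequence is never picked up and auth fails.
      throw new IllegalStateException("metadata store initialized without an FS auth chain");
    }
  }

  public int run(URI fsUri) {
    maybeInitFilesystem(fsUri);   // the fix: instantiate the FS first
    initMetadataStore();          // the store can now reuse the FS credentials
    // ... prune entries ...
    return 0;
  }

  public static void main(String[] args) {
    new PruneOrderingSketch().run(URI.create("s3a://example-bucket/"));
  }
}
{code}

With the patch, the stack traces later in this thread do show the credential failure surfacing from S3AFileSystem.initialize via maybeInitFilesystem, rather than from initMetadataStore.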






[jira] [Commented] (HADOOP-16547) s3guard prune command doesn't get AWS auth chain from FS

2019-09-18 Thread Gabor Bota (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16932692#comment-16932692
 ] 

Gabor Bota commented on HADOOP-16547:
-

+1 on PR 1402. Committed.




[jira] [Commented] (HADOOP-16547) s3guard prune command doesn't get AWS auth chain from FS

2019-09-17 Thread Aaron Fabbri (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16931680#comment-16931680
 ] 

Aaron Fabbri commented on HADOOP-16547:
---

Current patch: +1, LGTM after you address [~gabor.bota]'s comment about adding a 
test. 

 




[jira] [Commented] (HADOOP-16547) s3guard prune command doesn't get AWS auth chain from FS

2019-09-16 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16930627#comment-16930627
 ] 

Steve Loughran commented on HADOOP-16547:
-

I've also verified that the test failure of HADOOP-16576 goes away with this 
patch. I'm happy with it!




[jira] [Commented] (HADOOP-16547) s3guard prune command doesn't get AWS auth chain from FS

2019-09-13 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16929243#comment-16929243
 ] 

Steve Loughran commented on HADOOP-16547:
-

With the test plan, I can verify that prune, set-capacity, destroy and init 
fail to work with Delegation Token auth. With the patch, all of these *except 
init* work. Init is special: because it is initializing the table for the bucket, 
it doesn't want a filesystem. I'm not worrying about this, as it's a major admin 
command which you wouldn't normally use through DTs.

Old code

{code}
~/P/R/fsck bin/hadoop s3guard prune -seconds 0 -tombstone 
s3a://hwdev-steve-ireland-new/
java.nio.file.AccessDeniedException: hwdev-steve-ireland-new: 
org.apache.hadoop.fs.s3a.auth.NoAuthWithAWSException: No AWS Credentials 
provided by TemporaryAWSCredentialsProvider SimpleAWSCredentialsProvider 
EnvironmentVariableCredentialsProvider IAMInstanceCredentialsProvider : 
com.amazonaws.SdkClientException: Unable to load AWS credentials from 
environment variables (AWS_ACCESS_KEY_ID (or AWS_ACCESS_KEY) and AWS_SECRET_KEY 
(or AWS_SECRET_ACCESS_KEY))
at 
org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:200)
at 
org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.initTable(DynamoDBMetadataStore.java:1840)
at 
org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.initialize(DynamoDBMetadataStore.java:521)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.initMetadataStore(S3GuardTool.java:318)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool$Prune.run(S3GuardTool.java:1072)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:402)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:1767)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.main(S3GuardTool.java:1776)
Caused by: org.apache.hadoop.fs.s3a.auth.NoAuthWithAWSException: No AWS 
Credentials provided by TemporaryAWSCredentialsProvider 
SimpleAWSCredentialsProvider EnvironmentVariableCredentialsProvider 
IAMInstanceCredentialsProvider : com.amazonaws.SdkClientException: Unable to 
load AWS credentials from environment variables (AWS_ACCESS_KEY_ID (or 
AWS_ACCESS_KEY) and AWS_SECRET_KEY (or AWS_SECRET_ACCESS_KEY))
at 
org.apache.hadoop.fs.s3a.AWSCredentialProviderList.getCredentials(AWSCredentialProviderList.java:216)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.getCredentialsFromContext(AmazonHttpClient.java:1225)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.runBeforeRequestHandlers(AmazonHttpClient.java:801)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:751)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:744)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:726)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:686)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:668)
at 
com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:532)
at 
com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:512)
at 
com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.doInvoke(AmazonDynamoDBClient.java:4279)
at 
com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.invoke(AmazonDynamoDBClient.java:4246)
at 
com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.executeDescribeTable(AmazonDynamoDBClient.java:1905)
at 
com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.describeTable(AmazonDynamoDBClient.java:1871)
at 
com.amazonaws.services.dynamodbv2.document.Table.describe(Table.java:137)
at 
org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.initTable(DynamoDBMetadataStore.java:1775)
... 7 more
Caused by: com.amazonaws.SdkClientException: Unable to load AWS credentials 
from environment variables (AWS_ACCESS_KEY_ID (or AWS_ACCESS_KEY) and 
AWS_SECRET_KEY (or AWS_SECRET_ACCESS_KEY))
at 
com.amazonaws.auth.EnvironmentVariableCredentialsProvider.getCredentials(EnvironmentVariableCredentialsProvider.java:50)
at 
org.apache.hadoop.fs.s3a.AWSCredentialProviderList.getCredentials(AWSCredentialProviderList.java:177)
... 22 more
2019-09-13 14:19:23,373 [main] INFO  util.ExitUtil 
(ExitUtil.java:terminate(210)) - Exiting with status -1: 
java.nio.file.AccessDeniedException: hwdev-steve-ireland-new: 
org.apache.hadoop.fs.s3a.auth.NoAuthWithAWSException: No AWS Credentials 
provided by TemporaryAWSCredentialsProvider SimpleAWSCredentialsProvider 
EnvironmentVariableCredentialsProvider IAMInstanceCredentialsProvider : 
com.amazonaws.SdkClientException: Una
{code}

[jira] [Commented] (HADOOP-16547) s3guard prune command doesn't get AWS auth chain from FS

2019-09-13 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16929239#comment-16929239
 ] 

Steve Loughran commented on HADOOP-16547:
-

* Build a full Hadoop distro without the patch.
* Enable delegation tokens for a test bucket.
* Use fetchdt to collect a token for that bucket.
* Set the HADOOP_TOKEN_FILE_LOCATION environment variable to point at the token 
file (the sketch below this list shows a quick way to confirm the tokens were loaded).
* Unset the AWS credentials.
* Set S3Guard DDB enabled, the region and the table name for all buckets, so DDB 
can try to init even when unbound to an FS.
* Execute all the s3guard CLI operations; observe which fail.
* Build a Hadoop release *with* the patch and verify the failing operations now 
succeed.
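As a quick sanity check for the HADOOP_TOKEN_FILE_LOCATION step, here is an 
assumed, illustrative Java snippet (not part of the patch or the test plan) that 
lists whatever tokens UserGroupInformation loaded at login; with the token file 
picked up, a kind such as S3ADelegationToken/Full should appear, matching the 
debug logs elsewhere in this thread.

{code}
// Assumed, illustrative helper: print the delegation tokens that
// UserGroupInformation loaded (e.g. from HADOOP_TOKEN_FILE_LOCATION).
import java.io.IOException;

import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.security.token.Token;

public class ShowLoadedTokens {
  public static void main(String[] args) throws IOException {
    UserGroupInformation ugi = UserGroupInformation.getCurrentUser();
    for (Token<?> token : ugi.getTokens()) {
      // Expect something like "S3ADelegationToken/Full" for the test bucket.
      System.out.println(token.getKind() + " for service " + token.getService());
    }
  }
}
{code}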





[jira] [Commented] (HADOOP-16547) s3guard prune command doesn't get AWS auth chain from FS

2019-09-12 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16928767#comment-16928767
 ] 

Steve Loughran commented on HADOOP-16547:
-

To recreate the problem, you also need to set the DynamoDB table name (here, the 
same as the bucket name), or it doesn't init:
{code}
~/P/R/fsck bin/hadoop s3guard prune -seconds 0 -tombstone 
s3a://hwdev-steve-ireland-new/
java.lang.IllegalArgumentException: No DynamoDB table name configured
at 
com.google.common.base.Preconditions.checkArgument(Preconditions.java:141)
at 
org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.initialize(DynamoDBMetadataStore.java:497)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.initMetadataStore(S3GuardTool.java:318)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool$Prune.run(S3GuardTool.java:1072)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:402)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:1767)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.main(S3GuardTool.java:1776)
2019-09-12 18:26:53,786 [main] INFO  util.ExitUtil 
(ExitUtil.java:terminate(210)) 
{code}




[jira] [Commented] (HADOOP-16547) s3guard prune command doesn't get AWS auth chain from FS

2019-09-12 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16928726#comment-16928726
 ] 

Steve Loughran commented on HADOOP-16547:
-

Actually, there's a simpler way to verify the move: look at the stack trace when 
no credentials are available and verify the failure now happens during FS 
instantiation rather than metastore init.

{code}
2019-09-12 17:50:20,820 [main] INFO  service.AbstractService 
(AbstractService.java:noteFailure(267)) - Service S3ADelegationTokens failed in 
state STARTED
org.apache.hadoop.fs.s3a.auth.delegation.DelegationTokenIOException: no 
credentials in configuration or environment variables:  No AWS credentials
at 
org.apache.hadoop.fs.s3a.auth.MarshalledCredentials.validate(MarshalledCredentials.java:336)
at 
org.apache.hadoop.fs.s3a.auth.delegation.FullCredentialsTokenBinding.loadAWSCredentials(FullCredentialsTokenBinding.java:106)
at 
org.apache.hadoop.fs.s3a.auth.delegation.FullCredentialsTokenBinding.deployUnbonded(FullCredentialsTokenBinding.java:119)
at 
org.apache.hadoop.fs.s3a.auth.delegation.S3ADelegationTokens.deployUnbonded(S3ADelegationTokens.java:245)
at 
org.apache.hadoop.fs.s3a.auth.delegation.S3ADelegationTokens.bindToAnyDelegationToken(S3ADelegationTokens.java:278)
at 
org.apache.hadoop.fs.s3a.auth.delegation.S3ADelegationTokens.serviceStart(S3ADelegationTokens.java:199)
at 
org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.bindAWSClient(S3AFileSystem.java:517)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:366)
at 
org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3370)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:136)
at 
org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3419)
at org.apache.hadoop.fs.FileSystem$Cache.getUnique(FileSystem.java:3393)
at org.apache.hadoop.fs.FileSystem.newInstance(FileSystem.java:555)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.initS3AFileSystem(S3GuardTool.java:360)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.maybeInitFilesystem(S3GuardTool.java:381)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool$Prune.run(S3GuardTool.java:1098)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:425)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:1700)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.main(S3GuardTool.java:1709)
2019-09-12 17:50:20,822 [main] DEBUG delegation.S3ADelegationTokens 
(S3ADelegationTokens.java:serviceStop(221)) - Stopping delegation tokens
org.apache.hadoop.service.ServiceStateException: 
org.apache.hadoop.fs.s3a.auth.delegation.DelegationTokenIOException: no 
credentials in configuration or environment variables:  No AWS credentials
at 
org.apache.hadoop.service.ServiceStateException.convert(ServiceStateException.java:105)
at 
org.apache.hadoop.service.AbstractService.start(AbstractService.java:203)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.bindAWSClient(S3AFileSystem.java:517)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:366)
at 
org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3370)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:136)
at 
org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3419)
at org.apache.hadoop.fs.FileSystem$Cache.getUnique(FileSystem.java:3393)
at org.apache.hadoop.fs.FileSystem.newInstance(FileSystem.java:555)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.initS3AFileSystem(S3GuardTool.java:360)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.maybeInitFilesystem(S3GuardTool.java:381)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool$Prune.run(S3GuardTool.java:1098)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:425)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:1700)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.main(S3GuardTool.java:1709)
Caused by: org.apache.hadoop.fs.s3a.auth.delegation.DelegationTokenIOException: 
no credentials in configuration or environment variables:  No AWS credentials
at 
org.apache.hadoop.fs.s3a.auth.MarshalledCredentials.validate(MarshalledCredentials.java:336)
at 
org.apache.hadoop.fs.s3a.auth.delegation.FullCredentialsTokenBinding.loadAWSCredentials(FullCredentialsTokenBinding.java:106)
at 
org.apache.hadoop.fs.s3a.auth.delegation.FullCredentialsTokenBinding.deployUnbonded(FullCredentialsTokenBinding.java:119)
{code}

[jira] [Commented] (HADOOP-16547) s3guard prune command doesn't get AWS auth chain from FS

2019-09-12 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16928722#comment-16928722
 ] 

Steve Loughran commented on HADOOP-16547:
-

The latest patch seems to do the load, but I need to compare with the unpatched 
version to verify this is a fix. Note the trace also relies on HADOOP-16568 for 
the full DT to work; my next test run will use session creds instead.

{code}
bin/hadoop s3guard prune -seconds 0 -tombstone s3a://hwdev-steve-ireland-new/
2019-09-12 17:47:01,207 [main] DEBUG security.UserGroupInformation 
(UserGroupInformation.java:login(260)) - hadoop login
2019-09-12 17:47:01,210 [main] DEBUG security.UserGroupInformation 
(UserGroupInformation.java:commit(193)) - hadoop login commit
2019-09-12 17:47:01,215 [main] DEBUG security.UserGroupInformation 
(UserGroupInformation.java:commit(221)) - using local user:UnixPrincipal: stevel
2019-09-12 17:47:01,215 [main] DEBUG security.UserGroupInformation 
(UserGroupInformation.java:commit(227)) - Using user: "UnixPrincipal: stevel" 
with name stevel
2019-09-12 17:47:01,215 [main] DEBUG security.UserGroupInformation 
(UserGroupInformation.java:commit(241)) - User entry: "stevel"
2019-09-12 17:47:01,217 [main] DEBUG security.UserGroupInformation 
(UserGroupInformation.java:createLoginUser(768)) - Reading credentials from 
location /Users/stevel/Projects/Releases/secrets.bin
2019-09-12 17:47:01,269 [main] DEBUG security.UserGroupInformation 
(UserGroupInformation.java:createLoginUser(773)) - Loaded 1 tokens from 
/Users/stevel/Projects/Releases/secrets.bin
2019-09-12 17:47:01,269 [main] DEBUG security.UserGroupInformation 
(UserGroupInformation.java:createLoginUser(815)) - UGI loginUser:stevel 
(auth:SIMPLE)
2019-09-12 17:47:01,761 [main] DEBUG delegation.S3ADelegationTokens 
(S3ADelegationTokens.java:serviceInit(185)) - Filesystem 
s3a://hwdev-steve-ireland-new is using delegation tokens of kind 
S3ADelegationToken/Full
2019-09-12 17:47:02,018 [main] DEBUG delegation.S3ADelegationTokens 
(S3ADelegationTokens.java:lookupToken(606)) - Looking for token for service 
s3a://hwdev-steve-ireland-new in credentials
2019-09-12 17:47:02,021 [main] DEBUG delegation.S3ADelegationTokens 
(S3ADelegationTokens.java:lookupToken(610)) - Found token of kind 
S3ADelegationToken/Full
2019-09-12 17:47:02,057 [main] INFO  delegation.S3ADelegationTokens 
(S3ADelegationTokens.java:bindToDelegationToken(327)) - Using delegation token 
S3ATokenIdentifier{S3ADelegationToken/Full; uri=s3a://hwdev-steve-ireland-new; 
timestamp=1568301567114; encryption=(no encryption); 
50f24529-7fa3-4099-b776-f00c9a83ad96; Created on HW13176-2.local/192.168.1.139 
at time 2019-09-12T15:19:25.280Z.; source = Hadoop configuration data}; full 
credentials (valid)
2019-09-12 17:47:02,057 [main] INFO  delegation.S3ADelegationTokens 
(DurationInfo.java:(72)) - Starting: Creating Delegation Token
2019-09-12 17:47:02,059 [main] INFO  delegation.S3ADelegationTokens 
(DurationInfo.java:close(87)) - Creating Delegation Token: duration 0:00.002s
2019-09-12 17:47:02,059 [main] DEBUG delegation.S3ADelegationTokens 
(S3ADelegationTokens.java:serviceStart(200)) - S3A Delegation support token 
S3ATokenIdentifier{S3ADelegationToken/Full; uri=s3a://hwdev-steve-ireland-new; 
timestamp=1568301567114; encryption=(no encryption); 
50f24529-7fa3-4099-b776-f00c9a83ad96; Created on HW13176-2.local/192.168.1.139 
at time 2019-09-12T15:19:25.280Z.; source = Hadoop configuration data}; full 
credentials (valid) with Token binding S3ADelegationToken/Full
2019-09-12 17:47:03,520 [main] INFO  s3guard.S3GuardTool 
(S3GuardTool.java:initMetadataStore(323)) - Metadata store 
DynamoDBMetadataStore{region=eu-west-1, tableName=hwdev-steve-ireland-new, 
tableArn=arn:aws:dynamodb:eu-west-1:980678866538:table/hwdev-steve-ireland-new} 
is initialized.
2019-09-12 17:47:03,540 [main] INFO  s3guard.DynamoDBMetadataStore 
(DurationInfo.java:(72)) - Starting: Pruning DynamoDB Store
2019-09-12 17:47:03,574 [main] INFO  s3guard.DynamoDBMetadataStore 
(DurationInfo.java:close(87)) - Pruning DynamoDB Store: duration 0:00.034s
2019-09-12 17:47:03,575 [main] INFO  s3guard.DynamoDBMetadataStore 
(DynamoDBMetadataStore.java:innerPrune(1605)) - Finished pruning 0 items in 
batches of 25
2019-09-12 17:47:03,580 [shutdown-hook-0] DEBUG delegation.S3ADelegationTokens 
(S3ADelegationTokens.java:serviceStop(221)) - Stopping delegation tokens

{code}


[jira] [Commented] (HADOOP-16547) s3guard prune command doesn't get AWS auth chain from FS

2019-09-06 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16924403#comment-16924403
 ] 

Steve Loughran commented on HADOOP-16547:
-

More: to get to this failure you have to have set the fs.s3a.s3guard.ddb.region 
property or provided the -region option; otherwise the FS is instantiated to 
work out the region. 
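For completeness, an assumed, illustrative Java snippet of that configuration 
path (the property name is as quoted in this comment; eu-west-1 is just the 
region seen in the logs earlier in this thread):

{code}
// Assumed, illustrative: with the region set in configuration, the DDB store
// can be created without binding an S3A filesystem (the pre-patch behaviour).
import org.apache.hadoop.conf.Configuration;

public class DdbRegionConfigExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.set("fs.s3a.s3guard.ddb.region", "eu-west-1");
    System.out.println("DDB region: " + conf.get("fs.s3a.s3guard.ddb.region"));
  }
}
{code}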




[jira] [Commented] (HADOOP-16547) s3guard prune command doesn't get AWS auth chain from FS

2019-09-04 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16922637#comment-16922637
 ] 

Steve Loughran commented on HADOOP-16547:
-

{code}
 hadoop s3guard prune -days 7 -hours 4 -minutes 0 -seconds 1 s3a://landsat-pds/ 
java.nio.file.AccessDeniedException: spark-sql-102039-j8n: 
org.apache.hadoop.fs.s3a.auth.NoAuthWithAWSException: No AWS Credentials 
provided by TemporaryAWSCredentialsProvider SimpleAWSCredentialsProvider 
EnvironmentVariableCredentialsProvider IAMInstanceCredentialsProvider : 
com.amazonaws.SdkClientException: Unable to load AWS credentials from 
environment variables (AWS_ACCESS_KEY_ID (or AWS_ACCESS_KEY) and AWS_SECRET_KEY 
(or AWS_SECRET_ACCESS_KEY))
at 
org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:200)
at 
org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.initTable(DynamoDBMetadataStore.java:1811)
at 
org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.initialize(DynamoDBMetadataStore.java:520)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.initMetadataStore(S3GuardTool.java:317)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool$Prune.run(S3GuardTool.java:1071)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:401)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:1672)
at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.main(S3GuardTool.java:1681)
Caused by: org.apache.hadoop.fs.s3a.auth.NoAuthWithAWSException: No AWS 
Credentials provided by TemporaryAWSCredentialsProvider 
SimpleAWSCredentialsProvider EnvironmentVariableCredentialsProvider 
IAMInstanceCredentialsProvider : com.amazonaws.SdkClientException: Unable to 
load AWS credentials from environment variables (AWS_ACCESS_KEY_ID (or 
AWS_ACCESS_KEY) and AWS_SECRET_KEY (or AWS_SECRET_ACCESS_KEY))
at 
org.apache.hadoop.fs.s3a.AWSCredentialProviderList.getCredentials(AWSCredentialProviderList.java:216)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.getCredentialsFromContext(AmazonHttpClient.java:1225)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.runBeforeRequestHandlers(AmazonHttpClient.java:801)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:751)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:744)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:726)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:686)
at 
com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:668)
at 
com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:532)
at 
com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:512)
at 
com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.doInvoke(AmazonDynamoDBClient.java:4279)
at 
com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.invoke(AmazonDynamoDBClient.java:4246)
at 
com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.executeDescribeTable(AmazonDynamoDBClient.java:1905)
at 
com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.describeTable(AmazonDynamoDBClient.java:1871)
at 
com.amazonaws.services.dynamodbv2.document.Table.describe(Table.java:137)
at 
org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.initTable(DynamoDBMetadataStore.java:1746)
... 7 more
Caused by: com.amazonaws.SdkClientException: Unable to load AWS credentials 
from environment variables (AWS_ACCESS_KEY_ID (or AWS_ACCESS_KEY) and 
AWS_SECRET_KEY (or AWS_SECRET_ACCESS_KEY))
at 
com.amazonaws.auth.EnvironmentVariableCredentialsProvider.getCredentials(EnvironmentVariableCredentialsProvider.java:50)
at 
org.apache.hadoop.fs.s3a.AWSCredentialProviderList.getCredentials(AWSCredentialProviderList.java:177)
... 22 more
{code}

