Yeah, that’s what I thought. I found this:
https://issues.apache.org/jira/browse/HADOOP-3733. Posted a couple of questions
there, but prior to that, the last comment was over a year ago. Thanks for the
response!
-Terry
From: Elliot West <tea...@gmail.com>
Reply-To: "user@hive.apache.org" <user@hive.apache.org>
Date: Tuesday, February 2, 2016 at 7:57 AM
To: "user@hive.apache.org" <user@hive.apache.org>
Subject: Re: Hive table over S3 bucket with s3a
When I last looked at this, the recommendation was simply to regenerate the key,
as you suggest.
On 2 February 2016 at 15:52, Terry Siu <terry@dev9.com> wrote:
Hi,
I’m wondering if anyone has found a workaround for defining a Hive table over an
S3 bucket when the secret access key contains ‘/’ characters. I’m using Hive
0.14 on HDP 2.2.4, and the statement I used is:
CREATE EXTERNAL TABLE IF NOT EXISTS s3_foo (
key INT, value STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
LOCATION 's3a://<ACCESS_KEY>:<SECRET_KEY>@<BUCKET>/<PATH>/';
The following error is returned:
FAILED: IllegalArgumentException The bucketName parameter must be specified.
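(An often-suggested variant is to percent-encode the ‘/’ in the secret key as
%2F, e.g.:

LOCATION 's3a://<ACCESS_KEY>:<SECRET_KEY_WITH_%2F>@<BUCKET>/<PATH>/'

but per HADOOP-3733, even the encoded form can break on some Hadoop versions,
so treat this as a sketch rather than a fix.)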
A workaround was to set the fs.s3a.access.key and fs.s3a.secret.key
configuration properties and then change the location URL to
s3a://<BUCKET>/<PATH>.
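For reference, a minimal sketch of that configuration with placeholder values,
issued either as SET commands in the Hive session or as the equivalent
properties in core-site.xml:

SET fs.s3a.access.key=<ACCESS_KEY>;
SET fs.s3a.secret.key=<SECRET_KEY>;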
However, this produces the following error:
FAILED: Execution Error, return code 1 from
org.apache.hadoop.hive.ql.exec.DDLTask.
MetaException(message:com.amazonaws.AmazonClientException: Unable to load AWS
credentials from any provider in the chain)
Has anyone found a way to create a Hive-over-S3 table when the key contains ‘/’
characters, or is it just standard practice to regenerate the keys until IAM
returns one that doesn’t have the offending characters?
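(For anyone taking the regeneration route, a rough AWS CLI loop, with a
placeholder user name, would be:

aws iam create-access-key --user-name <iam-user>
# if the returned SecretAccessKey contains '/', discard it and retry:
aws iam delete-access-key --user-name <iam-user> --access-key-id <ACCESS_KEY_ID>

i.e., keep regenerating until a key comes back without a slash.)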
Thanks,
-Terry