Hi Arjun, 
Thanks for your help.  Are there settings in S3 that would prevent Drill from 
connecting?  I'll try the hdfs shell; I am already able to connect with the AWS CLI.   
My hunch is that either a permission is not set correctly on S3 or I'm missing 
some config variable in Drill. 
— C


> On Oct 20, 2017, at 14:12, Arjun kr <arjun...@outlook.com> wrote:
> 
> Hi Charles,
> 
> 
> Any chance you can test S3 connectivity with other tools like the hdfs shell or 
> Hive, in case you haven't tried already (and these tools are available)? That may 
> help identify whether it is a Drill-specific issue.
> 
> 
> For connecting via the hdfs shell, you can try the command below.
> 
> 
> hadoop fs -Dfs.s3a.access.key="XXXX" -Dfs.s3a.secret.key="YYYYY" -ls 
> s3a://<bucket-name>/
> 
> 
> Enable DEBUG logging if needed.
> 
> 
> export HADOOP_ROOT_LOGGER=DEBUG,console
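> If the credentials work from the shell, you can also make them persistent 
> instead of passing -D options on every command. A sketch of the core-site.xml 
> entries the s3a connector reads (property names are from hadoop-aws; the 
> placeholder values stand in for your own keys):

```xml
<configuration>
  <!-- Credentials read by the s3a filesystem (hadoop-aws) -->
  <property>
    <name>fs.s3a.access.key</name>
    <value>XXXX</value>
  </property>
  <property>
    <name>fs.s3a.secret.key</name>
    <value>YYYYY</value>
  </property>
</configuration>
```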
> 
> 
> Thanks,
> 
> 
> Arjun
> 
> 
> ________________________________
> From: Padma Penumarthy <ppenumar...@mapr.com>
> Sent: Friday, October 20, 2017 3:00 AM
> To: user@drill.apache.org
> Subject: Re: S3 Connection Issues
> 
> Hi Charles,
> 
> I tried us-west-2 and it worked fine for me with Drill built from the latest 
> source.
> I did not do anything special: I just enabled the S3 plugin and updated the 
> plugin configuration like this.
> 
> {
>   "type": "file",
>   "enabled": true,
>   "connection": "s3a://<bucket-name>",
>   "config": {
>     "fs.s3a.access.key": "XXXX",
>     "fs.s3a.secret.key": "YYYY"
>   }
> }
> 
> I am able to run show databases and can also query the Parquet files I 
> uploaded to the bucket.
> 
> 0: jdbc:drill:zk=local> show databases;
> +---------------------+
> |     SCHEMA_NAME     |
> +---------------------+
> | INFORMATION_SCHEMA  |
> | cp.default          |
> | dfs.default         |
> | dfs.root            |
> | dfs.tmp             |
> | s3.default          |
> | s3.root             |
> | sys                 |
> +---------------------+
> 8 rows selected (2.892 seconds)
> 
> 
> Thanks
> Padma
> 
> On Oct 18, 2017, at 9:18 PM, Charles Givre <cgi...@gmail.com> wrote:
> 
> Hi Padma,
> The bucket is in us-west-2.  I also discovered that some of the variable 
> names in the documentation on the main Drill site are incorrect.  Do I need 
> to specify the region in the configuration somewhere?
> 
> As an update: after discovering that the variable names were incorrect and 
> that I didn't have JetS3t installed properly, I'm now getting the following 
> error:
> 
> jdbc:drill:zk=local> show databases;
> Error: RESOURCE ERROR: Failed to create schema tree.
> 
> 
> [Error Id: e6012aa2-c775-46b9-b3ee-0af7d0b0871d on 
> charless-mbp-2.fios-router.home:31010]
> 
> (org.apache.hadoop.fs.s3.S3Exception) org.jets3t.service.S3ServiceException: 
> Service Error Message. -- ResponseCode: 403, ResponseStatus: Forbidden, XML 
> Error Message: <?xml version="1.0" 
> encoding="UTF-8"?><Error><Code>SignatureDoesNotMatch</Code><Message>The 
> request signature we calculated does not match the signature you provided. 
> Check your key and signing method.</Message></Error>
>   org.apache.hadoop.fs.s3.Jets3tFileSystemStore.get():175
>   org.apache.hadoop.fs.s3.Jets3tFileSystemStore.retrieveINode():221
> 
> Thanks,
> — C
> 
> 
> On Oct 19, 2017, at 00:14, Padma Penumarthy <ppenumar...@mapr.com> wrote:
> 
> Which AWS region are you trying to connect to?
> We have a problem connecting to regions that support only the v4 signature,
> since the version of Hadoop we include in Drill is old.
> Last time I tried, using Hadoop 2.8.1 worked for me.
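> If the bucket is in a v4-only region, pointing s3a at the region-specific 
> endpoint is usually needed as well. A sketch for core-site.xml (fs.s3a.endpoint 
> is the hadoop-aws property; eu-central-1 here is just an example of a 
> v4-only region):

```xml
<!-- Region-specific S3 endpoint, required for v4-only regions -->
<property>
  <name>fs.s3a.endpoint</name>
  <value>s3.eu-central-1.amazonaws.com</value>
</property>
```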
> 
> Thanks
> Padma
> 
> 
> On Oct 18, 2017, at 8:14 PM, Charles Givre <cgi...@gmail.com> wrote:
> 
> Hello all,
> I'm trying to use Drill to query data in an S3 bucket and running into some 
> issues which I can't seem to fix.  I followed the various instructions online 
> to set up Drill with S3, and put my keys in both core-site.xml and the 
> plugin config, but every time I attempt to do anything I get the following 
> errors:
> 
> 
> jdbc:drill:zk=local> show databases;
> Error: SYSTEM ERROR: AmazonS3Exception: Status Code: 403, AWS Service: Amazon 
> S3, AWS Request ID: 56D1999BD1E62DEB, AWS Error Code: null, AWS Error 
> Message: Forbidden
> 
> 
> [Error Id: 65d0bb52-a923-4e98-8ab1-65678169140e on 
> charless-mbp-2.fios-router.home:31010] (state=,code=0)
> 0: jdbc:drill:zk=local> show databases;
> Error: SYSTEM ERROR: AmazonS3Exception: Status Code: 403, AWS Service: Amazon 
> S3, AWS Request ID: 4D2CBA8D42A9ECA0, AWS Error Code: null, AWS Error 
> Message: Forbidden
> 
> 
> [Error Id: 25a2d008-2f4d-4433-a809-b91ae063e61a on 
> charless-mbp-2.fios-router.home:31010] (state=,code=0)
> 0: jdbc:drill:zk=local> show files in s3.root;
> Error: SYSTEM ERROR: AmazonS3Exception: Status Code: 403, AWS Service: Amazon 
> S3, AWS Request ID: 2C635944EDE591F0, AWS Error Code: null, AWS Error 
> Message: Forbidden
> 
> 
> [Error Id: 02e136f5-68c0-4b47-9175-a9935bda5e1c on 
> charless-mbp-2.fios-router.home:31010] (state=,code=0)
> 0: jdbc:drill:zk=local> show schemas;
> Error: SYSTEM ERROR: AmazonS3Exception: Status Code: 403, AWS Service: Amazon 
> S3, AWS Request ID: 646EB5B2EBCF7CD2, AWS Error Code: null, AWS Error 
> Message: Forbidden
> 
> 
> [Error Id: 954aaffe-616a-4f40-9ba5-d4b7c04fe238 on 
> charless-mbp-2.fios-router.home:31010] (state=,code=0)
> 
> I have verified that the keys are correct by using the AWS CLI and 
> downloading some of the files, but I'm at a loss as to how to debug.  
> Any suggestions?
> Thanks in advance,
> — C
> 
> 
> 
