Hi Arjun,
I tried what you suggested, but it is not working and queries are going into ENQUEUED
state. Please find the log below:
Error
[drill-executor-1] ERROR o.a.d.exec.server.BootStrapContext - org.apache.drill.exec.work.foreman.Foreman.run() leaked an exception.
java.lang.NoClassDefFoundError: org/apache/hadoop/fs/GlobalStorageStatistics$StorageStatisticsProvider
        at java.lang.Class.forName0(Native Method) ~[na:1.8.0_72]
        at java.lang.Class.forName(Class.java:348) ~[na:1.8.0_72]
        at org.apache.hadoop.conf.Configuration.getClassByNameOrNull(Configuration.java:2134) ~[hadoop-common-2.7.1.jar:na]
        at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2099) ~[hadoop-common-2.7.1.jar:na]
        at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2193) ~[hadoop-common-2.7.1.jar:na]
        at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2638) ~[hadoop-common-2.7.1.jar:na]
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2651) ~[hadoop-common-2.7.1.jar:na]
        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:92) ~[hadoop-common-2.7.1.jar:na]
        at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2687) ~[hadoop-common-2.7.1.jar:na]
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2669) ~[hadoop-common-2.7.1.jar:na]
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:371) ~[hadoop-common-2.7.1.jar:na]
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:170) ~[hadoop-common-2.7.1.jar:na]
        at org.apache.drill.exec.store.dfs.DrillFileSystem.<init>(DrillFileSystem.java:91) ~[drill-java-exec-1.11.0.jar:1.11.0]
        at org.apache.drill.exec.util.ImpersonationUtil$2.run(ImpersonationUtil.java:219) ~[drill-java-exec-1.11.0.jar:1.11.0]
        at org.apache.drill.exec.util.ImpersonationUtil$2.run(ImpersonationUtil.java:216) ~[drill-java-exec-1.11.0.jar:1.11.0]
        at java.security.AccessController.doPrivileged(Native Method) ~[na:1.8.0_72]
        at javax.security.auth.Subject.doAs(Subject.java:422) ~[na:1.8.0_72]
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657) ~[hadoop-common-2.7.1.jar:na]
        at org.apache.drill.exec.util.ImpersonationUtil.createFileSystem(ImpersonationUtil.java:216) ~[drill-java-exec-1.11.0.jar:1.11.0]
        at org.apache.drill.exec.util.ImpersonationUtil.createFileSystem(ImpersonationUtil.java:208) ~[drill-java-exec-1.11.0.jar:1.11.0]
        at org.apache.drill.exec.store.dfs.WorkspaceSchemaFactory.accessible(WorkspaceSchemaFactory.java:153) ~[drill-java-exec-1.11.0.jar:1.11.0]
        at org.apache.drill.exec.store.dfs.FileSystemSchemaFactory$FileSystemSchema.<init>(FileSystemSchemaFactory.java:77) ~[drill-java-exec-1.11.0.jar:1.11.0]
        at org.apache.drill.exec.store.dfs.FileSystemSchemaFactory.registerSchemas(FileSystemSchemaFactory.java:64) ~[drill-java-exec-1.11.0.jar:1.11.0]
        at org.apache.drill.exec.store.dfs.FileSystemPlugin.registerSchemas(FileSystemPlugin.java:149) ~[drill-java-exec-1.11.0.jar:1.11.0]
        at org.apache.drill.exec.store.StoragePluginRegistryImpl$DrillSchemaFactory.registerSchemas(StoragePluginRegistryImpl.java:396) ~[drill-java-exec-1.11.0.jar:1.11.0]
        at org.apache.drill.exec.store.SchemaTreeProvider.createRootSchema(SchemaTreeProvider.java:110) ~[drill-java-exec-1.11.0.jar:1.11.0]
        at org.apache.drill.exec.store.SchemaTreeProvider.createRootSchema(SchemaTreeProvider.java:99) ~[drill-java-exec-1.11.0.jar:1.11.0]
        at org.apache.drill.exec.ops.QueryContext.getRootSchema(QueryContext.java:164) ~[drill-java-exec-1.11.0.jar:1.11.0]
        at org.apache.drill.exec.ops.QueryContext.getRootSchema(QueryContext.java:153) ~[drill-java-exec-1.11.0.jar:1.11.0]
        at org.apache.drill.exec.ops.QueryContext.getNewDefaultSchema(QueryContext.java:139) ~[drill-java-exec-1.11.0.jar:1.11.0]
        at org.apache.drill.exec.planner.sql.SqlConverter.<init>(SqlConverter.java:111) ~[drill-java-exec-1.11.0.jar:1.11.0]
        at org.apache.drill.exec.planner.sql.DrillSqlWorker.getQueryPlan(DrillSqlWorker.java:101) ~[drill-java-exec-1.11.0.jar:1.11.0]
        at org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan(DrillSqlWorker.java:79) ~[drill-java-exec-1.11.0.jar:1.11.0]
        at org.apache.drill.exec.work.foreman.Foreman.runSQL(Foreman.java:1050) ~[drill-java-exec-1.11.0.jar:1.11.0]
        at org.apache.drill.exec.work.foreman.Foreman.run(Foreman.java:280) ~[drill-java-exec-1.11.0.jar:1.11.0]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[na:1.8.0_72]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_72]
        at java.lang.Thread.run(Thread.java:745) [na:1.8.0_72]
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.fs.GlobalStorageStatistics$StorageStatisticsProvider
        at java.net.URLClassLoader.findClass(URLClassLoader.java:381) ~[na:1.8.0_72]
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424) ~[na:1.8.0_72]
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331) ~[na:1.8.0_72]
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357) ~[na:1.8.0_72]
        ... 38 common frames omitted

@Padma, thanks for the help, but I will first try to build it myself using the link
below, and if things don't work out I will surely need your help:
https://drill.apache.org/docs/compiling-drill-from-source/
Also, as you mentioned, I will change the Hadoop version to 2.9.0 in the pom file and
then build it. Let me know if anything else needs to be taken care of.
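For reference, the version change described above is typically a one-line edit in the root pom.xml of the Drill source tree. This is a sketch only; the property name `hadoop.version` is taken from the Drill 1.11 build and should be verified against the source you download:

```xml
<!-- root pom.xml of the Drill source tree: bump the bundled Hadoop version -->
<properties>
  <hadoop.version>2.9.0</hadoop.version>
</properties>
```

After the edit, a full build is usually `mvn clean install -DskipTests` from the source root, and the distribution tarball should end up under distribution/target.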

On Wed, Feb 14, 2018 9:17 AM, Padma Penumarthy <ppenumar...@mapr.com> wrote:

Yes, I built it by changing the version in the pom file.

Try and see if what Arjun suggested works. If not, you can download the source,
change the version, and build; or, if you prefer, I can provide you with a private
build that you can try.

Thanks
Padma

On Feb 13, 2018, at 1:46 AM, Anup Tiwari <anup.tiw...@games24x7.com> wrote:

Hi Padma,

As you mentioned "Last time I tried, using Hadoop 2.8.1 worked for me.", have you
built Drill with Hadoop 2.8.1? If yes, can you share the steps?

I downloaded the 1.11.0 tarball and replaced hadoop-aws-2.7.1.jar with
hadoop-aws-2.9.0.jar, but I am still not able to query the S3 bucket successfully;
queries are stuck in the starting state.

We are trying to query the "ap-south-1" region, which supports only v4 signatures.
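As an aside: for a v4-only region such as ap-south-1, the s3a connector generally also needs the region-specific endpoint set explicitly, since the default global endpoint falls back to v2 signing. The property below is the standard hadoop-aws one; this is a sketch, assuming s3a is the connection scheme in use:

```xml
<!-- core-site.xml: point s3a at the regional endpoint so V4 signing is used -->
<configuration>
  <property>
    <name>fs.s3a.endpoint</name>
    <value>s3.ap-south-1.amazonaws.com</value>
  </property>
</configuration>
```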

On Thu, Oct 19, 2017 9:44 AM, Padma Penumarthy <ppenumar...@mapr.com> wrote:

Which AWS region are you trying to connect to?

We have a problem connecting to regions which support only v4 signatures, since the
version of Hadoop we include in Drill is old. Last time I tried, using Hadoop 2.8.1
worked for me.

Thanks
Padma

On Oct 18, 2017, at 8:14 PM, Charles Givre <cgi...@gmail.com> wrote:

Hello all,

I'm trying to use Drill to query data in an S3 bucket and am running into some
issues which I can't seem to fix. I followed the various instructions online to
set up Drill with S3, and put my keys in both core-site.xml and the plugin
config, but every time I attempt to do anything I get the following errors:

jdbc:drill:zk=local> show databases;
Error: SYSTEM ERROR: AmazonS3Exception: Status Code: 403, AWS Service: Amazon S3, AWS Request ID: 56D1999BD1E62DEB, AWS Error Code: null, AWS Error Message: Forbidden
[Error Id: 65d0bb52-a923-4e98-8ab1-65678169140e on charless-mbp-2.fios-router.home:31010] (state=,code=0)

0: jdbc:drill:zk=local> show databases;
Error: SYSTEM ERROR: AmazonS3Exception: Status Code: 403, AWS Service: Amazon S3, AWS Request ID: 4D2CBA8D42A9ECA0, AWS Error Code: null, AWS Error Message: Forbidden
[Error Id: 25a2d008-2f4d-4433-a809-b91ae063e61a on charless-mbp-2.fios-router.home:31010] (state=,code=0)

0: jdbc:drill:zk=local> show files in s3.root;
Error: SYSTEM ERROR: AmazonS3Exception: Status Code: 403, AWS Service: Amazon S3, AWS Request ID: 2C635944EDE591F0, AWS Error Code: null, AWS Error Message: Forbidden
[Error Id: 02e136f5-68c0-4b47-9175-a9935bda5e1c on charless-mbp-2.fios-router.home:31010] (state=,code=0)

0: jdbc:drill:zk=local> show schemas;
Error: SYSTEM ERROR: AmazonS3Exception: Status Code: 403, AWS Service: Amazon S3, AWS Request ID: 646EB5B2EBCF7CD2, AWS Error Code: null, AWS Error Message: Forbidden
[Error Id: 954aaffe-616a-4f40-9ba5-d4b7c04fe238 on charless-mbp-2.fios-router.home:31010] (state=,code=0)

I have verified that the keys are correct by using the AWS CLI and downloading
some of the files, but I'm kind of at a loss as to how to debug. Any suggestions?

Thanks in advance,
— C
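For reference, the storage plugin config mentioned above typically has the following shape in Drill's web UI (the bucket name and keys below are placeholders; credentials can equally live in core-site.xml, and the `formats` section is abbreviated to two common entries):

```json
{
  "type": "file",
  "enabled": true,
  "connection": "s3a://my-bucket",
  "config": {
    "fs.s3a.access.key": "YOUR_ACCESS_KEY",
    "fs.s3a.secret.key": "YOUR_SECRET_KEY"
  },
  "workspaces": {
    "root": {
      "location": "/",
      "writable": false,
      "defaultInputFormat": null
    }
  },
  "formats": {
    "csv": { "type": "text", "extensions": ["csv"], "delimiter": "," },
    "parquet": { "type": "parquet" }
  }
}
```

A 403 with keys that work from the AWS CLI often points at the signature-version issue discussed earlier in the thread rather than at the credentials themselves.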


Regards,
Anup Tiwari





Regards,
Anup Tiwari