Re: [1.9.0] : UserException: SYSTEM ERROR: IllegalReferenceCountException: refCnt: 0 and then SYSTEM ERROR: IOException: Failed to shutdown streamer

2018-03-12 Thread Anup Tiwari
Hi All, We are getting the "IllegalReferenceCountException" issue again for a few queries over the last 2 days, and we are currently on Drill 1.12.0. Can anybody help me understand the exact reason behind this? On Thu, Dec 14, 2017 4:52 PM, Anup Tiwari anup.tiw...@games24x7.com wr

Re: hive connection as generic jdbc

2018-03-12 Thread Asim Kanungo
Hi Arjun, I have tried it, but no luck. I am still getting the INSTANCE error (Caused by: java.lang.NoSuchFieldError: INSTANCE). I am assuming it is happening due to some version mismatch; I am not strong in Java, but I found an article at the link below. Can you please suggest if we can do any changes

Re: hive connection as generic jdbc

2018-03-12 Thread Arjun kr
Hi Asim, You may give it a shot by adding this uber jar to Drill's 3rd-party directory (remove the previously copied jars first). For the truststore, try giving an absolute path. The test was to validate whether the hive uber jar works with your Hive setup. Thanks, Arjun From: Asim K

Re: hive connection as generic jdbc

2018-03-12 Thread Asim Kanungo
Hi Arjun, I have tried the hive jdbc uber jar and was able to make a successful connection. java -cp "hive-jdbc-uber-2.6.3.0-235.jar:sqlline-1.1.9-drill-r7.jar:jline-2.10.jar" sqlline.SqlLine -d org.apache.hive.jdbc.HiveDriver -u "jdbc:hive2://knox server name:port/default;ssl=true;sslTrustStore=

Re: hive connection as generic jdbc

2018-03-12 Thread Arjun kr
Hi Asim, You may try using the hive uber jar in case you have not tried it already. See if the link below helps. https://github.com/timveil/hive-jdbc-uber-jar/releases It would be ideal to test this uber jar with a sample JDBC application before trying it with Drill. java -cp "hive-jdbc-uber-2.6.3.0-235.ja

Re: Setting up drill to query AWS S3 behind a proxy

2018-03-12 Thread Arjun kr
Hi Tyler, The Hadoop-AWS module provides settings for proxy setup. You may try setting these configs in $DRILL_CONF/core-site.xml and restarting the drill-bits. I have not tested it myself, though. https://hadoop.apache.org/docs/r2.7.1/hadoop-aws/tools/hadoop-aws/index.html fs.s3a.proxy.host Hostname
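Arjun's suggestion can be sketched as the core-site.xml fragment below. The property names come from the Hadoop-AWS documentation linked above; the host, port, and credential values are placeholders, and (as the message notes) this setup is untested:

```xml
<!-- $DRILL_CONF/core-site.xml : proxy settings for the s3a connector -->
<!-- proxy.example.com, 8080, user, and secret are placeholder values -->
<configuration>
  <property>
    <name>fs.s3a.proxy.host</name>
    <value>proxy.example.com</value>
  </property>
  <property>
    <name>fs.s3a.proxy.port</name>
    <value>8080</value>
  </property>
  <!-- Optional: only needed if the proxy requires authentication -->
  <property>
    <name>fs.s3a.proxy.username</name>
    <value>user</value>
  </property>
  <property>
    <name>fs.s3a.proxy.password</name>
    <value>secret</value>
  </property>
</configuration>
```

After editing the file, restart the drill-bits so the s3a connector picks up the new settings.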

Re: hive connection as generic jdbc

2018-03-12 Thread Asim Kanungo
Thanks Kunal... Here are the details. { "type": "jdbc", "driver": "org.apache.hive.jdbc.HiveDriver", "url": "jdbc:hive2://knox address:port/default?ssl=true&transportMode=http&httpPath=pathdetail&sslTrustStore=mytruststore.jks&trustStorePassword=**", "username": "XXX", "password"
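For reference, a generic Drill JDBC storage plugin definition for a HiveServer2 endpoint behind Knox follows the shape below. The driver class is the standard Hive JDBC driver named in the message; the host, port, httpPath, truststore path, and credentials are all placeholder values for illustration:

```json
{
  "type": "jdbc",
  "driver": "org.apache.hive.jdbc.HiveDriver",
  "url": "jdbc:hive2://knox-host:8443/default?ssl=true&transportMode=http&httpPath=gateway/default/hive&sslTrustStore=/full/path/to/mytruststore.jks&trustStorePassword=changeit",
  "username": "hive_user",
  "password": "hive_password",
  "enabled": true
}
```

Note the absolute truststore path: as suggested later in this thread, a bare filename like `mytruststore.jks` may not resolve from the drill-bit's working directory.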

Re: Setting up drill to query AWS S3 behind a proxy

2018-03-12 Thread Padma Penumarthy
Not sure what exactly you mean by proxy settings, but here is what you can do to access files on S3. Enable the S3 storage plugin, then update the connection string, access key, and secret key in the config. If it connects fine, you should see s3.root when you run show databases. Thanks Padma

Setting up drill to query AWS S3 behind a proxy

2018-03-12 Thread Edelman, Tyler
Hello, I am currently trying to set up Drill locally to query a JSON file in Amazon's AWS S3. I have not been able to configure proxy settings for Drill. Could you send me a configuration example of this? Thank you, Tyler Edelman

Re: [Drill 1.12.0] : RESOURCE ERROR: Not enough memory for internal partitioning and fallback mechanism for HashAgg to use unbounded memory is disabled

2018-03-12 Thread Kunal Khatua
Hi Anup, Can you share details about the memory allocations (JVM, etc.) you have for Drill, in addition to the cluster size? Also, could you provide the platform details (e.g. Hadoop version) and the profiles for the succeeded and failed queries, i.e. the JSON of these queries (e.g. http://drillbit:804

Re: [Drill 1.10.0/1.12.0] Query Started Taking Time + frequent one or more node lost connectivity error

2018-03-12 Thread Padma Penumarthy
There can be a lot of issues here. A connection loss error can happen when ZooKeeper thinks a node is dead because it did not get a heartbeat from it. That can be because the node is busy or because you have network problems. Did anything change in your network? Is the data static, or are you adding

Re: [Drill 1.12.0] : RESOURCE ERROR: Not enough memory for internal partitioning and fallback mechanism for HashAgg to use unbounded memory is disabled

2018-03-12 Thread Sorabh Hamirwasia
Setting the session option `drill.exec.hashagg.fallback.enabled`=TRUE means HashAgg will not honor the operator memory limit it was assigned (and thus will not spill to disk), and it may end up consuming unbounded memory. Thanks, Sorabh From: Anup Tiwari
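The option Sorabh describes is set per session with standard Drill ALTER SESSION syntax; a minimal sketch, with the caveat from this thread spelled out in the comments:

```sql
-- Allow HashAgg to exceed its operator memory limit instead of spilling to disk.
-- Caution: the query may then consume unbounded memory and destabilize the node.
ALTER SESSION SET `drill.exec.hashagg.fallback.enabled` = TRUE;

-- Revert to the safer default when done:
ALTER SESSION RESET `drill.exec.hashagg.fallback.enabled`;
```

Because the setting bypasses the memory limit rather than fixing the underlying budget, raising planner.memory.max_query_memory_per_node is generally the safer first step.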

Re: [Drill 1.12.0] : RESOURCE ERROR: Not enough memory for internal partitioning and fallback mechanism for HashAgg to use unbounded memory is disabled

2018-03-12 Thread Anup Tiwari
Hi Kunal, I executed the command below and the query completed in 38.763 sec. alter session set `drill.exec.hashagg.fallback.enabled`=TRUE; Can you tell me what the problem is with setting this variable, since you mentioned it risks instability? On Mon, Mar 12, 2018 6:27 PM, Anup T

Re: [Drill 1.12.0] : RESOURCE ERROR: Not enough memory for internal partitioning and fallback mechanism for HashAgg to use unbounded memory is disabled

2018-03-12 Thread Anup Tiwari
Hi Kunal, I am still getting this error for another query even though I increased the planner.memory.max_query_memory_per_node variable from 2 GB to 10 GB at the session level. Can you tell me how this was handled in earlier Drill versions (<1.11.0)? On Mon, Mar

Re: Drill Blog on Medium.com

2018-03-12 Thread Rahul Raj
I would like to see an article on creating a new sample storage plugin, with details on the different components involved, like the internal Drill memory representation, data types, etc. I don't think the existing plugins are self-explanatory for a beginner. Regards, Rahul On Tue, Mar 6, 2018 at 6:4

Re: [Drill 1.12.0] : RESOURCE ERROR: Not enough memory for internal partitioning and fallback mechanism for HashAgg to use unbounded memory is disabled

2018-03-12 Thread Anup Tiwari
Hi Kunal, Thanks for the info. I went with option 1 and increased planner.memory.max_query_memory_per_node, and the queries are now working fine. Will let you know in case of any issues. On Mon, Mar 12, 2018 2:30 AM, Kunal Khatua ku...@apache.org wrote: Here is the background of your issue: https:/
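The fix Anup settled on, raising the per-node query memory budget at the session level, can be sketched as follows. The 10 GB figure matches the value mentioned earlier in the thread; the option takes a value in bytes:

```sql
-- Raise the memory budget available to buffering operators per node
-- from the 2 GB default to 10 GB (value is in bytes).
ALTER SESSION SET `planner.memory.max_query_memory_per_node` = 10737418240;
```

Unlike the fallback option, this keeps HashAgg's spill-to-disk behavior intact; the operator simply gets a larger budget before spilling is needed.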

[Drill 1.10.0/1.12.0] Query Started Taking Time + frequent one or more node lost connectivity error

2018-03-12 Thread Anup Tiwari
Hi All, For the last couple of days I have been stuck on a problem. I have a query which left joins 3 Drill tables (parquet). It used to take around 15-20 mins every day, but for the last couple of days it is taking more than 45 mins, and when I tried to drill down I can see in the operator profile that 40% query