Hi All,
We are again getting the "IllegalReferenceCountException" issue for a few
queries over the last 2 days; we are currently on Drill 1.12.0. Can anybody
help me understand the exact reason behind this?
On Thu, Dec 14, 2017 4:52 PM, Anup Tiwari anup.tiw...@games24x7.com wrote:
Hi Arjun,
I have tried it, but no luck. I am still getting the INSTANCE error (Caused
by: java.lang.NoSuchFieldError: INSTANCE).
I assume it is happening because of some version mismatch; my Java is weak,
but I found an article at the link below.
Can you please suggest if we can make any changes
Hi Asim,
You may give it a shot by adding this uber jar to Drill's 3rd party directory
(remove the previously copied jars). For the truststore, try giving an
absolute path. The test was to validate whether the hive uber jar works with
your Hive setup.
Thanks,
Arjun
From: Asim K
Hi Arjun,
I have tried with the hive jdbc uber jar and was able to make a successful
connection.
java -cp
"hive-jdbc-uber-2.6.3.0-235.jar:sqlline-1.1.9-drill-r7.jar:jline-2.10.jar"
sqlline.SqlLine -d org.apache.hive.jdbc.HiveDriver -u "jdbc:hive2://knox
server name:port/default;ssl=true;sslTrustStore=
Hi Asim,
You may try using the hive uber jar in case you have not tried it. See if the
link below helps.
https://github.com/timveil/hive-jdbc-uber-jar/releases
It would be ideal to test this uber jar with a sample JDBC application before
trying it with Drill.
java -cp
"hive-jdbc-uber-2.6.3.0-235.ja
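As a sketch of such a standalone test (not from this thread; the host, port,
truststore path, and credentials below are all placeholders you would replace
with your own values, and the uber jar must be on the classpath to actually
connect):

```java
// Minimal standalone connectivity check for the Hive JDBC uber jar.
// All host/port/truststore values are placeholders. Run with "connect"
// as an argument to actually attempt the connection; with no arguments
// it only builds and prints the JDBC URL.
import java.sql.Connection;
import java.sql.DriverManager;

public class HiveJdbcCheck {
    // Builds a Knox-style hive2 JDBC URL with SSL and a truststore.
    static String buildUrl(String host, int port, String trustStore) {
        return "jdbc:hive2://" + host + ":" + port
                + "/default;ssl=true;sslTrustStore=" + trustStore;
    }

    public static void main(String[] args) throws Exception {
        String url = buildUrl("knox.example.com", 8443, "/etc/pki/truststore.jks");
        System.out.println(url);
        if (args.length > 0 && args[0].equals("connect")) {
            // Requires hive-jdbc-uber-*.jar on the classpath.
            Class.forName("org.apache.hive.jdbc.HiveDriver");
            try (Connection c = DriverManager.getConnection(url, "user", "password")) {
                System.out.println("connected: " + !c.isClosed());
            }
        }
    }
}
```

If this small program connects but Drill's JDBC storage plugin does not, the
problem is likely in the plugin config rather than the jar itself.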
Hi Tyler,
The Hadoop-AWS module provides settings for proxy setup. You may try setting
these configs in $DRILL_CONF/core-site.xml and restarting the drill-bits. I
have not tested it, though.
https://hadoop.apache.org/docs/r2.7.1/hadoop-aws/tools/hadoop-aws/index.html
fs.s3a.proxy.host — hostname of the proxy server
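Based on the Hadoop-AWS documentation linked above, the proxy settings would
go into $DRILL_CONF/core-site.xml roughly like this (an untested sketch; the
host, port, and credential values are placeholders):

```xml
<configuration>
  <!-- Proxy host/port for S3A connections; values are placeholders -->
  <property>
    <name>fs.s3a.proxy.host</name>
    <value>proxy.example.com</value>
  </property>
  <property>
    <name>fs.s3a.proxy.port</name>
    <value>8080</value>
  </property>
  <!-- Only needed if the proxy requires authentication -->
  <property>
    <name>fs.s3a.proxy.username</name>
    <value>proxyuser</value>
  </property>
  <property>
    <name>fs.s3a.proxy.password</name>
    <value>proxypass</value>
  </property>
</configuration>
```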
Thanks Kunal...
Here are the details.
{
  "type": "jdbc",
  "driver": "org.apache.hive.jdbc.HiveDriver",
  "url": "jdbc:hive2://knox address:port/default?ssl=true&transportMode=http&httpPath=pathdetail&sslTrustStore=mytruststore.jks&trustStorePassword=**",
  "username": "XXX",
  "password"
Not sure what exactly you mean by proxy settings.
But here is what you can do to access files on S3:
enable the S3 storage plugin and update the connection string, access key, and
secret key in the config.
If it is able to connect fine, you should see s3.root when you do show
databases.
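As a rough illustration of those steps (not from this thread; the bucket name
and keys are placeholders, and the exact fields can vary by Drill version),
the S3 storage plugin config in the Drill web UI looks something like:

```json
{
  "type": "file",
  "connection": "s3a://my-bucket",
  "config": {
    "fs.s3a.access.key": "ACCESS_KEY",
    "fs.s3a.secret.key": "SECRET_KEY"
  },
  "workspaces": {
    "root": {
      "location": "/",
      "writable": false,
      "defaultInputFormat": null
    }
  },
  "formats": {
    "json": {
      "type": "json"
    }
  },
  "enabled": true
}
```

Once enabled, `SHOW DATABASES` should list an `s3.root` schema you can query.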
Thanks
Padma
Hello,
I am currently trying to set up drill locally to query a JSON file in Amazon’s
AWS S3. I have not been able to configure proxy settings for drill. Could you
send me a configuration example of this?
Thank you,
Tyler Edelman
The inf
Hi Anup
Can you share details about the memory allocations (JVM, etc.) you have for
Drill, in addition to the cluster size? Also provide the platform details
(e.g. Hadoop version), and the profiles for the succeeded and failed
queries, i.e. the JSON of these queries (e.g.
http://drillbit:804
There can be a lot of issues here.
A connection loss error can happen when ZooKeeper thinks that a node is dead
because it did not get a heartbeat from the node. That can be because the node
is busy or you have network problems. Did anything change in your network?
Is the data static or are you adding
Setting the session option `drill.exec.hashagg.fallback.enabled`=TRUE means
HashAgg will not honor the operator memory limit it was assigned (thus not
spilling to disk) and can end up consuming unbounded memory.
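To summarize the two session-level knobs discussed in this thread (the 10 GB
value is just an example; the option takes bytes):

```sql
-- Preferred: give the query more memory per node so HashAgg can
-- stay within its limit and spill to disk when needed.
ALTER SESSION SET `planner.memory.max_query_memory_per_node` = 10737418240;

-- Risky fallback: lets HashAgg ignore its memory limit entirely,
-- effectively the pre-1.11 behavior; can consume unbounded memory.
ALTER SESSION SET `drill.exec.hashagg.fallback.enabled` = TRUE;
```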
Thanks,
Sorabh
From: Anup Tiwari
Hi Kunal,
I have executed the below command and the query executed in 38.763 sec.
alter session set `drill.exec.hashagg.fallback.enabled`=TRUE;
Can you tell me what the problem is with setting this variable, since you
mentioned it will risk instability?
On Mon, Mar 12, 2018 6:27 PM, Anup T
Hi Kunal,
I am still getting this error for some other query, and I have increased the
planner.memory.max_query_memory_per_node variable from 2 GB to 10 GB at the
session level, but I am still getting this issue.
Can you tell me how this was handled in earlier Drill versions (<1.11.0)?
On Mon, Mar
I would like to see an article on creating a new sample storage plugin, with
details on the different components involved, like the internal Drill memory
representation, data types, etc.
I don't think the existing plugins are self-explanatory for a beginner.
Regards,
Rahul
On Tue, Mar 6, 2018 at 6:4
Hi Kunal,
Thanks for the info. I went with option 1 and increased
planner.memory.max_query_memory_per_node, and now the queries are working
fine. Will let you know in case of any issues.
On Mon, Mar 12, 2018 2:30 AM, Kunal Khatua ku...@apache.org wrote:
Here is the background of your issue:
https:/
Hi All,
For the last couple of days I have been stuck on a problem. I have a query
which left joins 3 Drill tables (parquet); it used to take around 15-20 mins
every day, but for the last couple of days it has been taking more than 45
mins, and when I tried to drill down I could see in the operator profile that
40% of the query