Thanks, will upgrade to 1.13.0 and let you know.
On Tue, Mar 20, 2018 11:08 AM, Parth Chandra par...@apache.org wrote:
Aman,
That is exactly the clarification that I needed. I had a hazy memory of a
problem in this area, but not enough to actually figure out the current
state.
In case anybody cares, being able to do this is really handy. The basic
idea is to keep long history in files and recent history in a DB.
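The pattern described here (a view unioning long history in files with recent history in a database) can be illustrated with a small, self-contained sketch. SQLite stands in for Drill below, and all table, view, and column names are hypothetical; in Drill the two arms would typically be a parquet workspace and a database storage plugin:

```python
import sqlite3

# SQLite stands in for Drill here; the pattern is the same:
# a view that UNION ALLs archival data with a "recent" table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE archive_events (ts INTEGER, payload TEXT);  -- long history (in Drill: parquet files)
    CREATE TABLE recent_events  (ts INTEGER, payload TEXT);  -- recent history (in Drill: a DB plugin)
    INSERT INTO archive_events VALUES (100, 'old-1'), (200, 'old-2');
    INSERT INTO recent_events  VALUES (900, 'new-1'), (950, 'new-2');
    CREATE VIEW all_events AS
        SELECT ts, payload FROM archive_events
        UNION ALL
        SELECT ts, payload FROM recent_events;
""")

# A time filter over the view; an engine that supports pushdown can
# move the predicate into both arms of the union.
rows = conn.execute(
    "SELECT payload FROM all_events WHERE ts > 800 ORDER BY ts").fetchall()
print(rows)  # -> [('new-1',), ('new-2',)]
```

Queries against `all_events` then transparently cover both stores; the open question in this thread is whether the filter actually reaches each arm rather than being applied after the union.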
Hi Anup,
I don't have full context for the proposed hack, and it might have worked,
but looks like Vlad has addressed the issue in the right place. Perhaps you
can try out 1.13.0 and let us all know.
Thanks
Parth
On Sat, Mar 17, 2018 at 11:43 AM, Anup Tiwari wrote:
> Thanks Parth for the info. I
Francis
I'm certain this is the result of JayDeBeApi using the preparedStatement
command. (DRILL-5316. See the comments in the JIRA)
I was thinking of creating a fork and using the standard
Connection.getStatement() API instead, before compiling. However, I'm currently
on a time crunch and m
Thanks Kunal and Charles,
I rebuilt the script/environment inside a container to see if I could
replicate the issue, and I get the same result.
The container is running on an EC2 "next to" the cluster.
Charles, was there any additional configuration you had done?
I have in the Dockerfile:
...
conda inst
Due to an infinite loop occurring in Calcite planning, we had to disable
the filter pushdown past the union (SetOps). See
https://issues.apache.org/jira/browse/DRILL-3855.
Now that we have rebased on Calcite 1.15.0, we should re-enable this and
test it; if the pushdown works, then the partition pru
I think Ted's question is two-fold, with the former being the more important.
1. Can we push filters past a union?
2. Will Drill push filters down to the source?
For the latter, it depends on the source.
For the former, it depends primarily on whether Calcite supports this. I
haven't tried it, so I c
First, I would suggest ignoring the view and trying out a query that has the
required filters as part of the subqueries on both sides of the union (for
both the database and the partitioned parquet data). The plan for such a query
should have the answers to your question. If both the subqueries
independe
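The suggested approach (state the filter explicitly inside each arm of the union, then inspect the plan) can be sketched with SQLite standing in for Drill; in Drill itself you would prefix the query with `EXPLAIN PLAN FOR` instead. All table, index, and column names here are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE parquet_side (ts INTEGER, v TEXT);  -- stands in for partitioned parquet
    CREATE TABLE db_side      (ts INTEGER, v TEXT);  -- stands in for the indexed database
    CREATE INDEX idx_db_ts ON db_side (ts);
""")

# The filter is written inside BOTH subqueries, not applied on top of a view.
query = """
    SELECT ts, v FROM parquet_side WHERE ts >= 800
    UNION ALL
    SELECT ts, v FROM db_side WHERE ts >= 800
"""

# SQLite's rough analogue of Drill's EXPLAIN PLAN FOR;
# each row describes one step of the chosen plan.
plan = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()
for row in plan:
    print(row)
```

If each arm's sub-plan shows the filter being applied at the scan (and, on the parquet side in Drill, partition pruning), the pushdown is working; the view form can then be compared against this baseline.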
If I create a view that is a union of partitioned parquet files and a
database that has secondary indexes, will Drill be able to properly push
down query limits into both parts of the union?
In particular, if I have lots of archival data and parquet partitioned by
time but my query only asks for r
The exception in your snippet is basically saying that one of the communication
channels on which the client was communicating with the server to fetch server
metadata was closed.
Usually, this is observed if a system is under load (e.g. many concurrent
queries, etc). We'd need to trace back through
Hi,
I am trying to get data from a MongoDB database using Apache Drill in
the Saiku tool. Right now I am able to get tables from Mongo using Apache Drill,
but the columns (fields) are not coming through in the tables (empty tables are
returned). Please help me out with this problem ASAP.
Thank you
Hi,
The file size is quite small, in KBs only.
If you could tell me a few scenarios in which this happens, it would help me debug it.
Thanks,
Divya
On 15 March 2018 at 15:00, Kunal Khatua wrote:
> There could be multiple reasons for why the ChannelClosedException is
> thrown. What kind of a load are you
One more point: as of this release, Apache Drill no longer supports JDK7 and
has fully moved to JDK8.
Kind regards
Arina
On Mon, Mar 19, 2018 at 5:51 AM, Abhishek Girish wrote:
> Congratulations everyone, on yet another great release of Apache Drill!
> On Mon, Mar 19, 2018 at 6:57 AM Parth Chandra w