That’s the problem with documentation. It goes out of date. If you want the
truth, see JdbcAdapterTest.
> On Nov 1, 2016, at 4:20 PM, Christian Tzolov wrote:
Sounds great!
I am concerned by this remark in the documentation: "The JDBC adapter
currently only pushes down table scan operations".
Is there an easy way to pass the whole (new) query to the backend
database? The query would always concern tables from a single database
(e.g. no multiple jdbc
PS: The main thing you are doing is transforming the algebra, i.e. writing
a tricky planner rule. An adapter is mainly about plumbing — getting metadata
from the other system, executing queries against the other system — and
packaging for convenience. So, you should slot an additional rule
Definitely re-use the existing JDBC adapter. Within the adapter are ways to
tweak (a) the dialect of the SQL you generate, (b) the capabilities of the DB
(e.g. whether it supports OFFSET). Tweaking those knobs is a lot easier than
building a new adapter.
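To make those knobs concrete, here is a toy sketch of the idea. The class and method names are hypothetical stand-ins, not Calcite's actual SqlDialect API; it only illustrates how a capability flag such as OFFSET support can change the SQL an adapter generates.

```java
// Toy illustration of a dialect "knob": the same logical limit/offset is
// rendered differently depending on a capability flag. Hypothetical code,
// not Calcite's SqlDialect API.
class DialectSketch {
    static String limitOffset(boolean supportsOffset, int limit, int offset) {
        if (supportsOffset) {
            return "LIMIT " + limit + " OFFSET " + offset;
        }
        // Fallback for databases without OFFSET: over-fetch and let the
        // caller skip the first `offset` rows client-side.
        return "LIMIT " + (limit + offset);
    }

    public static void main(String[] args) {
        System.out.println(limitOffset(true, 10, 20));   // LIMIT 10 OFFSET 20
        System.out.println(limitOffset(false, 10, 20));  // LIMIT 30
    }
}
```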
Julian
> On Nov 1, 2016, at 3:55 PM, C
Thanks again!
Would it make sense to reuse/extend the existing JDBC adapter, or am I
better off starting from scratch?
Since my backend DB uses the Postgres dialect, I wonder what the easiest way
is to modify the relations and pass the whole query to the target DB.
On 1 November 2016 at 22:42, Julian Hyde wrote
Michael Mior created CALCITE-1481:
-
Summary: Documentation for materialized views
Key: CALCITE-1481
URL: https://issues.apache.org/jira/browse/CALCITE-1481
Project: Calcite
Issue Type: Improv
You might find that you only need to change the root node (TableModify) from
UPDATE to INSERT, plus maybe a Project immediately underneath it. You can
re-use the parts of the tree that you don’t change. This is typical of how
planner rules work.
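The root-only rewrite can be sketched with toy classes. These are hypothetical stand-ins, not Calcite's RelNode, TableModify, or RelOptRule; the point is only that the new root shares the old root's child, so the rest of the tree is reused.

```java
// Toy sketch of a root-only rewrite: replace an UPDATE root with an INSERT
// root while reusing the unchanged subtree underneath. These classes are
// hypothetical stand-ins, not Calcite's RelNode/RelOptRule.
class UpdateToInsertSketch {
    static final class Node {
        final String op;   // e.g. "UPDATE", "INSERT", "PROJECT", "SCAN"
        final Node child;  // single input; null for leaves
        Node(String op, Node child) { this.op = op; this.child = child; }
    }

    // Fires only when the root is an UPDATE; everything below is reused.
    static Node apply(Node root) {
        if (!"UPDATE".equals(root.op)) {
            return root;  // rule does not match
        }
        return new Node("INSERT", root.child);  // new root, same child
    }

    public static void main(String[] args) {
        Node project = new Node("PROJECT", new Node("SCAN", null));
        Node rewritten = apply(new Node("UPDATE", project));
        System.out.println(rewritten.op);               // INSERT
        System.out.println(rewritten.child == project); // true: subtree reused
    }
}
```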
> On Nov 1, 2016, at 2:38 PM, Christian Tzolov wrote:
Thanks Julian!
So I can override the whole RelNode tree, from UPDATE to INSERT for example?
I was not sure whether this is allowed in the RelNode phase.
I guess as a start I need to implement my own TableModify relation and a
related rule to substitute the LogicalTableModify and alter the
underlying operat
Monotonic columns aren’t fully implemented yet. I use them as a concept in the
streaming SQL examples, and they will be central when we’ve fully implemented
streaming SQL (although people might also use related concepts like watermarks
and partially-sorted columns).
We have some support for ded
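A tiny illustration of why monotonicity matters for streaming aggregation, using hypothetical data and plain Java rather than Calcite's implementation: when the grouping column only ever increases, each group can be emitted as soon as a larger value arrives, instead of buffering until end-of-input.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of why a monotonic column matters for streaming: if the grouping
// column only ever increases, an aggregate can emit each group as soon as a
// larger value arrives. (Hypothetical data, not Calcite's implementation.)
class MonotonicSketch {
    static List<String> countByGroup(int[] monotonicKeys) {
        List<String> emitted = new ArrayList<>();
        int current = monotonicKeys[0];
        int count = 0;
        for (int k : monotonicKeys) {
            if (k != current) {
                emitted.add(current + ":" + count);  // group is complete
                current = k;
                count = 0;
            }
            count++;
        }
        emitted.add(current + ":" + count);  // final group at end-of-input
        return emitted;
    }

    public static void main(String[] args) {
        // rowtime values arriving in ascending order
        System.out.println(countByGroup(new int[] {1, 1, 2, 2, 2, 3}));
        // [1:2, 2:3, 3:1]
    }
}
```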
Calcite is the right tool for the job, but our experience is that hacking the
AST is not the way to do it. You can do simple transformations on the AST, but
SQL has complex semantics (e.g. consider the rules required to look up an
unqualified column name in a sub-query), so complex transformatio
Hi guys,
I am looking for a solution/approach that would allow me to intercept a JDBC
call, replace an UPDATE statement with a related INSERT, and run the new SQL
on the backend database (using JDBC). Similarly, on SELECT I would like to
add a filter to the existing statement.
My big-data DB doesn't sup
What errors did you get? It should be possible to use both Volcano and hep
when query planning (Drill does this, and possibly others).
It superficially sounds like applying a heuristics pass that includes the
project pushdown (and any other rules you may want to apply)
after the volcano pass should
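As a toy illustration of that two-pass shape, with strings standing in for RelNode trees (hypothetical code, not the VolcanoPlanner/HepPlanner API): a cost-based pass picks the cheapest candidate, then a heuristic pass applies rewrite rules to a fixpoint.

```java
import java.util.*;
import java.util.function.UnaryOperator;

// Toy two-phase planning: a cost-based "volcano" pass picks the cheapest of
// several candidate plans, then a heuristic "hep" pass applies rewrite rules
// until nothing changes. Plans are just strings here; in Calcite they would
// be RelNode trees handled by VolcanoPlanner and HepPlanner.
class TwoPassSketch {
    static String volcanoPass(Map<String, Double> candidateCosts) {
        // Pick the candidate plan with the lowest cost.
        return Collections.min(candidateCosts.entrySet(),
                Map.Entry.comparingByValue()).getKey();
    }

    static String hepPass(String plan, List<UnaryOperator<String>> rules) {
        // Apply every rule repeatedly until a fixpoint is reached.
        String prev;
        do {
            prev = plan;
            for (UnaryOperator<String> rule : rules) {
                plan = rule.apply(plan);
            }
        } while (!plan.equals(prev));
        return plan;
    }

    public static void main(String[] args) {
        Map<String, Double> candidates = new HashMap<>();
        candidates.put("HashJoin(Scan(a), Scan(b))", 50.0);
        candidates.put("NestedLoop(Scan(a), Scan(b))", 500.0);
        String best = volcanoPass(candidates);

        // A "project pushdown" rewrite applied unconditionally afterwards;
        // the contains() guard keeps this toy rule from re-firing forever.
        List<UnaryOperator<String>> rules = Collections.singletonList(
                p -> p.contains("Project(") ? p
                        : p.replace("Scan(a)", "Project(Scan(a))"));
        System.out.println(hepPass(best, rules));
        // HashJoin(Project(Scan(a)), Scan(b))
    }
}
```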
If it helps make your “hope” a bit more likely to happen, you should consider
doing your Spark or Pig adapters in the Calcite code base, that is, as a fork
of the Calcite repo on GitHub from which you periodically submit pull requests.
I would welcome that development model. For big, important
Eli,
Can you define what you mean by "fault-tolerant"? Phoenix+HBase are fault
tolerant through the retries that HBase does.
Thanks,
James
On Tue, Nov 1, 2016 at 11:35 AM, Eli Levine wrote:
> Thank you for the pointers, Julian and James! I have a requirement that the
> main execution engine is a
Thank you for the pointers, Julian and James! I have a requirement that the
main execution engine is a fault-tolerant one and at this point the main
contenders are Pig and Spark. Drill is great as a source of example usages
of Calcite, so it will definitely be useful.
And yes, the hope is to cont
There is a simple form of projection push-down, namely column pruning aka field
trimming. If you compute x + y from a table that has columns (x, y, z), then it
will prune away z, and project only (x, y). Column pruning doesn’t fit into the
Volcano model (or in fact into the general transformatio
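The pruning step itself is simple; a toy version might look like the following. This is hypothetical code, not Calcite's field trimmer, which works on RelNode trees rather than column-name lists.

```java
import java.util.*;

// Toy column pruner: given the set of columns an expression references,
// trim the table's column list down to just those. Hypothetical code;
// Calcite performs the equivalent on RelNode trees.
class FieldTrimSketch {
    static List<String> trim(List<String> tableColumns, Set<String> usedColumns) {
        List<String> kept = new ArrayList<>();
        for (String c : tableColumns) {
            if (usedColumns.contains(c)) {
                kept.add(c);  // keep only columns the query actually uses
            }
        }
        return kept;
    }

    public static void main(String[] args) {
        // Table has (x, y, z); the query computes x + y, so z is pruned.
        System.out.println(trim(Arrays.asList("x", "y", "z"),
                new HashSet<>(Arrays.asList("x", "y"))));  // [x, y]
    }
}
```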
I fixed the errors (they occurred because of the way I added
EnumerableRules in Volcano).
I have implemented something like what you suggested:
1) I use VolcanoPlanner (with both simple and EnumerableRules), and get a
plan with EnumerableRules. As I have found the hard way, I cannot get a
plan
Alexander & Vineet,
One further comment about “NOT IN”. SQL in general is fairly close to
relational algebra, but “NOT IN” is one of the places where the gap is widest.
“NOT IN” is difficult in general to execute efficiently, because of the problem
of NULL values (at Oracle, we always recommend
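The NULL trap can be seen in a few lines of three-valued logic. This is a plain-Java sketch of the SQL semantics, with `null` standing in for UNKNOWN: `x NOT IN (a, b, NULL)` can never evaluate to TRUE, so a NULL in the list silently filters out every non-matching row.

```java
import java.util.Arrays;
import java.util.List;

// Sketch of SQL's three-valued logic for NOT IN. Boolean here is
// TRUE / FALSE / null, with null standing in for SQL's UNKNOWN.
class NotInSketch {
    // For a non-null x, "x NOT IN (list)" is: FALSE if some element equals x;
    // UNKNOWN (null) if nothing matches but the list contains a NULL;
    // TRUE otherwise. A WHERE clause keeps a row only when the result is TRUE.
    static Boolean notIn(Integer x, List<Integer> list) {
        boolean sawNull = false;
        for (Integer v : list) {
            if (v == null) {
                sawNull = true;
            } else if (v.equals(x)) {
                return Boolean.FALSE;
            }
        }
        return sawNull ? null : Boolean.TRUE;
    }

    public static void main(String[] args) {
        List<Integer> withNull = Arrays.asList(1, 2, null);
        System.out.println(notIn(3, withNull));            // null: row filtered out
        System.out.println(notIn(2, withNull));            // false
        System.out.println(notIn(3, Arrays.asList(1, 2))); // true
    }
}
```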
Julian, thank you for help.
I had a wrong picture of NULL values processing. So, it looks like there is
some problem in my planner rules.
As for the AST, I was confused by the wrong Flink "explain()" function
description :)
Regards,
Alexander
Josh Elser created CALCITE-1480:
---
Summary: TLS support for Avatica
Key: CALCITE-1480
URL: https://issues.apache.org/jira/browse/CALCITE-1480
Project: Calcite
Issue Type: New Feature
C
Hey Dave,
Quite the cross-project-pollination we have going on here :)
HTTPS is not currently set up (specifically, not exposed via Avatica's
HttpServer class and corresponding Builder), but this would be rather
easy to do as it would just be a matter of hooking into the Jetty endpoints.
The
I have looked through the documentation and JIRA, and I don't see any mention
of configuring the JDBC client (HTTP client) for SSL. Is this possible? If not,
is it on the road map? Thanks,
Dave
I am wondering whether it is possible to push down projections in Volcano
generally. With Volcano's cost model, a projection adds row and CPU cost,
so it can't be chosen. For example, for the next query:
"select s.products.productid "
+ "from s.products,s.orders "
+ "where s.orders.productid = s.
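One way to see the premise, and when it can flip, is a toy width-aware cost model (hypothetical numbers, not Volcano's actual cost formulas): if cost counts only rows and CPU, Project(Scan) always costs more than Scan alone; if cost also reflects row width, pruning can pay off by making a downstream operator cheaper.

```java
// Toy, width-aware cost model (hypothetical numbers, not Volcano's actual
// formulas). Counting only rows/CPU, the Project just adds cost; once cost
// reflects row width, trimming columns can make a downstream Sort cheaper.
class ProjectCostSketch {
    static final double ROWS = 1000;

    static double scanCost()    { return ROWS; }
    static double projectCost() { return ROWS; }  // touches every row once

    static double sortCost(double bytesPerRow) {
        return ROWS * bytesPerRow;  // sorting wide rows costs more
    }

    public static void main(String[] args) {
        // Plan A: Sort(Scan), rows 100 bytes wide.
        double planA = scanCost() + sortCost(100);
        // Plan B: Sort(Project(Scan)), Project trims rows to 20 bytes.
        double planB = scanCost() + projectCost() + sortCost(20);
        System.out.println(planA);  // 101000.0
        System.out.println(planB);  // 22000.0, cheaper despite the extra node
    }
}
```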
The Apache Calcite team is pleased to announce the release of Apache
Calcite Avatica 1.9.0.
Avatica is a framework for building database drivers. Avatica defines
a wire API and serialization mechanism for clients to communicate with
a server as a proxy to a database. The reference Avatica client a