> I am curious how to join the tables from different datasources.
Based on Calcite's concept of conventions, the Join operator and its input
operators should all have the same convention. If they don't, a converter
rule must be registered for each input convention that differs from the
Join's; that rule produces an operator that converts rows from the input's
convention to the Join operator's convention.
This way the Join operator can handle the data obtained from its input
operators, because it understands the data structure.
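The mechanics can be illustrated with a toy model (plain Python, not Calcite's actual API; `RelNode`, `Converter`, and `insert_converters` are names invented for this sketch):

```python
# Toy model of Calcite-style conventions; illustrative only, not Calcite's API.
class RelNode:
    def __init__(self, name, convention, inputs=()):
        self.name = name
        self.convention = convention
        self.inputs = list(inputs)

class Converter(RelNode):
    """Converts a child's rows from its convention into the parent's."""
    def __init__(self, child, target_convention):
        super().__init__(f"Convert[{child.convention}->{target_convention}]",
                         target_convention, [child])

def insert_converters(node):
    """Ensure every input has the same convention as its parent,
    inserting a Converter wherever they differ."""
    new_inputs = []
    for child in node.inputs:
        child = insert_converters(child)
        if child.convention != node.convention:
            child = Converter(child, node.convention)
        new_inputs.append(child)
    node.inputs = new_inputs
    return node

# A join in the "enumerable" convention over a JDBC scan and a Cassandra scan.
join = RelNode("Join", "enumerable", [
    RelNode("JdbcScan", "jdbc"),
    RelNode("CassandraScan", "cassandra"),
])
insert_converters(join)
print([c.name for c in join.inputs])
# ['Convert[jdbc->enumerable]', 'Convert[cassandra->enumerable]']
```

After the pass, the join only ever sees rows in its own convention; each converter hides the source-specific representation of its child.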
Thanks,
Gelbana
On Wed, Dec 18, 2019 at 5:08 AM Juan Pan wrote:
> Some updates.
>
>
> Recently I took a look at their docs and source code, and found that this
> project uses Calcite's SQL parsing and relational algebra to get the query
> plan. It then translates the plan to Spark SQL for joining different
> datasources, or to the corresponding query for a single datasource.
>
>
> Although it copies many classes from Calcite, the idea of QuickSQL seems
> interesting, and the code is succinct.
>
>
> Best,
> Trista
>
>
> Juan Pan (Trista)
>
> Senior DBA & PPMC of Apache ShardingSphere(Incubating)
> E-mail: panj...@apache.org
>
>
>
>
> On 12/13/2019 17:16, Juan Pan wrote:
> Yes, indeed.
>
>
> On 12/12/2019 18:00, Alessandro Solimando wrote:
> Adapters should only be needed for data sources that don't support SQL; I
> think this is what Juan Pan was asking about.
>
> On Thu, 12 Dec 2019 at 04:05, Haisheng Yuan wrote:
>
> Nope, it doesn't use any adapters. It just submits partial SQL queries to
> different engines.
>
> If the query only contains tables from a single source, e.g.
> select count(*) from hive_table1, hive_table2 where a=b;
> then the whole query will be submitted to hive.
>
> Otherwise, e.g.
> select distinct a,b from hive_table union select distinct a,b from
> mysql_table;
>
> then the following query will be submitted to and executed by Spark:
> select a,b from spark_tmp_table1 union select a,b from spark_tmp_table2;
>
> spark_tmp_table1: select distinct a,b from hive_table
> spark_tmp_table2: select distinct a,b from mysql_table
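> A sketch of that routing decision (a hypothetical Python helper, not
> QuickSQL's actual code; the `route` function and temp-table naming are
> invented for illustration):

```python
# Sketch of the routing described above: single-source queries go straight
# to that engine; mixed-source queries run on Spark over temp tables.
def route(table_sources):
    """table_sources: dict mapping table name -> engine name."""
    engines = set(table_sources.values())
    if len(engines) == 1:
        # Single source: submit the whole query to that engine as-is.
        return {"engine": engines.pop(), "temp_tables": {}}
    # Mixed sources: each per-source subquery becomes a Spark temp table,
    # and the rewritten query runs on Spark over those temp tables.
    temp = {t: f"spark_tmp_{t}" for t in table_sources}
    return {"engine": "spark", "temp_tables": temp}

print(route({"hive_table1": "hive", "hive_table2": "hive"}))
# {'engine': 'hive', 'temp_tables': {}}
print(route({"hive_table": "hive", "mysql_table": "mysql"}))
# {'engine': 'spark', 'temp_tables': {'hive_table': 'spark_tmp_hive_table',
#                                     'mysql_table': 'spark_tmp_mysql_table'}}
```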
>
> On 2019/12/11 04:27:07, "Juan Pan" wrote:
> Hi Haisheng,
>
>
> The queries on the different data sources will then be registered as temp
> Spark tables (with filters or joins pushed in); the whole query is rewritten
> as SQL text over these temp tables and submitted to Spark.
>
>
> Does it mean QuickSQL also needs adapters to execute queries on different
> data sources?
>
>
> Yes, virtualization is one of Calcite’s goals. In fact, when I created
> Calcite I was thinking about virtualization + in-memory materialized views.
> Not only the Spark convention but any of the “engine” conventions (Drill,
> Flink, Beam, Enumerable) could be used to create a virtual query engine.
>
>
> Basically, I like and agree with Julian's statement. It is a great idea
> which I personally hope Calcite moves towards.
>
>
> Best wishes to the Calcite community.
>
>
> Thanks,
> Trista
>
>
>
> On 12/11/2019 10:53, Haisheng Yuan wrote:
> As far as I know, users still need to register tables from other data
> sources before querying them. QuickSQL uses Calcite for parsing queries and
> optimizing logical expressions with several transformation rules. The
> queries on the different data sources will then be registered as temp Spark
> tables (with filters or joins pushed in); the whole query is rewritten as
> SQL text over these temp tables and submitted to Spark.
>
> - Haisheng
>
> --
> From: Rui Wang
> Date: 2019-12-11 06:24:45
> To:
> Subject: Re: Quicksql
>
> The co-routine model sounds like it fits streaming cases well.
>
> I was thinking about how the Enumerable interface should work with
> streaming cases, but now I should also check the Interpreter.
>
>
> -Rui
>
> On Tue, Dec 10, 2019 at 1:33 PM Julian Hyde wrote:
>
> The goal (or rather my goal) for the interpreter is to replace
> Enumerable as the quick, easy default convention.
>
> Enumerable is efficient but not that efficient (compared to engines
> that work on off-heap data representing batches of records). And
> because it generates Java bytecode there is a certain latency to
> getting a query prepared and ready to run.
>
> It basically implements the old Volcano query evaluation model. It is
> single-threaded (because all work happens as a result of a call to
> 'next()' on the root node) and cannot handle branching data-flow
> graphs (DAGs).
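> The pull model can be sketched in a few lines (illustrative Python, not
> Calcite's Enumerable code):

```python
# Minimal Volcano-style pull model: all work happens inside next() calls
# triggered from the root, on a single thread.
class Scan:
    def __init__(self, rows):
        self.it = iter(rows)
    def next(self):
        return next(self.it, None)  # None signals end-of-stream

class Filter:
    def __init__(self, child, pred):
        self.child, self.pred = child, pred
    def next(self):
        # Each call pulls from the child until a row passes the predicate.
        while (row := self.child.next()) is not None:
            if self.pred(row):
                return row
        return None

root = Filter(Scan([1, 2, 3, 4, 5]), lambda r: r % 2 == 0)
out = []
while (row := root.next()) is not None:
    out.append(row)
print(out)  # [2, 4]
```

Every row is produced on demand by the chain of next() calls from the root, which is why the model is inherently single-threaded.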
>
> The Interpreter uses a co-routine model (reading from queues,
> writing to queues, and yielding when there is no work to be done) and
> therefore could be more efficient than enumerable in a single-node
> multi-core system. Also, there is little start-up time, which is
> important for small queries.
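> A toy version of that queue-based style (again illustrative Python, not the
> actual Interpreter; the `filter_op` generator and `END` marker are invented
> for this sketch):

```python
from collections import deque

END = object()  # end-of-stream marker

def filter_op(in_q, out_q, pred):
    """Drain in_q into out_q; yield control whenever there is no work."""
    while True:
        if not in_q:
            yield              # no input ready: hand control to the scheduler
            continue
        row = in_q.popleft()
        if row is END:
            out_q.append(END)
            return
        if pred(row):
            out_q.append(row)

in_q, out_q = deque(), deque()
op = filter_op(in_q, out_q, lambda r: r % 2 == 0)
next(op)                       # runs until it yields (queue is empty)
for row in [1, 2, 3, 4, 5]:    # producer and operator take turns
    in_q.append(row)
    next(op)                   # operator consumes the row, then yields
in_q.append(END)
try:
    next(op)                   # operator sees END and finishes
except StopIteration:
    pass
print([r for r in out_q if r is not END])  # [2, 4]
```

Because an operator yields instead of blocking in a next() chain, a scheduler can interleave many such operators, including branching (DAG-shaped) plans.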
>
> I would love to add another built-in convention that uses