[Invitation] ApacheCon Asia 2021

2021-04-09 Thread Juan Pan



Dear Calcite community,




Thanks for your attention. This is Juan Pan from the ApacheCon Asia 2021 
committee.




You may be wondering why this invitation has arrived, so let me introduce the 
event and clarify a few points.




Although already a large and diverse open-source community, the Calcite 
community has kept growing since its graduation. That impresses me and many 
other Apache members and committers. Moreover, its active mailing list and its 
integration with other projects convince people that it is mature and 
self-sustaining!




As the first ApacheCon Asia conference, this event is at the CFP stage and 
aims to attract people worldwide, especially from Asia. Hence, the committee 
of ApacheCon Asia 2021 especially hopes to have the Calcite community deliver a 
talk in **the Incubator track**, sharing your insights and stories about your 
community so that our attendees can learn more about its excellent community 
governance.




Is anyone from the Calcite community available and interested in attending 
this event online [1]? Each talk is expected to be a pre-recorded video of 
40 minutes, considering the time differences and potential internet issues. 
In addition, your video will be published on other Apache channels after the 
event for later audiences. The CFP [2] awaits your submission.




If you have any questions, please feel free to contact me. I look forward to 
your reply!




Warm regards,

Trista, on behalf of the ApacheCon Asia committee &

Track Chair of the Incubator Track




[1] https://blogs.apache.org/conferences/

[2] https://apachecon.com/acasia2021/cfp.html







 
   Juan Pan (Trista)
 
Senior DBA & PMC of Apache ShardingSphere
E-mail: panj...@apache.org





Re: [Question] How to leverage Calcite adaptor for federated SQL query without using Calcite parser

2020-12-15 Thread Juan Pan
Hi all,


Answering my own question.


After digging into the Calcite project, I figured out how to do SQL federation 
with an external SQL parser.


FederatedSQL [1] is a demo that joins tables from different database instances 
and shows users the basic process of parse(), validate(), optimize(), and 
execute() for a SQL statement inside Calcite.


So this demo project gives users hints and pointers on implementing SQL 
federation and on learning the Calcite project (you know it is difficult to 
understand this tremendous project from zero). If you have any ideas, your 
issues and PRs are welcome.


Besides, we are still discussing and learning about the Calcite project in 
issue [2], where I guess you can find some useful info. Also, I am considering 
posting a blog or a summary about [2] shortly, to help others get its context 
(SQL federation and SQL optimization).


[1] https://github.com/tristaZero/federatedSQL
[2] https://github.com/apache/shardingsphere/issues/8284
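
For anyone landing here later, below is a minimal sketch of that pipeline with 
Calcite's Frameworks API. It is an assumption-laden illustration, not the demo 
itself: the empty root schema is a placeholder, and the demo [1] registers a 
sub-schema per database instance before planning.

import org.apache.calcite.plan.RelOptUtil;
import org.apache.calcite.rel.RelNode;
import org.apache.calcite.schema.SchemaPlus;
import org.apache.calcite.sql.SqlNode;
import org.apache.calcite.tools.FrameworkConfig;
import org.apache.calcite.tools.Frameworks;
import org.apache.calcite.tools.Planner;

public final class PipelineSketch {
  public static void main(String[] args) throws Exception {
    // Placeholder root schema; register one sub-schema per database instance here.
    SchemaPlus rootSchema = Frameworks.createRootSchema(true);
    FrameworkConfig config = Frameworks.newConfigBuilder()
        .defaultSchema(rootSchema)
        .build();

    Planner planner = Frameworks.getPlanner(config);
    SqlNode parsed = planner.parse("SELECT 1");     // parse()
    SqlNode validated = planner.validate(parsed);   // validate()
    RelNode rel = planner.rel(validated).project(); // SQL -> relational algebra

    // optimize() and execute() then work on `rel`, e.g. via rule sets passed
    // to the config and planner.transform(), before an Enumerable run.
    System.out.println(RelOptUtil.toString(rel));
  }
}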


Best,
Trista




 
Juan Pan (Trista)
 
Senior DBA & PMC of Apache ShardingSphere
E-mail: panj...@apache.org





Re: [Question] How to leverage Calcite adaptor for federated SQL query without using Calcite parser

2020-11-24 Thread Juan Pan
Hi Rui,


Your sum-up is precisely what I care about.


> 1. Reuse Calcite adaptors and combine that with your parser to parse queries.


First, that is what @Michael described, isn't it?
The coding work to solve it makes sense to me. Thanks.


> 2. Convert results from 1. to RelNode and let Calcite optimize, then execute 
> based on Enumerable implementations.


Second, that is my concern now.
Yes, we need a trigger or entry point to make Calcite use the custom adapter 
with the externally parsed result 
(precisely speaking, the RelNode converted from the third-party parse result).
I guess the Calcite Driver or Calcite connection is that `trigger`.
However, it cannot use the output from point 1.


> write code to execute the enumerable tree (this part of code is inside
Calcite connection, but Calcite connection won't let you use your own
parser)


In that case, maybe we need to rewrite a `Calcite connection` to execute the 
enumerable tree?
How about implementing `SqlAbstractParserImpl` and configuring it in the JDBC 
properties?
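
Meanwhile, here is a minimal sketch of what that `trigger` could look like 
through `RelRunner`, assuming the RelNode has already been built in the 
Enumerable convention against schemas this connection knows about (note: on 
older Calcite versions the method is named `prepare` rather than 
`prepareStatement`):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import org.apache.calcite.rel.RelNode;
import org.apache.calcite.tools.RelRunner;

public final class RelRunnerSketch {
  // Executes a pre-built RelNode directly, skipping Calcite's SQL parser.
  // The RelNode must reference schemas registered on this connection.
  static ResultSet run(RelNode rel) throws Exception {
    Connection connection = DriverManager.getConnection("jdbc:calcite:");
    RelRunner runner = connection.unwrap(RelRunner.class);
    PreparedStatement statement = runner.prepareStatement(rel);
    return statement.executeQuery();
  }
}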


Thanks for your time.


Best wishes,
Trista






 Juan Pan (Trista)
 
Senior DBA & PMC of Apache ShardingSphere
E-mail: panj...@apache.org




On 11/24/2020 06:16,Rui Wang wrote:
I think Michael has already made the points I would make. Just to check my
understanding of your problem after reading the existing threads:

It sounds like you need to do two things:
1. Reuse Calcite adaptors and combine that with your parser to parse
queries.
2. Convert results from 1. to RelNode and let Calcite optimize, then
execute based on Enumerable implementations.

If my understanding so far is correct, I think Calcite does not have
a simple API that allows you to do 2.

My understanding is you will need to build something by yourself:
a. write code to convert the results of 1. to RelNode; make sure to set up
Enumerable conventions to produce Enumerable-backed nodes.
b. write code to execute the enumerable tree (this part of code is inside
Calcite connection, but Calcite connection won't let you use your own
parser)

-Rui


Re: [Question] How to leverage Calcite adaptor for federated SQL query without using Calcite parser

2020-11-24 Thread Juan Pan
Hi, Michael,


The Babel parser [1] looks impressive; it serves a similar purpose to the 
ShardingSphere parser [2]. I will give it a try later. Thanks for your 
suggestion.


Regards,
Trista


[1] https://issues.apache.org/jira/browse/CALCITE-2280
[2] https://github.com/apache/shardingsphere/issues/7869
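
In case anyone else wants to try the same, here is a minimal sketch of 
switching the parser factory to Babel. It assumes the `calcite-babel` artifact 
is on the classpath; older releases spell the config as 
`SqlParser.configBuilder()` instead of `SqlParser.config()`.

import org.apache.calcite.sql.SqlNode;
import org.apache.calcite.sql.parser.SqlParser;
import org.apache.calcite.sql.parser.babel.SqlBabelParserImpl;

public final class BabelSketch {
  public static void main(String[] args) throws Exception {
    SqlParser.Config config = SqlParser.config()
        .withParserFactory(SqlBabelParserImpl.FACTORY); // Babel instead of the default parser
    // Dialect-specific statements that the default parser rejects go here.
    SqlParser parser = SqlParser.create("SELECT * FROM t", config);
    SqlNode node = parser.parseQuery();
    System.out.println(node);
  }
}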


 Juan Pan (Trista)
 
Senior DBA & PMC of Apache ShardingSphere
E-mail: panj...@apache.org




On 11/23/2020 20:10,Michael Mior wrote:
There is nothing stopping you from using adapters with SQL queries you
have parsed yourself. You simply need to assign the appropriate
convention to each table scan in the RelNode tree you pass into the
optimizer. However, if the reason for using your own parser is to be
able to have as broad support for different SQL queries as possible, I
suggest you look at Calcite's Babel parser. It extends the default
parser to add broader support for other dialects of SQL.

--
Michael Mior
mm...@apache.org








Re: [Question] How to leverage Calcite adaptor for federated SQL query without using Calcite parser

2020-11-24 Thread Juan Pan
Hi JiaTao,


Thank you very much for your continued responses.


Best,
Trista


 Juan Pan (Trista)
 
Senior DBA & PMC of Apache ShardingSphere
E-mail: panj...@apache.org




On 11/23/2020 18:17,JiaTao Tao wrote:
Eh, it seems we don't have that.


Regards!

Aron Tao










Re: [Question] How to leverage Calcite adaptor for federated SQL query without using Calcite parser

2020-11-22 Thread Juan Pan
Hi JiaTao,


I really appreciate your sharing.


Actually, what I am confused about is how to make a custom Calcite adapter work 
with other parsers.
For example, I use a non-Calcite parser to get the parse result and transform 
it into a RelNode to tell Calcite: hi, please use this RelNode for the rest of 
the processing. 
But I still implement a custom adapter and wish Calcite to adopt it.
If I call Calcite via JDBC, like `Driver.getConnection(Calcite_Conn)`, that will 
bring in the Calcite parser to parse the SQL instead of my own.  : (
Is there any approach to make Calcite call the custom adapter together with a 
third-party parser?


Best wishes,
Trista




 Juan Pan (Trista)
 
Senior DBA & PMC of Apache ShardingSphere
E-mail: panj...@apache.org




On 11/23/2020 14:38,JiaTao Tao wrote:
Hi Juan Pan

As I said, you can achieve this as I described: "If you have to do this, you
can either generate SqlNode with Antlr OR transform your own AST tree to
RelNode; you can take a look at org.apache.calcite.sql2rel.SqlToRelConverter."
In fact, Hive does the same thing: it uses its own AST tree to generate a
RelNode tree; you can take a look.

Regards!

Aron Tao









Re: [Question] How to leverage Calcite adaptor for federated SQL query without using Calcite parser

2020-11-22 Thread Juan Pan
Hi JiaTao,


The reason we want to bypass Calcite's parsing mainly comes down to two points. 
First, as you said, we want better query efficiency by parsing the SQL only 
once. But from what you said, that seems not to be a big deal.


Second, I am a bit concerned about Calcite's SQL support.
[1] shows all the supported SQL. Is that in line with SQL92 or MySQL 5.x? 
Currently, the ShardingSphere parser has almost complete support for MySQL 8.0 
and PostgreSQL, and basic support for SQL Server, Oracle, and SQL92 [2] (as a 
distributed database middleware ecosystem, we have to do so). 
Therefore, if we use the Calcite parser, maybe we cannot help users handle some 
of their SQL (unsure).


Could you give me some hints on bypassing Calcite's parsing? Or is that goal 
perhaps out of reach?
Any points or replies are much appreciated. : )


Regards,
Trista


[1] https://calcite.apache.org/docs/reference.html
[2] 
https://shardingsphere.apache.org/document/current/en/features/sharding/principle/parse


 Juan Pan (Trista)
 
Senior DBA & PMC of Apache ShardingSphere
E-mail: panj...@apache.org




On 11/22/2020 16:17,JiaTao Tao wrote:
In fact, the impact of parsing twice is small; in Apache Kylin, every time we
do a transformation to the SQL, we re-parse it.
What really takes time is validation (which uses metadata, e.g. fetched from
HMS) and optimization.

Regards!

Aron Tao








Re: A question regarding querying Google Cloud BigTable or Spanner through Apache Calcite

2020-11-22 Thread Juan Pan
Hi JiaTao,


Thanks for your valuable information. :-)
As a first step towards federated SQL queries (which includes @Jason Chen's 
case), we plan to parse the SQL twice (once in ShardingSphere and once in 
Calcite).
Although parsing twice is not a big issue for query efficiency (as you said), 
we still want to know whether there is any possibility of bypassing Calcite's 
SQL parsing.


Best wishes,
Trista


 Juan Pan (Trista)
 
Senior DBA & PMC of Apache ShardingSphere
E-mail: panj...@apache.org




On 11/22/2020 16:39,JiaTao Tao wrote:
I think you are talking about query federation; yes, it's a good use case for
Calcite.


Regards!

Aron Tao


Jason Chen wrote on Wed, Nov 4, 2020 at 8:06 AM:

Hey,

I am Jason Chen from Shopify Data Science and Engineering team. I have a
few questions regarding the Apache Calcite, and I am not sure if the Apache
Calcite fits our use cases. Feel free to point me to the correct email or
Slack channel if this email is not the correct one for asking questions.

We are exploring approaches to mixed querying across multiple storage
resources. One use case is doing a “JOIN” at query time of query results from
both Druid and BigTable/Spanner. Is this a good use case for Apache Calcite?

Thank you for any help!

Regards,
Jason Chen


Jason (Jianbin) Chen
Senior Data Developer
p: +1 2066608351 | e: jason.c...@shopify.com
a: 234 Laurier Ave W Ottawa, ON K1N 5X8



[Question] How to leverage Calcite adaptor for federated SQL query without using Calcite parser

2020-11-21 Thread Juan Pan
Hi community,




Thanks for your attention. : )




Currently, the Apache ShardingSphere community plans to leverage Apache Calcite 
to implement federated SQL queries, 

i.e., queries across different database instances [1].




The draft approach is to use a custom adapter together with ShardingSphere's 
own SQL parser (based on Antlr), 

transforming the parse result into Calcite's relational algebra. 

Lastly, Calcite will execute the SQL by means of the custom adapter. 




Currently, I know the entry point for calling the custom adapter is 
`DriverManager.getConnection(CalciteUrl)`, which gets Calcite's SQL parsing 
involved. 

But we want to avoid parsing the SQL twice, which means we wish to skip the SQL 
parsing of Calcite.




My question is how we can leverage a Calcite adapter without using the Calcite 
parser.

Could you give me some hints?




Any help or reply is much appreciated.




Regards,

Trista







[1] https://github.com/apache/shardingsphere/issues/8284
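
To make the entry point concrete, here is a minimal sketch of that 
`DriverManager.getConnection(CalciteUrl)` route with a JDBC adapter registered 
programmatically. The URL, driver class, schema name, and credentials are 
placeholders, not part of the actual plan:

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;
import javax.sql.DataSource;
import org.apache.calcite.adapter.jdbc.JdbcSchema;
import org.apache.calcite.jdbc.CalciteConnection;
import org.apache.calcite.schema.SchemaPlus;

public final class AdapterSketch {
  public static void main(String[] args) throws Exception {
    Properties info = new Properties();
    info.setProperty("lex", "MYSQL");
    Connection connection = DriverManager.getConnection("jdbc:calcite:", info);
    CalciteConnection calciteConnection = connection.unwrap(CalciteConnection.class);
    SchemaPlus rootSchema = calciteConnection.getRootSchema();

    // One sub-schema per database instance; placeholder connection details.
    DataSource ds = JdbcSchema.dataSource(
        "jdbc:mysql://host1/db1", "com.mysql.cj.jdbc.Driver", "user", "secret");
    rootSchema.add("DS1", JdbcSchema.create(rootSchema, "DS1", ds, null, null));

    // SQL issued through `connection` now reaches the adapter, but it also
    // passes through Calcite's parser, which is the step we want to skip.
  }
}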



 Juan Pan (Trista)
 
Senior DBA & PMC of Apache ShardingSphere
E-mail: panj...@apache.org





Re: Draft board report for January 2020

2020-01-07 Thread Juan Pan
Totally agree. Rotating the chair annually and running the Calcite community 
so actively and in such good order deserves praise.


 Juan Pan (Trista) 
 
Senior DBA & PPMC of Apache ShardingSphere(Incubating)
E-mail: panj...@apache.org




On 01/7/2020 04:17,Julian Hyde wrote:
Is it worth mentioning that we have a new PMC chair? (Of course you’re too 
modest to mention it.)

I am proud of the fact that we change the chair annually, and are now on our 
fifth (distinct) chair. Orderly transfer of power is a mark of a stable 
democracy.

On Jan 5, 2020, at 2:29 AM, Stamatis Zampetakis  wrote:

@Andrei: If I remember well both Ignite and Hazelcast decided to adopt
Calcite and we mentioned Ignite in the previous board report.

Best,
Stamatis

On Thu, Jan 2, 2020 at 9:49 PM Andrei Sereda  wrote:

+1

Question regarding Hazelcast:

Finally, the Hazelcast system has decided to adopt Calcite for query
planning.

Was it the Ignite [1] or the Hazelcast team that adopted (prototyped) Calcite?


[1]
https://lists.apache.org/thread.html/4211dbbe35690e70462370886afcbb35419ff016b0ee604acf07a4d3%40%3Cdev.ignite.apache.org%3E

On Thu, Jan 2, 2020 at 3:42 PM Rui Wang  wrote:

Looks nice! Thank you Stamatis!



-Rui



On Wed, Jan 1, 2020 at 6:52 PM Matt Wang  wrote:

+1, looks good. Thanks~


---
Best,
Matt Wang


On 01/2/2020 09:57,Chunwei Lei wrote:
+1, looks good.
Thanks, Stamatis~~


Best,
Chunwei


On Thu, Jan 2, 2020 at 8:41 AM Haisheng Yuan 
wrote:

+1, looks good to me.
Thanks.

- Haisheng

--
From: Francis Chuang
Date: 2020-01-02 04:54:46
To:
Subject: Re: Draft board report for January 2020

+1, looks good, Stamatis!

On 1/01/2020 9:18 pm, Stamatis Zampetakis wrote:
Attached below is a draft of this month's board report. I plan to submit it
on January 7. Please let me know if you have any additions or corrections.

## Description:
Apache Calcite is a highly customizable framework for parsing and planning
queries on data in a wide variety of formats. It allows database-like access,
and in particular a SQL interface and advanced query optimization, for data
not residing in a traditional database.

Avatica is a sub-project within Calcite and provides a framework for building
local and remote JDBC and ODBC database drivers. Avatica has an independent
release schedule and its own repository.

## Issues:
There are no issues requiring board attention.

## Membership Data:
Apache Calcite was founded 2015-10-22 (4 years ago).
There are currently 45 committers and 22 PMC members in this project.
The Committer-to-PMC ratio is roughly 2:1.

Community changes, past quarter:
- Danny Chen was added to the PMC on 2019-10-30.
- Haisheng Yuan was added to the PMC on 2019-11-11.
- Stamatis Zampetakis was appointed as PMC chair on 2019-12-18,
continuing the tradition of the project of rotating the chair every
year.
- No new committers. Last addition was Mohamed Mohsen on 2019-09-17.

## Project Activity:
Calcite 1.21.0 was released in the middle of September, including more than
100 resolved issues and maintaining a release cadence of roughly one release
per quarter.

Calcite 1.22.0 is under preparation and is expected to be released within
January; at the moment it contains more than 230 commits and 150 resolved
issues.

Avatica 1.16.0 was released in the middle of December, including numerous bug
fixes and security improvements, while the build system has been migrated
from Maven to Gradle.

The build and test infrastructure has been modernized for both Calcite and
Avatica, with the migration from Maven to Gradle, JUnit4 to JUnit5, and the
introduction of GitHub Actions as part of the CI. The changes shall improve
developer experience and code quality, and protect better against
regressions.

Members of the project participated in ApacheCon EU in October and Flink
Forward Asia in November, representing the community and presenting talks
about Calcite.

Finally, the Hazelcast system has decided to adopt Calcite for query
planning.

## Community Health:

Activity levels on mailing lists (37%), git (40%) and JIRA (opened 15%, closed
19%) have increased significantly in the last quarter. One reason is the
modernization of the build and test infrastructure for both Calcite and
Avatica, which triggered many discussions and follow-up tasks. Another reason
is the changes in the roster of the PMC and open discussions about the future
of the project. Last but not least is the involvement of new people in the
community, bringing up new challenges and ideas for improvements.

The rate of pull requests being closed and merged on GitHub has increased by
16%, as we work to clear our backlog. Nevertheless, the number of open pull
requests is still big, since the number of committers who get involved in
reviews is rather small. Furthermore, there are pull requests which are stale,
work in progress, or proposals, which make the numbers look even bigger. On
the positive side every pul

Re: [QUESTION] How could getTableName(columnIndex) return the correct result?

2020-01-07 Thread Juan Pan
Thanks for your explanation, Julian. Does that mean an improvement to this JDBC 
interface may be included in the next release of Calcite?


 Juan Pan (Trista) 
 
Senior DBA & PPMC of Apache ShardingSphere(Incubating)
E-mail: panj...@apache.org




On 01/7/2020 11:17,Julian Hyde wrote:
Yes, we should be returning “” rather than null.

(Not an excuse, but that method is so old that I suspect that the authors of 
JDBC were still thinking in terms of ODBC. In C it’s difficult to return a 
null, it’s easier to return an empty string.)

Julian







Re: [QUESTION] How could getTableName(columnIndex) return the correct result?

2020-01-06 Thread Juan Pan
FYI. 


The following information comes from `java.sql.ResultSetMetaData`.


/**
 * Gets the designated column's table name.
 *
 * @param column the first column is 1, the second is 2, ...
 * @return table name or "" if not applicable
 * @exception SQLException if a database access error occurs
 */
String getTableName(int column) throws SQLException;


 Juan Pan (Trista) 
 
Senior DBA & PPMC of Apache ShardingSphere(Incubating)
E-mail: panj...@apache.org









Re: [QUESTION] How could getTableName(columnIndex) return the correct result?

2020-01-06 Thread Juan Pan
Hi Julian,


You’re right. From my tests, since “a” does not come from table test, 
getTableName(columnIndex) returns an `empty string` from MySQL and H2, and 
`null` from Calcite. That makes sense.
The scenario I ran into is that some third-party applications or open-source 
projects call JDBC interfaces such as getTableName(columnIndex). 
As a result, when they call getTableName(columnIndex), the null result from 
Calcite makes them throw an NPE, whereas the empty string from the databases 
avoids this case.
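
Until the two behaviors converge, a tiny defensive wrapper on the caller's 
side avoids the NPE (just a sketch on the application side, not a Calcite API):

import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;

final class MetaDataUtil {
  // Normalizes a driver's null to the "" that java.sql.ResultSetMetaData documents.
  static String tableNameOrEmpty(ResultSet rs, int column) throws SQLException {
    ResultSetMetaData md = rs.getMetaData();
    String table = md.getTableName(column);
    return table == null ? "" : table;
  }
}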


Julian, your help is much appreciated. :-)


Best wishes,
Trista


 Juan Pan (Trista) 
 
Senior DBA & PPMC of Apache ShardingSphere(Incubating)
E-mail: panj...@apache.org




On 01/7/2020 04:12,Julian Hyde wrote:
JDBC table names and column names are of limited use. They tell you where a 
particular column comes from, and your “a” column does not come (directly) from 
a table. I think you’ll find that Calcite is implementing the JDBC standard 
correctly, and is consistent with other databases.

What do you need the table name for?

If you want to understand the structure of the query - e.g. the fact that the 
query is sourced from the “test” table - then you might be better off working 
with the SqlNode or RelNode representations. The RelNode representation of your 
query is


Aggregate(count(*) as a)
^
|
TableScan(“test”)

and that probably tells you what you need to know.

Julian







[QUESTION] How could getTableName(columnIndex) return the correct result?

2020-01-05 Thread Juan Pan


Hi Calcite Community,


Thanks for your attention. After failing to help myself by debugging the 
source code, I am sending this email for your help. :)


My query SQL is `SELECT count(*) a FROM test`, and I called the JDBC interface 
`ResultSet.getMetaData().getTableName(1)` to get the table name, i.e., test; 
however, the result is null.
I traced the process and found that `SqlValidatorImpl.java` returns null if 
`!(selectItem instanceof SqlIdentifier)`. Is there any way to get the real 
table name, i.e., test?


Thanks in advance,


Trista






 Juan Pan (Trista) 
 
Senior DBA & PPMC of Apache ShardingSphere(Incubating)
E-mail: panj...@apache.org





Re: Quicksql

2019-12-22 Thread Juan Pan
Thanks, Gelbana,


Your explanation is much appreciated; it sheds some light on my exploration of 
Calcite. :)


Best wishes,
Trista


 Juan Pan (Trista) 
 
Senior DBA & PPMC of Apache ShardingSphere(Incubating)
E-mail: panj...@apache.org




On 12/22/2019 05:58,Muhammad Gelbana wrote:
I am curious how to join the tables from different datasources.
Based on Calcite's concept of conventions, the Join operator and its input
operators should all have the same convention. If they don't, the convention
that differs from the Join operator's convention will have to register a
converter rule. This rule should produce an operator that converts from that
convention to the Join operator's convention.

This way the Join operator will be able to handle the data obtained from
its input operators because it understands the data structure.

Thanks,
Gelbana



Re: Quicksql

2019-12-17 Thread Juan Pan
Some updates.


Recently I took a look at its docs and source code, and found that this 
project uses Calcite's SQL parsing and relational algebra to get a query plan, 
translating it into Spark SQL to join across different datasources, or into 
the corresponding query for a single datasource.


Although it copies many classes from Calcite, the idea of QuickSQL seems 
interesting, and the code is succinct.


Best,
Trista 


 Juan Pan (Trista) 
 
Senior DBA & PPMC of Apache ShardingSphere(Incubating)
E-mail: panj...@apache.org





Re: Quicksql

2019-12-13 Thread Juan Pan
Thanks for your clarification, Haisheng.


I am curious how to join the tables from different datasources. 


Supposing there is tb1 in datasource1 and tb2 in datasource2, and the SQL is 
`select tb1.col1, tb2.col2 from tb1, tb2 where tb1.id = tb2.id`, how are the 
two tables joined together to get the final result?
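
For comparison, when plain Calcite (rather than QuickSQL's Spark rewrite) 
handles such a query, both instances are registered as sub-schemas on one 
connection and Calcite evaluates the join itself. A minimal sketch, with the 
schema and table names as placeholders:

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

final class CrossSourceJoinSketch {
  // `connection` is a Calcite connection with sub-schemas datasource1 and
  // datasource2 (e.g. two JdbcSchema instances) already registered.
  static void join(Connection connection) throws Exception {
    try (Statement statement = connection.createStatement();
         ResultSet rs = statement.executeQuery(
             "SELECT t1.col1, t2.col2 "
                 + "FROM datasource1.tb1 AS t1 "
                 + "JOIN datasource2.tb2 AS t2 ON t1.id = t2.id")) {
      // Calcite pushes each scan (plus any pushable filters) down to its own
      // source and evaluates the join itself under the Enumerable convention.
      while (rs.next()) {
        System.out.println(rs.getString(1) + ", " + rs.getString(2));
      }
    }
  }
}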


 Juan Pan (Trista) 
 
Senior DBA & PPMC of Apache ShardingSphere(Incubating)
E-mail: panj...@apache.org




On 12/12/2019 11:05,Haisheng Yuan wrote:
Nope, it doesn't use any adapters. It just submits partial SQL query to 
different engines.

If query contains table from single source, e.g.
select count(*) from hive_table1, hive_table2 where a=b;
then the whole query will be submitted to hive.

Otherwise, e.g.
select distinct a,b from hive_table union select distinct a,b from mysql_table;

The following query will be submitted to Spark and executed by Spark:
select a,b from spark_tmp_table1 union select a,b from spark_tmp_table2;

spark_tmp_table1: select distinct a,b from hive_table
spark_tmp_table2: select distinct a,b from mysql_table

On 2019/12/11 04:27:07, "Juan Pan"  wrote:
Hi Haisheng,


The query on different data source will then be registered as temp spark tables 
(with filter or join pushed in), the whole query is rewritten as SQL text over 
these temp tables and submitted to Spark.


Does that mean QuickSQL also needs adapters to execute queries on different
data sources?


Yes, virtualization is one of Calcite’s goals. In fact, when I created Calcite 
I was thinking about virtualization + in-memory materialized views. Not only 
the Spark convention but any of the “engine” conventions (Drill, Flink, Beam, 
Enumerable) could be used to create a virtual query engine.


Basically, I like and agree with Julian’s statement. It is a great idea, and I
personally hope Calcite moves towards it.


Give my best wishes to Calcite community.


Thanks,
Trista


Juan Pan


panj...@apache.org
Juan Pan(Trista), Apache ShardingSphere


On 12/11/2019 10:53,Haisheng Yuan wrote:
As far as I know, users still need to register tables from other data sources 
before querying it. QuickSQL uses Calcite for parsing queries and optimizing 
logical expressions with several transformation rules. The query on different 
data source will then be registered as temp spark tables (with filter or join 
pushed in), the whole query is rewritten as SQL text over these temp tables and 
submitted to Spark.

- Haisheng

--
From: Rui Wang
Date: 2019-12-11 06:24:45
To:
Subject: Re: Quicksql

The co-routine model sounds like it fits streaming cases well.

I was thinking about how the Enumerable interface should work with streaming
cases, but now I should also check the Interpreter.


-Rui

On Tue, Dec 10, 2019 at 1:33 PM Julian Hyde  wrote:

The goal (or rather my goal) for the interpreter is to replace
Enumerable as the quick, easy default convention.

Enumerable is efficient but not that efficient (compared to engines
that work on off-heap data representing batches of records). And
because it generates java byte code there is a certain latency to
getting a query prepared and ready to run.

It basically implements the old Volcano query evaluation model. It is
single-threaded (because all work happens as a result of a call to
'next()' on the root node) and cannot handle branching data-flow
graphs (DAGs).

The Interpreter uses a co-routine model (reading from queues,
writing to queues, and yielding when there is no work to be done) and
therefore could be more efficient than enumerable in a single-node
multi-core system. Also, there is little start-up time, which is
important for small queries.

I would love to add another built-in convention that uses Arrow as
data format and generates co-routines for each operator. Those
co-routines could be deployed in a parallel and/or distributed data
engine.

Julian
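
To make this concrete, here is a minimal sketch (not from the original mail)
of driving the Interpreter over a toy ScannableTable; the table, schema name
and columns are made up, and the DataContext is deliberately bare-bones:

import org.apache.calcite.DataContext;
import org.apache.calcite.adapter.java.JavaTypeFactory;
import org.apache.calcite.interpreter.Interpreter;
import org.apache.calcite.jdbc.JavaTypeFactoryImpl;
import org.apache.calcite.linq4j.Enumerable;
import org.apache.calcite.linq4j.Linq4j;
import org.apache.calcite.linq4j.QueryProvider;
import org.apache.calcite.rel.RelNode;
import org.apache.calcite.rel.type.RelDataType;
import org.apache.calcite.rel.type.RelDataTypeFactory;
import org.apache.calcite.schema.ScannableTable;
import org.apache.calcite.schema.SchemaPlus;
import org.apache.calcite.schema.impl.AbstractTable;
import org.apache.calcite.sql.type.SqlTypeName;
import org.apache.calcite.tools.Frameworks;
import org.apache.calcite.tools.RelBuilder;

public class InterpreterSketch {
  /** A toy two-column, two-row table; ScannableTable is the simplest
   * contract the interpreter's table scan understands. */
  static class SimpleTable extends AbstractTable implements ScannableTable {
    @Override public RelDataType getRowType(RelDataTypeFactory typeFactory) {
      return typeFactory.builder()
          .add("ID", SqlTypeName.INTEGER)
          .add("NAME", SqlTypeName.VARCHAR)
          .build();
    }

    @Override public Enumerable<Object[]> scan(DataContext root) {
      return Linq4j.asEnumerable(new Object[][] {{1, "a"}, {2, "b"}});
    }
  }

  public static void main(String[] args) {
    final SchemaPlus root = Frameworks.createRootSchema(true);
    root.add("EMPS", new SimpleTable());

    // Build a plain LogicalTableScan; no byte code generation is involved.
    final RelBuilder builder = RelBuilder.create(
        Frameworks.newConfigBuilder().defaultSchema(root).build());
    final RelNode rel = builder.scan("EMPS").build();

    // A bare-bones DataContext; a real engine would supply more here.
    final DataContext ctx = new DataContext() {
      private final JavaTypeFactory typeFactory = new JavaTypeFactoryImpl();
      @Override public SchemaPlus getRootSchema() { return root; }
      @Override public JavaTypeFactory getTypeFactory() { return typeFactory; }
      @Override public QueryProvider getQueryProvider() { return null; }
      @Override public Object get(String name) { return null; }
    };

    try (Interpreter interpreter = new Interpreter(ctx, rel)) {
      for (Object[] row : interpreter) {
        System.out.println(java.util.Arrays.toString(row));
      }
    }
  }
}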

On Tue, Dec 10, 2019 at 3:47 AM Zoltan Farkas
 wrote:

What is the ultimate goal of the Calcite Interpreter?

To provide some context, I have been playing around with calcite + REST
(see https://github.com/zolyfarkas/jaxrs-spf4j-demo/wiki/AvroCalciteRest for
detail of my experiments)


—Z

On Dec 9, 2019, at 9:05 PM, Julian Hyde  wrote:

Yes, virtualization is one of Calcite’s goals. In fact, when I created
Calcite I was thinking about virtualization + in-memory materialized views.
Not only the Spark convention but any of the “engine” conventions (Drill,
Flink, Beam, Enumerable) could be used to create a virtual query engine.

See e.g. a talk I gave in 2013 about Optiq (precursor to Calcite)
https://www.slideshare.net/julianhyde/optiq-a-dynamic-data-management-framework

Julian



On Dec 9, 2019, at 2:29 PM, Muhammad Gelbana 
wrote:

I recently contacted one of the active contributors asking about the purpose
of the project and here's his reply: …

Re: Quicksql

2019-12-13 Thread Juan Pan
Yes, indeed.


 Juan Pan (Trista) 
 
Senior DBA & PPMC of Apache ShardingSphere(Incubating)
E-mail: panj...@apache.org




On 12/12/2019 18:00,Alessandro Solimando wrote:
Adapters would be needed for data sources that do not support SQL; I think
this is what Juan Pan was asking about.

On Thu, 12 Dec 2019 at 04:05, Haisheng Yuan  wrote:

Nope, it doesn't use any adapters. It just submits partial SQL queries to
different engines.

If a query only contains tables from a single source, e.g.
select count(*) from hive_table1, hive_table2 where a=b;
then the whole query will be submitted to Hive.

Otherwise, e.g.
select distinct a,b from hive_table union select distinct a,b from
mysql_table;

The following query will be submitted to Spark and executed by Spark:
select a,b from spark_tmp_table1 union select a,b from spark_tmp_table2;

spark_tmp_table1: select distinct a,b from hive_table
spark_tmp_table2: select distinct a,b from mysql_table

On 2019/12/11 04:27:07, "Juan Pan"  wrote:
Hi Haisheng,


The query on different data source will then be registered as temp
spark tables (with filter or join pushed in), the whole query is rewritten
as SQL text over these temp tables and submitted to Spark.


Does it mean QuickSQL also needs adaptors to make queries execute on
different data sources?


Yes, virtualization is one of Calcite’s goals. In fact, when I created
Calcite I was thinking about virtualization + in-memory materialized views.
Not only the Spark convention but any of the “engine” conventions (Drill,
Flink, Beam, Enumerable) could be used to create a virtual query engine.


Basically, I like and agree with Julian's statement. It is a great idea
which I personally hope Calcite moves towards.


My best wishes to the Calcite community.


Thanks,
Trista


Juan Pan


panj...@apache.org
Juan Pan(Trista), Apache ShardingSphere


On 12/11/2019 10:53,Haisheng Yuan wrote:
As far as I know, users still need to register tables from other data
sources before querying them. QuickSQL uses Calcite for parsing queries and
optimizing logical expressions with several transformation rules. The query
on different data source will then be registered as temp spark tables (with
filter or join pushed in), the whole query is rewritten as SQL text over
these temp tables and submitted to Spark.

- Haisheng

--
From: Rui Wang
Date: 2019-12-11 06:24:45
To:
Subject: Re: Quicksql

The co-routine model sounds like it fits streaming cases well.

I was thinking about how the Enumerable interface should work with streaming
cases, but now I should also check the Interpreter.


-Rui

On Tue, Dec 10, 2019 at 1:33 PM Julian Hyde  wrote:

The goal (or rather my goal) for the interpreter is to replace
Enumerable as the quick, easy default convention.

Enumerable is efficient but not that efficient (compared to engines
that work on off-heap data representing batches of records). And
because it generates java byte code there is a certain latency to
getting a query prepared and ready to run.

It basically implements the old Volcano query evaluation model. It is
single-threaded (because all work happens as a result of a call to
'next()' on the root node) and cannot handle branching data-flow
graphs (DAGs).

The Interpreter uses a co-routine model (reading from queues,
writing to queues, and yielding when there is no work to be done) and
therefore could be more efficient than enumerable in a single-node
multi-core system. Also, there is little start-up time, which is
important for small queries.

I would love to add another built-in convention that uses Arrow as
data format and generates co-routines for each operator. Those
co-routines could be deployed in a parallel and/or distributed data
engine.

Julian

On Tue, Dec 10, 2019 at 3:47 AM Zoltan Farkas
 wrote:

What is the ultimate goal of the Calcite Interpreter?

To provide some context, I have been playing around with calcite + REST
(see https://github.com/zolyfarkas/jaxrs-spf4j-demo/wiki/AvroCalciteRest for
detail of my experiments)


—Z

On Dec 9, 2019, at 9:05 PM, Julian Hyde  wrote:

Yes, virtualization is one of Calcite’s goals. In fact, when I created
Calcite I was thinking about virtualization + in-memory materialized
views.
Not only the Spark convention but any of the “engine” conventions (Drill,
Flink, Beam, Enumerable) could be used to create a virtual query engine.

See e.g. a talk I gave in 2013 about Optiq (precursor to Calcite)

https://www.slideshare.net/julianhyde/optiq-a-dynamic-data-management-framework

Julian



On Dec 9, 2019, at 2:29 PM, Muhammad Gelbana 
wrote:

I recently contacted one of the active contributors asking about the
purpose of the project and here's his reply:

From my understanding, Quicksql is a data virtualization platform. …

Re: Quicksql

2019-12-10 Thread Juan Pan
Hi Haisheng,


> The query on different data source will then be registered as temp spark 
> tables (with filter or join pushed in), the whole query is rewritten as SQL 
> text over these temp tables and submitted to Spark.


Does it mean QuickSQL also needs adaptors to make queries execute on different 
data sources?


> Yes, virtualization is one of Calcite’s goals. In fact, when I created 
> Calcite I was thinking about virtualization + in-memory materialized views. 
> Not only the Spark convention but any of the “engine” conventions (Drill, 
> Flink, Beam, Enumerable) could be used to create a virtual query engine.


Basically, I like and agree with Julian's statement. It is a great idea which 
I personally hope Calcite moves towards.


My best wishes to the Calcite community.


Thanks,
Trista


 Juan Pan


panj...@apache.org
Juan Pan(Trista), Apache ShardingSphere


On 12/11/2019 10:53,Haisheng Yuan wrote:
As far as I know, users still need to register tables from other data sources 
before querying them. QuickSQL uses Calcite for parsing queries and optimizing 
logical expressions with several transformation rules. The query on different 
data source will then be registered as temp spark tables (with filter or join 
pushed in), the whole query is rewritten as SQL text over these temp tables and 
submitted to Spark.

- Haisheng

--
From: Rui Wang
Date: 2019-12-11 06:24:45
To:
Subject: Re: Quicksql

The co-routine model sounds like it fits streaming cases well.

I was thinking about how the Enumerable interface should work with streaming
cases, but now I should also check the Interpreter.


-Rui

On Tue, Dec 10, 2019 at 1:33 PM Julian Hyde  wrote:

The goal (or rather my goal) for the interpreter is to replace
Enumerable as the quick, easy default convention.

Enumerable is efficient but not that efficient (compared to engines
that work on off-heap data representing batches of records). And
because it generates java byte code there is a certain latency to
getting a query prepared and ready to run.

It basically implements the old Volcano query evaluation model. It is
single-threaded (because all work happens as a result of a call to
'next()' on the root node) and cannot handle branching data-flow
graphs (DAGs).

The Interpreter uses a co-routine model (reading from queues,
writing to queues, and yielding when there is no work to be done) and
therefore could be more efficient than enumerable in a single-node
multi-core system. Also, there is little start-up time, which is
important for small queries.

I would love to add another built-in convention that uses Arrow as
data format and generates co-routines for each operator. Those
co-routines could be deployed in a parallel and/or distributed data
engine.

Julian

On Tue, Dec 10, 2019 at 3:47 AM Zoltan Farkas
 wrote:

What is the ultimate goal of the Calcite Interpreter?

To provide some context, I have been playing around with calcite + REST
(see https://github.com/zolyfarkas/jaxrs-spf4j-demo/wiki/AvroCalciteRest for
detail of my experiments)


—Z

On Dec 9, 2019, at 9:05 PM, Julian Hyde  wrote:

Yes, virtualization is one of Calcite’s goals. In fact, when I created
Calcite I was thinking about virtualization + in-memory materialized views.
Not only the Spark convention but any of the “engine” conventions (Drill,
Flink, Beam, Enumerable) could be used to create a virtual query engine.

See e.g. a talk I gave in 2013 about Optiq (precursor to Calcite)
https://www.slideshare.net/julianhyde/optiq-a-dynamic-data-management-framework

Julian



On Dec 9, 2019, at 2:29 PM, Muhammad Gelbana 
wrote:

I recently contacted one of the active contributors asking about the
purpose of the project and here's his reply:

From my understanding, Quicksql is a data virtualization platform. It can
query multiple data sources altogether and in a distributed way. Say, you
can write a SQL query that joins a MySQL table with an Elasticsearch table.
Quicksql can recognize that, and then generate Spark code, in which it will
fetch the MySQL/ES data as temporary tables separately, and then join them
in Spark. The execution is in Spark so it is totally distributed. The user
doesn't need to be aware of where the table is from.


I understand that Calcite's Spark convention attempts to achieve the same
goal, but it isn't fully implemented yet.


On Tue, Oct 29, 2019 at 9:43 PM Julian Hyde  wrote:

Anyone know anything about Quicksql? It seems to be quite a popular
project, and they have an internal fork of Calcite.

https://github.com/Qihoo360/



https://github.com/Qihoo360/Quicksql/tree/master/analysis/src/main/java/org/apache/calcite


Julian








Re: [QUESTION] Build Calcite using Gradle

2019-11-26 Thread Juan Pan
Hi Vladimir,


Super thanks for your suggestion! All the issues disappeared after upgrading 
IDEA. This is a very important first step for me to continue exploring 
Calcite. :)


Thanks to Cheng and Rui as well for helping me look for the cause of the 
issue. :-)


 Juan Pan


panj...@apache.org
Juan Pan(Trista), Apache ShardingSphere


On 11/26/2019 19:20,Vladimir Sitnikov wrote:
IntelliJ IDEA 2017.3.5 (Ultimate Edition)

Could you please upgrade to 2019.3?

Vladimir


Re: [QUESTION] Build Calcite using Gradle

2019-11-26 Thread Juan Pan
Sure, feedback will come after upgrading.


Thanks Vladimir.


 Juan Pan


panj...@apache.org
Juan Pan(Trista), Apache ShardingSphere
On 11/26/2019 19:20, Vladimir Sitnikov wrote:
>IntelliJ IDEA 2017.3.5 (Ultimate Edition)

Could you please upgrade to 2019.3?

Vladimir


Re: [QUESTION] Build Calcite using Gradle

2019-11-26 Thread Juan Pan
Hello Vladimir,


Thanks, here is version of IDEA:


IntelliJ IDEA 2017.3.5 (Ultimate Edition)
Build #IU-173.4674.33, built on March 6, 2018
Licensed to The Apache Software Foundation / juan pan
Subscription is active until February 29, 2020
JRE: 1.8.0_152-release-1024-b15 x86_64
JVM: OpenJDK 64-Bit Server VM by JetBrains s.r.o
Mac OS X 10.13.6


 Juan Pan


panj...@apache.org
Juan Pan(Trista), Apache ShardingSphere


On 11/26/2019 18:57,Vladimir Sitnikov wrote:
Juan> tried to `Go to File > Open… and open up

Which version of IDEA you are using?

Vladimir


Re: [QUESTION] Build Calcite using Gradle

2019-11-26 Thread Juan Pan
Hi Cheng,


Sorry for the late reply; I was trying to solve this myself, but failed…
After cloning Calcite from GitHub, it is not recognized as a Gradle project, 
and in this condition no test can run.


I tried to `Go to File > Open… and open up Calcite's root build.gradle.kts 
file` according to the docs[1], and got the bad result `calcite: sync failed. 
—> Failed to notify build listener.` after 2h 18m 986ms had passed… Oh dear!


I would very much appreciate it if you could give me some ideas. :)


[1] https://calcite.apache.org/docs/howto.html#building-from-git


 Juan Pan


panj...@apache.org
Juan Pan(Trista), Apache ShardingSphere


On 11/26/2019 16:46,Shuo Cheng wrote:
`./gradlew build -x test` works for me, what exception do you encounter?
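
(As a side note, not from this thread: a single test class can be run or
debugged through Gradle's standard test filter, e.g.
`./gradlew :file:test --tests org.apache.calcite.adapter.file.FileReaderTest`.)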

On Tue, Nov 26, 2019 at 4:22 PM Juan Pan  wrote:

Hi Cheng and Rui,


Thanks for your replies. I guessed so, but my VPN cannot make link[1]
reachable either. :(


Actually, I just want to debug some tests; however, the issue mentioned in
the thread `IntelliJ and Gradle` is also an obstacle for me, so I have to
run `./gradlew build`, and then this exception came out.


I tried to run `./gradlew build -xtest` to solve the problem from the thread
`IntelliJ and Gradle`, but it didn't work. How should I go about debugging
some unit tests?


Best wishes,
Trista


[1]
http://en.wikipedia.org/wiki/List_of_United_States_cities_by_population


Juan Pan


panj...@apache.org
Juan Pan(Trista), Apache ShardingSphere


On 11/26/2019 15:59,Shuo Cheng wrote:
First make sure the URLs in `FileReaderTest`, e.g.
en.wikipedia.org/wiki/List_of_states_and_territories_of_the_United_States,
can be accessed from your machine.
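
A quick way to verify is to perform the same fetch the test does (a sketch;
jsoup is the library the file adapter already uses):

// Throws an IOException if the page cannot be fetched.
org.jsoup.Jsoup.connect(
    "http://en.wikipedia.org/wiki/List_of_states_and_territories_of_the_United_States")
    .get();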

On Tue, Nov 26, 2019 at 3:49 PM Juan Pan  wrote:

Hi everyone,


After migrating Calcite to Gradle, I built Calcite following the docs[1]
and got the following exception. As I have no idea how to handle it, I am
sending this help-seeking email.


Thanks in advance.


Best wishes,
Trista


[1] https://calcite.apache.org/docs/howto.html#building-from-git




org.apache.calcite.adapter.file.FileReaderTest > testJsonFile
STANDARD_ERROR
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for
further details.
5.2sec, org.apache.calcite.adapter.file.FileReaderTest >
testJsonFile
FAILURE   0.6sec, org.apache.calcite.adapter.file.FileReaderTest >
testFileReaderUrlNoPath


org.apache.calcite.adapter.file.FileReaderTest > testFileReaderUrlNoPath
FAILED
org.apache.calcite.adapter.file.FileReaderException: Cannot read //
en.wikipedia.org/wiki/List_of_states_and_territories_of_the_United_States
at
org.apache.calcite.adapter.file.FileReader.getTable(FileReader.java:72)
at
org.apache.calcite.adapter.file.FileReader.refresh(FileReader.java:131)
at

org.apache.calcite.adapter.file.FileReaderTest.testFileReaderUrlNoPath(FileReaderTest.java:80)


Caused by:
java.net.SocketException: Connection reset
at java.net.SocketInputStream.read(SocketInputStream.java:210)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at
java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
at
java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
at
java.io.BufferedInputStream.read(BufferedInputStream.java:345)
at
sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:735)
at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:678)
at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:706)
at

sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1587)
at

sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1492)
at
java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:480)
at
org.jsoup.helper.HttpConnection$Response.execute(HttpConnection.java:750)
at
org.jsoup.helper.HttpConnection$Response.execute(HttpConnection.java:722)
at
org.jsoup.helper.HttpConnection.execute(HttpConnection.java:306)
at org.jsoup.helper.HttpConnection.get(HttpConnection.java:295)
at org.jsoup.Jsoup.parse(Jsoup.java:183)
at
org.apache.calcite.adapter.file.FileReader.getTable(FileReader.java:69)
... 2 more
FAILURE  28.0sec,   15 completed,   1 failed,   2 skipped,
org.apache.calcite.adapter.file.FileReaderTest
WARNING   4.3sec,   16 completed,   0 failed,   1 skipped,
org.apache.calcite.adapter.file.SqlTest
FAILURE  34.0sec,   31 completed,   1 failed,   3 skipped, Gradle Test Run
:file:test


31 tests completed, 1 failed, 3 skipped


Task :file:test FAILED


Juan Pan


panj...@apache.org
Juan Pan(Trista), Apache ShardingSphere





Re: [QUESTION] Build Calcite using Gradle

2019-11-26 Thread Juan Pan
Hi Cheng and Rui,


Thanks for your replies. I guessed so, but my VPN cannot make link[1] reachable 
either. :(


Actually, I just want to debug some tests; however, the issue mentioned in the 
thread `IntelliJ and Gradle` is also an obstacle for me, so I have to run 
`./gradlew build`, and then this exception came out. 


I tried to run `./gradlew build -xtest` to solve the problem from the thread 
`IntelliJ and Gradle`, but it didn't work. How should I go about debugging some 
unit tests?


Best wishes,
Trista


[1] http://en.wikipedia.org/wiki/List_of_United_States_cities_by_population


 Juan Pan


panj...@apache.org
Juan Pan(Trista), Apache ShardingSphere


On 11/26/2019 15:59,Shuo Cheng wrote:
First make sure the URLs in `FileReaderTest`, e.g.
en.wikipedia.org/wiki/List_of_states_and_territories_of_the_United_States,
can be accessed from your machine.

On Tue, Nov 26, 2019 at 3:49 PM Juan Pan  wrote:

Hi everyone,


After migrating Calcite to Gradle, I built Calcite following the docs[1]
and got the following exception. As I have no idea how to handle it, I am
sending this help-seeking email.


Thanks in advance.


Best wishes,
Trista


[1] https://calcite.apache.org/docs/howto.html#building-from-git




org.apache.calcite.adapter.file.FileReaderTest > testJsonFile
STANDARD_ERROR
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for
further details.
5.2sec, org.apache.calcite.adapter.file.FileReaderTest >
testJsonFile
FAILURE   0.6sec, org.apache.calcite.adapter.file.FileReaderTest >
testFileReaderUrlNoPath


org.apache.calcite.adapter.file.FileReaderTest > testFileReaderUrlNoPath
FAILED
org.apache.calcite.adapter.file.FileReaderException: Cannot read //
en.wikipedia.org/wiki/List_of_states_and_territories_of_the_United_States
at
org.apache.calcite.adapter.file.FileReader.getTable(FileReader.java:72)
at
org.apache.calcite.adapter.file.FileReader.refresh(FileReader.java:131)
at
org.apache.calcite.adapter.file.FileReaderTest.testFileReaderUrlNoPath(FileReaderTest.java:80)


Caused by:
java.net.SocketException: Connection reset
at java.net.SocketInputStream.read(SocketInputStream.java:210)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at
java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
at
java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
at
java.io.BufferedInputStream.read(BufferedInputStream.java:345)
at
sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:735)
at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:678)
at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:706)
at
sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1587)
at
sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1492)
at
java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:480)
at
org.jsoup.helper.HttpConnection$Response.execute(HttpConnection.java:750)
at
org.jsoup.helper.HttpConnection$Response.execute(HttpConnection.java:722)
at
org.jsoup.helper.HttpConnection.execute(HttpConnection.java:306)
at org.jsoup.helper.HttpConnection.get(HttpConnection.java:295)
at org.jsoup.Jsoup.parse(Jsoup.java:183)
at
org.apache.calcite.adapter.file.FileReader.getTable(FileReader.java:69)
... 2 more
FAILURE  28.0sec,   15 completed,   1 failed,   2 skipped,
org.apache.calcite.adapter.file.FileReaderTest
WARNING   4.3sec,   16 completed,   0 failed,   1 skipped,
org.apache.calcite.adapter.file.SqlTest
FAILURE  34.0sec,   31 completed,   1 failed,   3 skipped, Gradle Test Run
:file:test


31 tests completed, 1 failed, 3 skipped


Task :file:test FAILED


Juan Pan


panj...@apache.org
Juan Pan(Trista), Apache ShardingSphere




[QUESTION] Build Calcite using Gradle

2019-11-25 Thread Juan Pan
Hi everyone,


After migrating Calcite to Gradle, I built Calcite following the docs[1] and got 
the following exception. As I have no idea how to handle it, I am sending this 
help-seeking email.


Thanks in advance.


Best wishes,
Trista


[1] https://calcite.apache.org/docs/howto.html#building-from-git




org.apache.calcite.adapter.file.FileReaderTest > testJsonFile STANDARD_ERROR
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further 
details.
  5.2sec, org.apache.calcite.adapter.file.FileReaderTest > testJsonFile
FAILURE   0.6sec, org.apache.calcite.adapter.file.FileReaderTest > 
testFileReaderUrlNoPath


org.apache.calcite.adapter.file.FileReaderTest > testFileReaderUrlNoPath FAILED
org.apache.calcite.adapter.file.FileReaderException: Cannot read 
//en.wikipedia.org/wiki/List_of_states_and_territories_of_the_United_States
at 
org.apache.calcite.adapter.file.FileReader.getTable(FileReader.java:72)
at 
org.apache.calcite.adapter.file.FileReader.refresh(FileReader.java:131)
at 
org.apache.calcite.adapter.file.FileReaderTest.testFileReaderUrlNoPath(FileReaderTest.java:80)


Caused by:
java.net.SocketException: Connection reset
at java.net.SocketInputStream.read(SocketInputStream.java:210)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:735)
at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:678)
at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:706)
at 
sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1587)
at 
sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1492)
at 
java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:480)
at 
org.jsoup.helper.HttpConnection$Response.execute(HttpConnection.java:750)
at 
org.jsoup.helper.HttpConnection$Response.execute(HttpConnection.java:722)
at org.jsoup.helper.HttpConnection.execute(HttpConnection.java:306)
at org.jsoup.helper.HttpConnection.get(HttpConnection.java:295)
at org.jsoup.Jsoup.parse(Jsoup.java:183)
at 
org.apache.calcite.adapter.file.FileReader.getTable(FileReader.java:69)
... 2 more
FAILURE  28.0sec,   15 completed,   1 failed,   2 skipped, 
org.apache.calcite.adapter.file.FileReaderTest
WARNING   4.3sec,   16 completed,   0 failed,   1 skipped, 
org.apache.calcite.adapter.file.SqlTest
FAILURE  34.0sec,   31 completed,   1 failed,   3 skipped, Gradle Test Run 
:file:test


31 tests completed, 1 failed, 3 skipped


> Task :file:test FAILED


 Juan Pan


panj...@apache.org
Juan Pan(Trista), Apache ShardingSphere



Re: [DISCUSS] State of the project 2019

2019-10-29 Thread Juan Pan
Sorry to disturb the others.


 @Danny Chan Hi, I have not received your personal mail, and I sent you an 
email (yuzhao@gmail.com?) as well, but got no reply. :(



So I have to ping you this way; please excuse me.






 Juan Pan


panj...@apache.org
Juan Pan(Trista), Apache ShardingSphere


On 10/25/2019 20:41,Danny Chan wrote:
Oh, you can add me on WeChat (send a personal mail for that), and I have a free 
ticket for the conference!

Best,
Danny Chan
On Oct 25, 2019 at 6:29 PM +0800, Juan Pan wrote:
Hi Danny,


I am interested in your upcoming talk in Beijing, China. How can I take part 
in it? Can you give me more detail?


Juan Pan


panj...@apache.org
Juan Pan(Trista), Apache ShardingSphere


On 10/23/2019 18:23,Danny Chan wrote:
I gave a talk last year in a university in
France, and nobody in the audience had ever heard of Calcite before.

Oops, that's a pity. I will also give a talk about Calcite at Flink Forward 
Asia 2019 in Beijing, China; I hope more people will get to know Apache Calcite.

Best,
Danny Chan
On Oct 23, 2019 at 2:36 PM +0800, dev@calcite.apache.org wrote:

I gave a talk last year in a university in
France, and nobody in the audience had ever heard of Calcite before.


Re: [DISCUSS] State of the project 2019

2019-10-25 Thread Juan Pan
Hi Danny,


I am interested in your upcoming talk in Beijing, China. How can I take part 
in it? Can you give me more detail?


 Juan Pan


panj...@apache.org
Juan Pan(Trista), Apache ShardingSphere


On 10/23/2019 18:23,Danny Chan wrote:
I gave a talk last year in a university in
France, and nobody in the audience had ever heard of Calcite before.

Oops, that's a pity. I will also give a talk about Calcite at Flink Forward 
Asia 2019 in Beijing, China; I hope more people will get to know Apache Calcite.

Best,
Danny Chan
On Oct 23, 2019 at 2:36 PM +0800, dev@calcite.apache.org wrote:

I gave a talk last year in a university in
France, and nobody in the audience had ever heard of Calcite before.


Re: [DISCUSS] State of the project 2019

2019-10-22 Thread Juan Pan
Hi everyone,


Actually, I think it's time to have my say after receiving the email titled 
`Have your say!` from Julian Hyde.


As a person new to this project, I appreciate the community's help with my 
questions on the mailing list, and I am glad to see the community so active and 
harmonious. Other than that, it also gives me some new ideas for our incubator 
project. As of now, I am exploring Calcite, so if possible, I want to make some 
contributions to it as well.


Hope Apache Calcite gets better and better in the future.


Best wishes,
Trista




 Juan Pan


panj...@apache.org
Juan Pan(Trista), Apache ShardingSphere


On 10/23/2019 11:21,Chunwei Lei wrote:
Thank you for your great work, Francis!

In the past year, I have deeply felt that the Calcite community is becoming
more and more active, which means that more and more companies are starting
to use Apache Calcite. This is very exciting and encouraging.

Thanks to active contributors and committers, pull requests can currently be
reviewed and merged in a short time, which really helps a lot.

I also feel that there are quite a few tough legacy issues left. I hope we
can spend more time discussing them and finding solutions in the future.

Thanks!

Best,
Chunwei


On Wed, Oct 23, 2019 at 9:38 AM Danny Chan  wrote:

Thanks for the state summary, Francis Chuang! And thanks for the awesome
work keeping Calcite in good shape!

From my perspective, I really feel that Calcite is becoming more and more
popular and there are many new groups trying to use this great project. As
a reviewer, I saw many contributions from all kinds of people, and I feel
very proud of that!

For the last year, I did a lot of work to let Apache Flink and Calcite have
better integration, and I believe there will be more and more people making
contributions and making Calcite more pluggable and more suitable for
production environments!

I also very much enjoy the absolute harmony of the community; we fire up
interesting discussions on the mailing list and we did reach some valuable
conclusions (like the Join expression rework, the trait sets propagation, the
metadata, etc.). And I feel very happy and respected to work with you guys in
the community. Let's keep up the good communication and output more valuable
designs!

Thanks again to everyone !

Best,
Danny Chan
On Oct 22, 2019 at 10:22 AM +0800, Julian Hyde wrote:
I agree that we’ve made good progress on last year’s big problem, pull
requests languishing for too long. The situation has been better, because a
few people are putting in considerable effort reviewing. We still have some
ways to go, so let’s keep up the good work.

One of the successes of the year was to arrange release managers for
several releases in advance. Each of the individuals stepped up and did a
great job, and the release process was as smooth as it could possibly be
for a project of this size. Because the work was shared, I think no
individual felt that they were taking on an undue burden.

I also want to mention the fact that we now have an awesome logo. Thank
you for pushing that change through!

+1 for Stamatis as candidate for the next PMC chair. I was going to
propose him also.

I am proud that we have appointed a new person as PMC chair every year.
Each chair has brought a new perspective and energy to the role, and has
advanced the community. Francis is no exception, and he has kept Calcite in
good shape. Thank you, Francis!

We were discussing in another thread about whether we should cleave
Avatica into a more separate sub-project or top-level project. But I do
note that Francis came from the Avatica side of the project (Avatica Go, in
fact) and yet effortlessly and effectively spoke for the whole Calcite
project. So, it gives me hope that there is still cohesion between the
Calcite and Avatica communities.

Julian


On Oct 20, 2019, at 6:50 PM, Danny Chan  wrote:

Great work Francis! I’m + 1 for Stamatis being the PMC chair ~

Best,
Danny Chan
On Oct 21, 2019 at 6:47 AM +0800, dev@calcite.apache.org wrote:

Francis




Re: [QUESTION] One query executed on two or more different data storages through Calcite

2019-10-22 Thread Juan Pan
Hi Danny and Julian


Thanks, I did some research after listening to your suggestions. It does not 
seem an easy thing for me, but I will learn more about Calcite and Flink and 
think over your ideas.


Best wishes,
Trista


 Juan Pan


panj...@apache.org
Juan Pan(Trista), Apache ShardingSphere


On 10/22/2019 13:42,Danny Chan wrote:
You may need a computation framework like Apache Flink. Use MySQL and Cassandra 
as connector/dataSource and write the results to your sink.

Best,
Danny Chan
On Oct 22, 2019 at 10:36 AM +0800, Juan Pan wrote:
Hi everyone,


Thanks for your attention. I could not get a clear answer after reading most of 
the Calcite documentation, so I am sending this email for your suggestions.


Suppose there are two data storages, e.g. MySQL and Cassandra, behind Calcite, 
and the data is stored separately in the two of them. Can I execute a query, 
e.g. `SELECT * FROM tb WHERE id = 1`, simultaneously on both data storages 
through Calcite? In other words, I want to get the final combined result from 
MySQL and Cassandra, each of which stores part of the data in a different form.


Looking forward to your suggestions and thoughts.


Best wishes,
Trista


Juan Pan


panj...@apache.org
Juan Pan(Trista), Apache ShardingSphere



Re: [QUESTION] One query executed on two or more different data storages through Calcite

2019-10-21 Thread Juan Pan
Thanks for your reply. 


`SELECT * FROM tb WHERE id = 1`
can be converted into a `UNION ALL`, but I am worried about how to handle some 
aggregation SQLs, e.g. `SELECT AVG(NUM) FROM tb`.


 Juan Pan


panj...@apache.org
Juan Pan(Trista), Apache ShardingSphere


On 10/22/2019 11:04,Julian Hyde wrote:
Not currently, but it wouldn’t be too much work.

Consider a similar query:

SELECT * FROM mysqlTable
UNION ALL
SELECT * FROM cassandraTable

This would convert into an EnumerableUnion which would send sub-queries to the 
two back ends and combine the results.

You’d need a new relational operator which, I assume, would go with whichever 
result arrives first. A new sub-class of RelNode, perhaps similar to 
EnumerableUnion, or perhaps you could use a table-valued function.

Julian
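
For the AVG worry raised earlier in this thread: the usual trick is to
decompose the average into partial aggregates that each back end can compute,
and combine them on top. A sketch, where mysql_tb and cassandra_tb are
hypothetical names for the per-source fragments of tb:

SELECT SUM(s) / SUM(c) AS avg_num
FROM (SELECT SUM(num) AS s, COUNT(num) AS c FROM mysql_tb
      UNION ALL
      SELECT SUM(num) AS s, COUNT(num) AS c FROM cassandra_tb) AS t

Calcite has rules pointing in this direction: AggregateReduceFunctionsRule
rewrites AVG into SUM/COUNT, and AggregateUnionTransposeRule pushes partial
aggregates below a UNION ALL.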


On Oct 21, 2019, at 7:27 PM, Juan Pan  wrote:

Hi everyone,


Thanks for your attention. I could not get a clear answer after reading most of 
the Calcite documentation, so I am sending this email for your suggestions.


Suppose there are two data storages, e.g. MySQL and Cassandra, behind Calcite, 
and the data is stored separately in the two of them. Can I execute a query, 
e.g. `SELECT * FROM tb WHERE id = 1`, simultaneously on both data storages 
through Calcite? In other words, I want to get the final combined result from 
MySQL and Cassandra, each of which stores part of the data in a different form.


Looking forward to your suggestions and thoughts.


Best wishes,
Trista


Juan Pan


panj...@apache.org
Juan Pan(Trista), Apache ShardingSphere



[QUESTION] One query executed on two or more different data storages through Calcite

2019-10-21 Thread Juan Pan
Hi everyone,


Thanks for your attention. I could not get a clear answer after reading most of 
the Calcite documentation, so I am sending this email for your suggestions.


Suppose there are two data storages, e.g. MySQL and Cassandra, behind Calcite, 
and the data is stored separately in the two of them. Can I execute a query, 
e.g. `SELECT * FROM tb WHERE id = 1`, simultaneously on both data storages 
through Calcite? In other words, I want to get the final combined result from 
MySQL and Cassandra, each of which stores part of the data in a different form.


Looking forward to your suggestions and thoughts.


Best wishes,
Trista


 Juan Pan


panj...@apache.org
Juan Pan(Trista), Apache ShardingSphere



Re: Ignite community is building Calcite-based prototype

2019-10-05 Thread Juan Pan
"pay good for good”, so cool!
Calcite is great, which our project is using. And now i become interested in 
Ignite.


Best wishes.


 Juan Pan


panj...@apache.org
Juan Pan(Trista), Apache ShardingSphere


On 10/2/2019 06:43,Denis Magda wrote:
Hi Julian,

Nice to e-meet you and thanks for being ready to help! Hopefully, the
Ignite community will be able to contribute valuable changes back to
Calcite as part of this activity - "pay good for good" :)

You are right that distributed computing, massive-parallel processing, and
calculations/querying at scale is what Ignite is targeted for. However,
while Drill is designed for analytics and IoTDB is for time-series, Ignite
is primarily used for OLTP with an increasing number of real-time analytics
use cases (no ad hoc).

Let's stay in touch!

-
Denis


On Tue, Oct 1, 2019 at 6:42 AM Julian Feinauer 
wrote:

Hi Igor,

I agree that it should be rather similar to what Drill did as distributed
computing also is a big concern for Ignite, I guess, right?

Julian

Am 01.10.19, 15:06 schrieb "Seliverstov Igor" :

Guys,

The better link:
https://cwiki.apache.org/confluence/display/IGNITE/IEP-37%3A+New+query+execution+engine


Almost everything you may see at the link is the same as what the Drill guys
already did; the difference is in the details, but the idea is the same.

Of course we'll face many issues during development, and I'll appreciate it
if some of you assist us.

Regards,
Igor

On Oct 1, 2019, at 12:32, Julian Feinauer <j.feina...@pragmaticminds.de> wrote:

Hi Denis,

Nice to hear from you and the ignite team... that sounds like an
excellent idea. I liked the idea of Ignite since I heard about it (I think
when it became TLP back then). So I would be happy to help you if you have
specific questions... I'm currently working on a related topic, namely
integrating Calcite as a SQL layer into Apache IoTDB.

Best
Julian

Get Outlook for iOS <https://aka.ms/o0ukef>

From: Denis Magda 
Sent: Tuesday, October 1, 2019 2:37:20 AM
To: dev@calcite.apache.org ; dev <d...@ignite.apache.org>
Subject: Ignite community is building Calcite-based prototype

Hey ASF-mates,

Just wanted to send a note for the Ignite dev community, who has started
prototyping a new Ignite SQL engine
(http://apache-ignite-developers.2346864.n4.nabble.com/New-SQL-execution-engine-td43724.html),
and Calcite was selected as the most favorable option.

We will truly appreciate it if you help us with questions that might hit
your dev list. Ignite folks have already studied Calcite well enough and
carried on with the integration, but there might be tricky parts that would
require your expertise.

Btw, if anybody is interested in Ignite (memory-centric database and
compute platform) or would like to learn more details about the prototype
or join its development, please check these links or send us a note:

- https://ignite.apache.org
-

https://cwiki.apache.org/confluence/display/IGNITE/IEP-33%3A+New+SQL+executor+engine+infrastructure


-
Denis,
Ignite PMC Chair






Re: How to get columnName as `COUNT(*)` , not `EXPR$0`

2019-09-29 Thread Juan Pan
Sorry, it is CALCITE-3261.


 Juan Pan


panj...@apache.org
Juan Pan(Trista), Apache ShardingSphere


On 09/29/2019 16:29,Juan Pan wrote:
Thanks Danny,
Got it. I will watch CALCITE-326.


Regards,
Trista


Juan Pan


panj...@apache.org
Juan Pan(Trista), Apache ShardingSphere


On 09/29/2019 16:15,Danny Chan wrote:
No worries, Juan Pan, welcome to contribute to Apache Calcite.

Calcite always puts the JIRA issues in the first place instead of the GitHub 
page because it is really good for bug/problem tracing.

If you have any questions or want to discuss something, welcome to send mail 
into the DEV mailing list.

If it is a known bug or an improvement, feel free to file a JIRA issue and we 
will move the discussion there. The committers will help you, and what you need 
to do is describe your problems/cases clearly in the JIRA issue.

Best,
Danny Chan
On Sep 29, 2019 at 3:23 PM +0800, Juan Pan wrote:
Actually, I think this problem should already have been raised by others, for 
it is obvious enough. But I visited Calcite's GitHub and could not find the 
issue list, so I sent this email. Yes, Calcite is using JIRA for issues, I get 
it now.


Given that Calcite implements the interfaces of ResultSetMetaData, ResultSet 
and so on, it should return the real columnName or columnLabel from the SQL, 
not a parsed expression name, which is somewhat... strange to users. When I 
first got the result `EXPR$0`, I doubted whether my program had gone wrong.


Recently, I have been exploring Calcite, and I'd like to make some 
contributions to the Calcite community if I can. But... for a newcomer, it 
seems difficult.


Regard,
Trista


Juan Pan


panj...@apache.org
Juan Pan(Trista), Apache ShardingSphere


On 09/29/2019 14:48,Danny Chan wrote:
There is already a JIRA issue to trace this problem[1], maybe we can move the 
discussion to there.

[1] https://issues.apache.org/jira/browse/CALCITE-3261

Best,
Danny Chan
On Sep 29, 2019 at 11:39 AM +0800, Juan Pan wrote:


Hi everyone,


I executed SQL `select count(*) from tb1` through Calcite and 
resultSet.getMetaData().getColumnName(i) in my project. But the result is 
`EXPR$0` not `COUNT(*)`.


Is there any way to get real columnName?


Thanks for your attention.


Regard,
Trista




Juan Pan


panj...@apache.org
Juan Pan(Trista), Apache ShardingSphere



Re: How to get columnName as `COUNT(*)` , not `EXPR$0`

2019-09-29 Thread Juan Pan
Thanks Danny,
Got it. I will watch CALCITE-326.


Regards,
Trista


 Juan Pan


panj...@apache.org
Juan Pan(Trista), Apache ShardingSphere


On 09/29/2019 16:15,Danny Chan wrote:
No worries, Juan Pan, welcome to contribute to Apache Calcite.

Calcite always puts the JIRA issues in the first place instead of the GitHub 
page because it is really good for bug/problem tracing.

If you have any questions or want to discuss something, welcome to send mail 
into the DEV mailing list.

If it is a known bug or an improvement, feel free to file a JIRA issue and we 
will move the discussion there. The committers will help you, and what you need 
to do is describe your problems/cases clearly in the JIRA issue.

Best,
Danny Chan
On Sep 29, 2019 at 3:23 PM +0800, Juan Pan wrote:
Actually, I think this problem should already have been raised by others, for 
it is obvious enough. But I visited Calcite's GitHub and could not find the 
issue list, so I sent this email. Yes, Calcite is using JIRA for issues, I get 
it now.


Given that Calcite implements the interfaces of ResultSetMetaData, ResultSet 
and so on, it should return the real columnName or columnLabel from the SQL, 
not a parsed expression name, which is somewhat... strange to users. When I 
first got the result `EXPR$0`, I doubted whether my program had gone wrong.


Recently, I have been exploring Calcite, and I'd like to make some 
contributions to the Calcite community if I can. But... for a newcomer, it 
seems difficult.


Regard,
Trista


Juan Pan


panj...@apache.org
Juan Pan(Trista), Apache ShardingSphere


On 09/29/2019 14:48,Danny Chan wrote:
There is already a JIRA issue to trace this problem[1], maybe we can move the 
discussion to there.

[1] https://issues.apache.org/jira/browse/CALCITE-3261

Best,
Danny Chan
On Sep 29, 2019 at 11:39 AM +0800, Juan Pan wrote:


Hi everyone,


I executed SQL `select count(*) from tb1` through Calcite and 
resultSet.getMetaData().getColumnName(i) in my project. But the result is 
`EXPR$0` not `COUNT(*)`.


Is there any way to get real columnName?


Thanks for your attention.


Regard,
Trista




Juan Pan


panj...@apache.org
Juan Pan(Trista), Apache ShardingSphere



Re: How to get columnName as `COUNT(*)` , not `EXPR$0`

2019-09-29 Thread Juan Pan
Actually, I think this problem should already have been raised by others, for 
it is obvious enough. But I visited Calcite's GitHub and could not find the 
issue list, so I sent this email. Yes, Calcite is using JIRA for issues, I get 
it now.


Given that Calcite implements the interfaces of ResultSetMetaData, ResultSet 
and so on, it should return the real columnName or columnLabel from the SQL, 
not a parsed expression name, which is somewhat... strange to users. When I 
first got the result `EXPR$0`, I doubted whether my program had gone wrong.


Recently, I have been exploring Calcite, and I'd like to make some 
contributions to the Calcite community if I can. But... for a newcomer, it 
seems difficult.


Regard,
Trista


 Juan Pan


panj...@apache.org
Juan Pan(Trista), Apache ShardingSphere


On 09/29/2019 14:48,Danny Chan wrote:
There is already a JIRA issue to trace this problem[1], maybe we can move the 
discussion to there.

[1] https://issues.apache.org/jira/browse/CALCITE-3261

Best,
Danny Chan
On Sep 29, 2019 at 11:39 AM +0800, Juan Pan wrote:


Hi everyone,


I executed SQL `select count(*) from tb1` through Calcite and 
resultSet.getMetaData().getColumnName(i) in my project. But the result is 
`EXPR$0` not `COUNT(*)`.


Is there any way to get real columnName?


Thanks for your attention.


Regard,
Trista




Juan Pan


panj...@apache.org
Juan Pan(Trista), Apache ShardingSphere



Re: How to get columnName as `COUNT(*)` , not `EXPR$0`

2019-09-28 Thread Juan Pan
Hi XING,
I appreciate your kindness. :-D Your detailed and prompt replies really helped 
me a lot.
I will review the Javadoc you mentioned.


Best wishes,
Trista


 Juan Pan


panj...@apache.org
Juan Pan(Trista), Apache ShardingSphere


On 09/29/2019 13:58,XING JIN wrote:
You can check the below doc of SqlValidatorUtil#getAlias for explanation:

/**
* Derives an alias for a node, and invents a mangled identifier if it
* cannot.
*
* Examples:
*
* 
* Alias: "1 + 2 as foo" yields "foo"
* Identifier: "foo.bar.baz" yields "baz"
* Anything else yields "expr$ordinal"
* 
*
* @return An alias, if one can be derived; or a synthetic alias
* "expr$ordinal" if ordinal >= 0; otherwise null
*/
public static String getAlias(SqlNode node, int ordinal)

But from my experience, you'd better not rely on the above logic heavily. If
you really care about the output name, just give it an alias explicitly.
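
For example (a sketch; note that under the default lex an unquoted alias is
stored upper-case, so quote it if the exact spelling matters):

select count(*) as cnt from tb1
-- getMetaData().getColumnName(1) now returns "CNT"

select count(*) as "COUNT(*)" from tb1
-- returns the label COUNT(*) verbatim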

On Sun, Sep 29, 2019 at 1:27 PM, Juan Pan wrote:

That means Calcite can only return the real columnName or columnLabel for a
simple column or an alias; and for any aggregate function or calculated
expression without an alias, a derived name, i.e. `EXPR$0`, will be returned?


Juan Pan


panj...@apache.org
Juan Pan(Trista), Apache ShardingSphere


On 09/29/2019 13:16,XING JIN wrote:
If no column name is given explicitly, e.g. by an alias or a simple identifier,
Calcite will derive one, but not from the aggregate function.

On Sun, Sep 29, 2019 at 1:12 PM, Juan Pan wrote:

Thanks for your reply. It is an indirect way to get the columnName.


Calcite cannot return the real columnName from the SQL; is that right?


Juan Pan


panj...@apache.org
Juan Pan(Trista), Apache ShardingSphere


On 09/29/2019 12:21,XING JIN wrote:
You can try to give an alias for the selected column.

On Sun, Sep 29, 2019 at 11:39 AM, Juan Pan wrote:



Hi everyone,


I executed SQL `select count(*) from tb1` through Calcite and
resultSet.getMetaData().getColumnName(i) in my project. But the result is
`EXPR$0` not `COUNT(*)`.


Is there any way to get real columnName?


Thanks for your attention.


Regard,
Trista




Juan Pan


panj...@apache.org
Juan Pan(Trista), Apache ShardingSphere






Re: How to get columnName as `COUNT(*)` , not `EXPR$0`

2019-09-28 Thread Juan Pan
That means Calcite can only return the real columnName or columnLabel for a 
simple column or an alias; and for any aggregate function or calculated 
expression without an alias, a derived name, i.e. `EXPR$0`, will be returned?


 Juan Pan


panj...@apache.org
Juan Pan(Trista), Apache ShardingSphere


On 09/29/2019 13:16,XING JIN wrote:
If no column name is given explicitly, e.g. by an alias or a simple identifier,
Calcite will derive one, but not from the aggregate function.

On Sun, Sep 29, 2019 at 1:12 PM, Juan Pan wrote:

Thanks for your reply. It is an indirect way to get the columnName.


Calcite cannot return the real columnName from the SQL; is that right?


Juan Pan


panj...@apache.org
Juan Pan(Trista), Apache ShardingSphere


On 09/29/2019 12:21,XING JIN wrote:
You can try to give an alias for the selected column.

On Sun, Sep 29, 2019 at 11:39 AM, Juan Pan wrote:



Hi everyone,


I executed SQL `select count(*) from tb1` through Calcite and
resultSet.getMetaData().getColumnName(i) in my project. But the result is
`EXPR$0` not `COUNT(*)`.


Is there any way to get real columnName?


Thanks for your attention.


Regard,
Trista




Juan Pan


panj...@apache.org
Juan Pan(Trista), Apache ShardingSphere





Re: How to get columnName as `COUNT(*)` , not `EXPR$0`

2019-09-28 Thread Juan Pan
Thanks for your reply. It is an indirect way to get the columnName. 


Calcite cannot return the real columnName from the SQL; is that right?


 Juan Pan


panj...@apache.org
Juan Pan(Trista), Apache ShardingSphere


On 09/29/2019 12:21,XING JIN wrote:
You can try to give an alias for the selected column.

On Sun, Sep 29, 2019 at 11:39 AM, Juan Pan wrote:



Hi everyone,


I executed SQL `select count(*) from tb1` through Calcite and
resultSet.getMetaData().getColumnName(i) in my project. But the result is
`EXPR$0` not `COUNT(*)`.


Is there any way to get real columnName?


Thanks for your attention.


Regard,
Trista




Juan Pan


panj...@apache.org
Juan Pan(Trista), Apache ShardingSphere




How to get columnName as `COUNT(*)` , not `EXPR$0`

2019-09-28 Thread Juan Pan


Hi everyone,


I executed SQL `select count(*) from tb1` through Calcite and 
resultSet.getMetaData().getColumnName(i) in my project. But the result is 
`EXPR$0` not `COUNT(*)`. 


Is there any way to get real columnName?


Thanks for your attention.


Regard,
Trista




 Juan Pan


panj...@apache.org
Juan Pan(Trista), Apache ShardingSphere



Re: How to modify data for custom tables through Calcite.

2019-09-26 Thread Juan Pan
Thanks, Danny


I will give it a try, but it seems challenging, I think…


Regards,
Trista


 Juan Pan


panj...@apache.org
Juan Pan(Trista), Apache ShardingSphere


On 09/26/2019 14:19,Danny Chan wrote:
The ElasticsearchTableScan is a good start to show how it transfers the elastic 
nodes from Convention.NONE to ElasticsearchRel.CONVENTION [1]

[1] 
https://github.com/apache/calcite/blob/c9adf94b0e07f2e9108ef4d1f2ee28c3e42063b3/elasticsearch/src/main/java/org/apache/calcite/adapter/elasticsearch/ElasticsearchTableScan.java#L79

Best,
Danny Chan
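
For orientation, the overall shape of such a converter rule is roughly as
follows (a sketch only; MyRel, MyTableScan and MY_CONVENTION are hypothetical
stand-ins for your own convention and operators, and constructor signatures
vary slightly across Calcite versions):

import org.apache.calcite.plan.Convention;
import org.apache.calcite.plan.RelOptCluster;
import org.apache.calcite.plan.RelOptTable;
import org.apache.calcite.plan.RelTraitSet;
import org.apache.calcite.rel.RelNode;
import org.apache.calcite.rel.convert.ConverterRule;
import org.apache.calcite.rel.core.TableScan;
import org.apache.calcite.rel.logical.LogicalTableScan;

public class MyTableScanRule extends ConverterRule {
  /** Marker interface for relational operators in the target convention. */
  public interface MyRel extends RelNode {}

  /** The target calling convention. */
  public static final Convention MY_CONVENTION =
      new Convention.Impl("MY", MyRel.class);

  /** A scan living in the target convention. */
  public static class MyTableScan extends TableScan implements MyRel {
    public MyTableScan(RelOptCluster cluster, RelTraitSet traitSet,
        RelOptTable table) {
      super(cluster, traitSet, table);
    }
  }

  public MyTableScanRule() {
    super(LogicalTableScan.class, Convention.NONE, MY_CONVENTION,
        "MyTableScanRule");
  }

  @Override public RelNode convert(RelNode rel) {
    final LogicalTableScan scan = (LogicalTableScan) rel;
    // Re-create the scan in the target convention, as the Elasticsearch
    // adapter's rules do in the class linked above.
    return new MyTableScan(scan.getCluster(),
        scan.getTraitSet().replace(MY_CONVENTION), scan.getTable());
  }
}

Register the rule with the planner and request MY_CONVENTION in the desired
trait set, and the planner will route plans through your operators.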
On Sep 26, 2019 at 12:13 PM +0800, Juan Pan wrote:
@Danny Chan



Thanks Danny, is there any document or test from which I can learn more about 
the `specific convention`?


Regards,
Trista


Juan Pan


panj...@apache.org
Juan Pan(Trista), Apache ShardingSphere


On 09/26/2019 12:02,Danny Chan wrote:
@Rui Wang, yes, I wrote the flink-sql-parser module; it did support the insert 
grammar well.

@Juan Pan, you need converter rules to convert all the nodes to the specific 
convention you want; also specify the desired convention in the trait set of 
your planning program.

Best,
Danny Chan
On Sep 26, 2019 at 6:04 AM +0800, Rui Wang wrote:
Another data point is that both BeamSQL and FlinkSQL support DDL in an
extensible way (and I believe it works through Avatica as well).

BeamSQL: [1]
FlinkSQL: [2]


Calcite allows adding customized DDL in the parser, and the schema is also
accessible in the implementation.

[1]:
https://github.com/apache/beam/blob/master/sdks/java/extensions/sql/src/main/codegen/includes/parserImpls.ftl#L149
[2]:
https://github.com/apache/flink/blob/master/flink-table/flink-sql-parser/src/main/codegen/data/Parser.tdd#L430

-Rui

On Wed, Sep 25, 2019 at 2:54 PM Stamatis Zampetakis 
wrote:

Hi Trista,

I think the server module is doing what you are asking for. Have a look in
ServerTest [1].
As Gelbana mentioned the implementation is based on implementations of the
ModifiableTable interface.

Best,
Stamatis

[1]

https://github.com/apache/calcite/blob/master/server/src/test/java/org/apache/calcite/test/ServerTest.java

On Wed, Sep 25, 2019 at 11:29 PM Mohamed Mohsen 
wrote:

I haven't done that before but I would start investigating from this
interface [1]. Please share your experience if you get this done.

[1] org.apache.calcite.schema.ModifiableTable


On Wed, Sep 25, 2019 at 2:00 PM Juan Pan  wrote:

Hi everyone,


Thanks for your attention. I want to know whether the following description
is right or not:


"Modification has only been worked on for JDBC tables, not for any
custom
tables currently.”


Querying a custom table with SQL is OK, so I am wondering whether I can
execute `update/insert/delete` SQL through Calcite on custom tables.


Can anyone give me some ideas?


Really thanks for your help.


Regards,
Trista






Juan Pan


panj...@apache.org
Juan Pan(Trista), Apache ShardingSphere






Re: How to modify data for custom tables through Calcite.

2019-09-25 Thread Juan Pan
 @Danny Chan 



Thanks Danny, is there any document or test from which I can learn more about 
the `specific convention`?


Regards,
Trista


 Juan Pan


panj...@apache.org
Juan Pan(Trista), Apache ShardingSphere


On 09/26/2019 12:02,Danny Chan wrote:
@Rui Wang, yes, I wrote the flink-sql-parser module; it did support the insert 
grammar well.

@Juan Pan, you need converter rules to convert all the nodes to the specific 
convention you want; also specify the desired convention in the trait set of 
your planning program.

Best,
Danny Chan
On Sep 26, 2019 at 6:04 AM +0800, Rui Wang wrote:
Another data point is that both BeamSQL and FlinkSQL support DDL in an
extensible way (and I believe it works through Avatica as well).

BeamSQL: [1]
FlinkSQL: [2]


Calcite allows adding customized DDL in the parser, and the schema is also
accessible in the implementation.

[1]:
https://github.com/apache/beam/blob/master/sdks/java/extensions/sql/src/main/codegen/includes/parserImpls.ftl#L149
[2]:
https://github.com/apache/flink/blob/master/flink-table/flink-sql-parser/src/main/codegen/data/Parser.tdd#L430

-Rui

On Wed, Sep 25, 2019 at 2:54 PM Stamatis Zampetakis 
wrote:

Hi Trista,

I think the server module is doing what you are asking for. Have a look in
ServerTest [1].
As Gelbana mentioned the implementation is based on implementations of the
ModifiableTable interface.

Best,
Stamatis

[1]

https://github.com/apache/calcite/blob/master/server/src/test/java/org/apache/calcite/test/ServerTest.java

On Wed, Sep 25, 2019 at 11:29 PM Mohamed Mohsen 
wrote:

I haven't done that before but I would start investigating from this
interface [1]. Please share your experience if you get this done.

[1] org.apache.calcite.schema.ModifiableTable


On Wed, Sep 25, 2019 at 2:00 PM Juan Pan  wrote:

Hi everyone,


Thanks for your attention. I want to know whether the following description
is right or not:


"Modification has only been worked on for JDBC tables, not for any
custom
tables currently.”


Querying a custom table with SQL is OK, so I am wondering whether I can
execute `update/insert/delete` SQL through Calcite on custom tables.


Can anyone give me some ideas?


Really thanks for your help.


Regards,
Trista






Juan Pan


panj...@apache.org
Juan Pan(Trista), Apache ShardingSphere






Re: How to modify data for custom tables through Calcite.

2019-09-25 Thread Juan Pan
:675)
at 
org.apache.calcite.avatica.AvaticaStatement.executeInternal(AvaticaStatement.java:156)
... 27 more




Also, after reading the doc again, `DDL extensions are only available in the 
calcite-server module.` is stated in the doc. `update/insert/delete` are DML, I 
think; is it necessary to use the calcite-server module for those DML 
statements?


Thanks for your help; I look forward to hearing from you. :)


Regards,
Trista






 Juan Pan


panj...@apache.org
Juan Pan(Trista), Apache ShardingSphere


On 09/26/2019 05:54,Stamatis Zampetakis wrote:
Hi Trista,

I think the server module is doing what you are asking for. Have a look in
ServerTest [1].
As Gelbana mentioned the implementation is based on implementations of the
ModifiableTable interface.

Best,
Stamatis

[1]
https://github.com/apache/calcite/blob/master/server/src/test/java/org/apache/calcite/test/ServerTest.java

On Wed, Sep 25, 2019 at 11:29 PM Mohamed Mohsen  wrote:

I haven't done that before but I would start investigating from this
interface [1]. Please share your experience if you get this done.

[1] org.apache.calcite.schema.ModifiableTable
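
To make that concrete, here is a minimal sketch of the interface in action:
an in-memory table backed by a List, so that rows INSERTed through Calcite
land in the collection. The class name and columns are made up; the
overridden methods are Calcite's real API:

import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import org.apache.calcite.adapter.java.AbstractQueryableTable;
import org.apache.calcite.linq4j.Linq4j;
import org.apache.calcite.linq4j.QueryProvider;
import org.apache.calcite.linq4j.Queryable;
import org.apache.calcite.plan.RelOptCluster;
import org.apache.calcite.plan.RelOptTable;
import org.apache.calcite.prepare.Prepare;
import org.apache.calcite.rel.RelNode;
import org.apache.calcite.rel.core.TableModify;
import org.apache.calcite.rel.logical.LogicalTableModify;
import org.apache.calcite.rel.type.RelDataType;
import org.apache.calcite.rel.type.RelDataTypeFactory;
import org.apache.calcite.rex.RexNode;
import org.apache.calcite.schema.ModifiableTable;
import org.apache.calcite.schema.SchemaPlus;
import org.apache.calcite.sql.type.SqlTypeName;

public class MemTable extends AbstractQueryableTable
    implements ModifiableTable {
  private final List<Object[]> rows = new ArrayList<>();

  public MemTable() {
    super(Object[].class);
  }

  @Override public RelDataType getRowType(RelDataTypeFactory typeFactory) {
    return typeFactory.builder()
        .add("ID", SqlTypeName.INTEGER)
        .add("NAME", SqlTypeName.VARCHAR)
        .build();
  }

  @SuppressWarnings("unchecked")
  @Override public <T> Queryable<T> asQueryable(QueryProvider queryProvider,
      SchemaPlus schema, String tableName) {
    return (Queryable<T>) Linq4j.asEnumerable(rows).asQueryable();
  }

  // Rows INSERTed through Calcite end up in this collection.
  @Override public Collection getModifiableCollection() {
    return rows;
  }

  // Plan DML as a plain LogicalTableModify; the enumerable convention
  // then writes to the collection above.
  @Override public TableModify toModificationRel(RelOptCluster cluster,
      RelOptTable table, Prepare.CatalogReader catalogReader, RelNode child,
      TableModify.Operation operation, List<String> updateColumnList,
      List<RexNode> sourceExpressionList, boolean flattened) {
    return LogicalTableModify.create(table, catalogReader, child, operation,
        updateColumnList, sourceExpressionList, flattened);
  }
}

Once such a table is registered in a schema, an `INSERT INTO` planned by
Calcite ends up writing to the collection; the server module's ServerTest,
mentioned above, exercises broader DML coverage.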


On Wed, Sep 25, 2019 at 2:00 PM Juan Pan  wrote:

Hi everyone,


Thanks for your attention. I want to know whether the following description
is right or not:


"Modification has only been worked on for JDBC tables, not for any custom
tables currently.”


Querying a custom table with SQL is OK, so I am wondering whether I can execute
`update/insert/delete` SQL through Calcite on custom tables.


Can anyone give me some ideas?


Really thanks for your help.


Regards,
Trista






Juan Pan


panj...@apache.org
Juan Pan(Trista), Apache ShardingSphere





How to modify data for custom tables through Calcite.

2019-09-25 Thread Juan Pan
Hi everyone,


Thanks for your attention. I want to know whether the following description is 
right or not:


"Modification has only been worked on for JDBC tables, not for any custom 
tables currently.”


Querying a custom table with SQL is OK, so I am wondering whether I can execute 
`update/insert/delete` SQL through Calcite on custom tables.


Can anyone give me some ideas?


Really thanks for your help.


Regards,
Trista






 Juan Pan


panj...@apache.org
Juan Pan(Trista), Apache ShardingSphere



Re: Is it possible that unquoted identifiers are not implicitly converted to upper case

2019-09-12 Thread Juan Pan
Hi Feng,


You're right, I get the same result with your suggestion, and either of the 
following expressions is OK.


1. properties.put(CalciteConnectionProperty.LEX.camelName(), "MYSQL");
2. properties.put("lex", "MYSQL");
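
For future readers, a minimal working sketch combining the two (the model
path is a placeholder; needs java.sql.* and
org.apache.calcite.config.CalciteConnectionProperty):

Properties properties = new Properties();
// String key and String value; this is what the URL form lex=MYSQL expands to.
properties.put(CalciteConnectionProperty.LEX.camelName(), "MYSQL");
try (Connection connection = DriverManager.getConnection(
         "jdbc:calcite:model=/path/to/model.json", properties);
     Statement statement = connection.createStatement()) {
    statement.executeQuery("select name from test");
}

The earlier failure came from putting non-String objects (the enums
CalciteConnectionProperty.LEX and Lex.MYSQL) into the Properties map: JDBC
connection properties are read as strings, so non-String entries are
silently ignored.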




You’re familiar with Calcite :), and thanks for your help and kindness! ☺


Actually, we plan to develop a new feature, and I find Calcite is a great 
option to meet our needs.


Thanks to the Calcite community, and I hope our two communities can build a 
deeper connection.


P.S
Apache ShardingSphere (Incubating) is an open-source ecosystem consisting of a 
set of distributed database middleware solutions.




Juan Pan


panj...@apache.org
Juan Pan(Trista), Apache ShardingSphere


On 09/12/2019 17:32,Feng Zhu wrote:
Hi, Juan Pan:
You may find the logic in *UnregisteredDriver#connect(String url, Properties
info)*.
It just parses the key-value pairs in the URL's prefix and adds them into a
copy of "info".
Therefore, I think the config below
*properties.put(CalciteConnectionProperty.LEX, Lex.MYSQL);*
should be aligned with your first usage:
*properties.put("lex", "MYSQL");*

Juan Pan  wrote on Thu, Sep 12, 2019 at 2:23 PM:







Hi Feng,




Thanks for your prompt reply. :)




Lex is just what I want. But when I tried to use it, I encountered another
problem.




The first usage is OK, but the second one doesn't work. Though Lex is set in
different ways, the result should be the same, I think. Do I misunderstand?
Or is the second one a wrong usage?




The first usage:




CONNECTION_URL = "jdbc:calcite:lex=MYSQL;model="

try (Connection connection = DriverManager.getConnection(CONNECTION_URL);

Statement statement = connection.createStatement()) {

// do some things

}




The second usage:




CONNECTION_URL = "jdbc:calcite:model="

Properties properties = new Properties();

properties.put(CalciteConnectionProperty.LEX, Lex.MYSQL);

try (Connection connection = DriverManager.getConnection(CONNECTION_URL,
properties);

Statement statement = connection.createStatement()) {

// do some things

}




Thanks again for your kindness; waiting for your reply. :)




Regards,

Trista





Juan Pan


panj...@apache.org
Juan Pan(Trista), Apache ShardingSphere


On 09/11/2019 20:23,Feng Zhu wrote:
Hi, Juan Pan,
You can refer to Lex, which decides how identifiers are quoted and whether
they are converted to upper case:
org.apache.calcite.config.Lex

Regards

Juan Pan  wrote on Wed, Sep 11, 2019 at 8:05 PM:



Hi, all the committers and contributors,


This email is for your help.


I am now deep in Apache Calcite, and it's great. Now, I want to know
whether it is possible that unquoted identifiers are not implicitly
converted to upper case.


For example, a SQL is `select name from test`, when it was executed, an
exception is thrown:


org.apache.calcite.sql.validate.SqlValidatorException: Object 'TEST' not
found within 'memory'; did you mean 'test'?


I wonder whether there is any setting that can make `name` and `test` be
recognized correctly by Calcite without double quotes.


Thanks for your help.


Regards,
Trista
---
Email:panj...@apache.org
Juan Pan(Trista) Apache ShardingSphere





Re: Is it possible that unquoted identifiers are not implicitly converted to upper case

2019-09-11 Thread Juan Pan






Hi Feng,




Thanks for your prompt reply. :)




Lex is just what I want. But when I tried to use it, I encountered another
problem.




The first usage is OK, but the second one doesn't work. Though Lex is set in
different ways, the result should be the same, I think. Do I misunderstand?
Or is the second one a wrong usage?




The first usage:




CONNECTION_URL = "jdbc:calcite:lex=MYSQL;model="

try (Connection connection = DriverManager.getConnection(CONNECTION_URL);

 Statement statement = connection.createStatement()) {

// do some things

} 




The second usage:




CONNECTION_URL = "jdbc:calcite:model="

Properties properties = new Properties();

properties.put(CalciteConnectionProperty.LEX, Lex.MYSQL);

try (Connection connection = DriverManager.getConnection(CONNECTION_URL, 
properties);

 Statement statement = connection.createStatement()) {

// do some things

} 




Thanks again for your kindness; waiting for your reply. :)




Regards,

Trista





Juan Pan


panj...@apache.org
Juan Pan(Trista), Apache ShardingSphere


On 09/11/2019 20:23,Feng Zhu wrote:
Hi, Juan Pan,
You can refer to Lex, which decides how identifiers are quoted and whether
they are converted to upper case:
org.apache.calcite.config.Lex

Regards

Juan Pan  wrote on Wed, Sep 11, 2019 at 8:05 PM:



Hi, all the committers and contributors,


This email is for your help.


I am now deep in Apache Calcite, and it's great. Now, I want to know
whether it is possible that unquoted identifiers are not implicitly
converted to upper case.


For example, a SQL is `select name from test`, when it was executed, an
exception is thrown:


org.apache.calcite.sql.validate.SqlValidatorException: Object 'TEST' not
found within 'memory'; did you mean 'test'?


I wonder whether there is any setting that can make `name` and `test` be
recognized correctly by Calcite without double quotes.


Thanks for your help.


Regards,
Trista
-------
Email:panj...@apache.org
Juan Pan(Trista) Apache ShardingSphere




Is it possible that unquoted identifiers are not implicitly converted to upper case

2019-09-11 Thread Juan Pan


Hi, all the committers and contributors,


This email is for your help.


I am now deep in Apache Calcite, and it's great. Now, I want to know whether
it is possible that unquoted identifiers are not implicitly converted to
upper case.


For example, a SQL is `select name from test`, when it was executed, an 
exception is thrown:


org.apache.calcite.sql.validate.SqlValidatorException: Object 'TEST' not found 
within 'memory'; did you mean 'test'?


I wonder whether there is any setting that can make `name` and `test` be
recognized correctly by Calcite without double quotes.


Thanks for your help.


Regards,
Trista
---
Email:panj...@apache.org
Juan Pan(Trista) Apache ShardingSphere