Re: Need a JSON output of a physical tree

2020-12-13 Thread Muhammad Gelbana
I would use a visitor to traverse the optimized/physical plan.
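
For reference, Calcite can also serialize a plan to JSON directly: `RelOptUtil.dumpPlan` accepts `SqlExplainFormat.JSON`, and there is a `RelJsonWriter` that a `RelNode` can be passed to via `RelNode.explain`. If you do roll your own visitor, the shape is just a recursive descent; here is a loose, self-contained sketch on a hypothetical `PlanNode` type (not a Calcite class) standing in for `RelNode`:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical stand-in for a physical plan node; real code would
// visit RelNode (or just call RelOptUtil.dumpPlan with JSON format).
class PlanNode {
    final String name;
    final List<PlanNode> inputs;

    PlanNode(String name, PlanNode... inputs) {
        this.name = name;
        this.inputs = Arrays.asList(inputs);
    }

    // Visitor-style traversal: serialize this node, then recurse into inputs.
    String toJson() {
        String children = inputs.stream()
                .map(PlanNode::toJson)
                .collect(Collectors.joining(","));
        return "{\"relOp\":\"" + name + "\",\"inputs\":[" + children + "]}";
    }
}

public class PlanToJson {
    public static void main(String[] args) {
        PlanNode plan = new PlanNode("EnumerableCalc",
                new PlanNode("EnumerableTableScan"));
        // {"relOp":"EnumerableCalc","inputs":[{"relOp":"EnumerableTableScan","inputs":[]}]}
        System.out.println(plan.toJson());
    }
}
```

The same recursion works over `RelNode.getInputs()`; the built-in JSON writer additionally records traits, row types and digests.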



On Sun, Dec 13, 2020 at 6:42 AM Bhavya Aggarwal  wrote:

> Hi All,
>
> We need to generate a JSON object for the physical execution tree that has
> been created. Is there an option in Calcite that we can use to do this? I
> am not sure what the right approach is. Please let me know if
> there are different ways to achieve this.
>
> Regards
> Bhavya
>
> --
> Your feedback matters - At Knoldus we aim to be very professional in our
> quality of work, commitment to results, and proactive communication. If
> you
> feel otherwise please share your feedback
>  and we would work on it.
>


Re: limitations on the SQLs executed

2020-02-16 Thread Muhammad Gelbana
If your only concern is about memory utilization, I would try estimating
this using the plan's cost. But I guess you'll have to run some tests to
estimate the ranges you can accept.
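
One way to operationalize that cost-based gate is sketched below; every name and constant here is an invented illustration. In real Calcite code the row estimate would come from the plan's metadata (e.g. `RelMetadataQuery.getRowCount`) rather than being passed in directly:

```java
// Hypothetical pre-execution gate: refuse a join whose estimated in-memory
// footprint exceeds a configured budget. The row-width and budget numbers
// are illustrative assumptions, not Calcite defaults.
public class JoinMemoryGate {
    static final double BYTES_PER_ROW = 100.0;             // assumed average row width
    static final double MEMORY_BUDGET = 512 * 1024 * 1024; // assumed 512 MB budget

    // Estimated footprint of a join that buffers one side in memory.
    static double estimatedJoinBytes(double bufferedSideRows) {
        return bufferedSideRows * BYTES_PER_ROW;
    }

    static boolean allowJoin(double bufferedSideRows) {
        return estimatedJoinBytes(bufferedSideRows) <= MEMORY_BUDGET;
    }

    public static void main(String[] args) {
        System.out.println(allowJoin(1_000_000));   // true: ~100 MB fits the budget
        System.out.println(allowJoin(100_000_000)); // false: ~10 GB exceeds it
    }
}
```

As Muhammad notes, the thresholds themselves would have to come from testing against the deployment's actual memory limits.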


On Sun, Feb 16, 2020 at 5:50 PM Yang Liu  wrote:

> Is it possible to have some limitations on the SQLs to make sure our
> application, which depends on Calcite, is "safe"? For example, when doing a
> merge join between 2 large data sources, our application may OOM since the
> join is processed in memory. If we have a "limitation mechanism", we can
> refuse to execute the join to avoid the OOM.
>
> Or we can only do the check outside Calcite?
>
> Thanks
>


Re: Why Apache Spark doesn't use Calcite?

2020-01-13 Thread Muhammad Gelbana
Interesting question.

Someone told me Spark (started ~2012) didn't have SQL query support
(introduced ~2014) in mind at first; probably only Python-based jobs, so
Catalyst was enough at the time. That makes sense to me, but I can't confirm it.



On Mon, Jan 13, 2020 at 4:30 PM Michael Mior  wrote:

> This discussion on the Spark mailing list may be interesting to follow :)
>
> --
> Michael Mior
> mm...@apache.org
>
>
> -- Forwarded message -
> From: newroyker 
> Date: Mon, Jan 13, 2020 at 09:25
> Subject: Why Apache Spark doesn't use Calcite?
> To: 
>
>
> Was there a qualitative or quantitative benchmark done before a design
> decision was made not to use Calcite?
>
> Are there limitations (for heuristic based, cost based, * aware optimizer)
> in Calcite, and frameworks built on top of Calcite? In the context of big
> data / TPC-H benchmarks.
>
> I was unable to dig up anything concrete from user group / Jira. Appreciate
> if any Catalyst veteran here can give me pointers. Trying to defend
> Spark/Catalyst.
>
>
>
>
>
> --
> Sent from: http://apache-spark-user-list.1001560.n3.nabble.com/
>
> -
> To unsubscribe e-mail: user-unsubscr...@spark.apache.org
>


Re: Monthly online Calcite meetups

2020-01-06 Thread Muhammad Gelbana
Did we decide on a time yet?

On Sun, Dec 22, 2019 at 11:31 AM Muhammad Gelbana 
wrote:

> Here are my availability times in case we don't use Doodle. Do you think
> it's useful to record and save those meetings?
>
> Mohamed
> ==
> Option 1:  Sun-Thur 15:00-19:00
> Option 2: Fri 12:00-19:00
> Option 3: Sat 10:00 AM - 12:00
>
> On Sun, Dec 22, 2019 at 12:12 AM Muhammad Gelbana 
> wrote:
>
>> I love the idea. I added my availability times to Doodle. I'll do my
>> best to attend the meeting even if it's outside the ranges I specified.
>>
>>
>> On Sat, Dec 21, 2019 at 9:30 PM Vladimir Sitnikov <
>> sitnikov.vladi...@gmail.com> wrote:
>>
>>> Stamatis>To begin with we could try to hold a single meetup per month and
>>> see later
>>> Stamatis>on how it goes
>>>
>>> It might be nice to try, however, it did not survive long the last time
>>> :(
>>>
>>> Stamatis>The ranges should be rather large so that it is easier to find
>>> Stamatis>some overlapping among us
>>>
>>> An alternative option is to mark checkboxes here:
>>> https://doodle.com/poll/4xymswz842i8xat8
>>> Note: even though it says "22..28 Dec" I suggest to treat it as "sunday
>>> ..
>>> monday"
>>>
>>> Vladimir
>>>
>>


Re: Monthly online Calcite meetups

2019-12-22 Thread Muhammad Gelbana
Here are my availability times in case we don't use Doodle. Do you think
it's useful to record and save those meetings?

Mohamed
==
Option 1:  Sun-Thur 15:00-19:00
Option 2: Fri 12:00-19:00
Option 3: Sat 10:00 AM - 12:00

On Sun, Dec 22, 2019 at 12:12 AM Muhammad Gelbana 
wrote:

> I love the idea. I added my availability times to Doodle. I'll do my
> best to attend the meeting even if it's outside the ranges I specified.
>
>
> On Sat, Dec 21, 2019 at 9:30 PM Vladimir Sitnikov <
> sitnikov.vladi...@gmail.com> wrote:
>
>> Stamatis>To begin with we could try to hold a single meetup per month and
>> see later
>> Stamatis>on how it goes
>>
>> It might be nice to try, however, it did not survive long the last time :(
>>
>> Stamatis>The ranges should be rather large so that it is easier to find
>> Stamatis>some overlapping among us
>>
>> An alternative option is to mark checkboxes here:
>> https://doodle.com/poll/4xymswz842i8xat8
>> Note: even though it says "22..28 Dec" I suggest to treat it as "sunday ..
>> monday"
>>
>> Vladimir
>>
>


Re: Monthly online Calcite meetups

2019-12-21 Thread Muhammad Gelbana
I love the idea. I added my availability times to Doodle. I'll do my
best to attend the meeting even if it's outside the ranges I specified.


On Sat, Dec 21, 2019 at 9:30 PM Vladimir Sitnikov <
sitnikov.vladi...@gmail.com> wrote:

> Stamatis>To begin with we could try to hold a single meetup per month and
> see later
> Stamatis>on how it goes
>
> It might be nice to try, however, it did not survive long the last time :(
>
> Stamatis>The ranges should be rather large so that it is easier to find
> Stamatis>some overlapping among us
>
> An alternative option is to mark checkboxes here:
> https://doodle.com/poll/4xymswz842i8xat8
> Note: even though it says "22..28 Dec" I suggest to treat it as "sunday ..
> monday"
>
> Vladimir
>


Re: Quicksql

2019-12-21 Thread Muhammad Gelbana
cannot handle branching data-flow
> graphs (DAGs).
>
> The Interpreter uses a co-routine model (reading from queues,
> writing to queues, and yielding when there is no work to be done) and
> therefore could be more efficient than enumerable in a single-node
> multi-core system. Also, there is little start-up time, which is
> important for small queries.
>
> I would love to add another built-in convention that uses Arrow as
> data format and generates co-routines for each operator. Those
> co-routines could be deployed in a parallel and/or distributed data
> engine.
>
> Julian
>
> On Tue, Dec 10, 2019 at 3:47 AM Zoltan Farkas
>  wrote:
>
> What is the ultimate goal of the Calcite Interpreter?
>
> To provide some context, I have been playing around with calcite + REST
> (see https://github.com/zolyfarkas/jaxrs-spf4j-demo/wiki/AvroCalciteRest
> <
> https://github.com/zolyfarkas/jaxrs-spf4j-demo/wiki/AvroCalciteRest> for
> detail of my experiments)
>
>
> —Z
>
> On Dec 9, 2019, at 9:05 PM, Julian Hyde  wrote:
>
> Yes, virtualization is one of Calcite’s goals. In fact, when I created
> Calcite I was thinking about virtualization + in-memory materialized
> views.
> Not only the Spark convention but any of the “engine” conventions (Drill,
> Flink, Beam, Enumerable) could be used to create a virtual query engine.
>
> See e.g. a talk I gave in 2013 about Optiq (precursor to Calcite)
>
>
> https://www.slideshare.net/julianhyde/optiq-a-dynamic-data-management-framework
> <
>
>
> https://www.slideshare.net/julianhyde/optiq-a-dynamic-data-management-framework
> .
>
> Julian
>
>
>
> On Dec 9, 2019, at 2:29 PM, Muhammad Gelbana 
> wrote:
>
> I recently contacted one of the active contributors asking about the
> purpose of the project and here's his reply:
>
> From my understanding, Quicksql is a data virtualization platform. It
> can
> query multiple data sources altogether and in a distributed way;
> Say, you
> can write a SQL with a MySql table join with an Elasticsearch table.
> Quicksql can recognize that, and then generate Spark code, in which
> it will
> fetch the MySQL/ES data as a temporary table separately, and then
> join them
> in Spark. The execution is in Spark so it is totally distributed.
> The user
> doesn't need to be aware of where the table is from.
>
>
> I understand that Calcite's Spark convention attempts to achieve the
> same goal, but it isn't fully implemented yet.
>
>
> On Tue, Oct 29, 2019 at 9:43 PM Julian Hyde  wrote:
>
> Anyone know anything about Quicksql? It seems to be quite a popular
> project, and they have an internal fork of Calcite.
>
> https://github.com/Qihoo360/ <https://github.com/Qihoo360/>
>
>
>
>
>
> https://github.com/Qihoo360/Quicksql/tree/master/analysis/src/main/java/org/apache/calcite
> <
>
>
>
> https://github.com/Qihoo360/Quicksql/tree/master/analysis/src/main/java/org/apache/calcite
>
>
> Julian
>
>
>
>
>
>
>
>
>
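
The co-routine model Julian describes earlier in this thread — operators reading from queues, writing to queues, and yielding when there is no work — can be sketched very loosely as a single-threaded toy. Everything here (the `Operator` interface, the scheduler loop) is invented for illustration; it is not the Calcite Interpreter's actual API:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Toy queue-based operator pipeline: each operator drains its input queue,
// fills its output queue, and "yields" (returns false) when idle.
public class QueuePipeline {
    interface Operator {
        boolean work(); // true if the operator made progress this turn
    }

    static List<Integer> runPipeline(List<Integer> input) {
        ArrayDeque<Integer> source = new ArrayDeque<>(input);
        ArrayDeque<Integer> filtered = new ArrayDeque<>();
        List<Integer> sink = new ArrayList<>();

        // Filter operator: keep even values only.
        Operator filter = () -> {
            if (source.isEmpty()) return false; // no input: yield
            int row = source.poll();
            if (row % 2 == 0) filtered.add(row);
            return true;
        };
        // Sink operator: drain the filter's output queue.
        Operator collect = () -> {
            if (filtered.isEmpty()) return false;
            sink.add(filtered.poll());
            return true;
        };

        // Cooperative scheduler: round-robin until no operator makes
        // progress (note `|`, not `||`, so both operators run each turn).
        boolean progress = true;
        while (progress) {
            progress = filter.work() | collect.work();
        }
        return sink;
    }

    public static void main(String[] args) {
        System.out.println(runPipeline(Arrays.asList(1, 2, 3, 4, 5))); // [2, 4]
    }
}
```

A real engine would run the operators on separate cores and use bounded queues for back-pressure; the cheap start-up Julian mentions comes from there being no code generation step.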


Re: How to keep quotes in SqlIdentifier?

2019-12-14 Thread Muhammad Gelbana
I don't find it correct to have quotes, double-quotes, backticks or
anything other than the identifier name in the SqlNode's identifier. If you
need it, you can just add it yourself.

But why do you want the identifier this way? Maybe there is
another way to achieve your goal?


On Sat, Dec 14, 2019 at 4:03 AM 月宫的木马兔  wrote:

> Hi,
>
> I got the SqlNode by using SqlParser.parseQuery() (SQL below), but I
> cannot find the backtick in the SqlIdentifier
>
> Sql:
>
> SELECT `sql`,id1 FROM testdata
>
> Debug with IDEA:
>
>
>


Re: Quicksql

2019-12-09 Thread Muhammad Gelbana
I recently contacted one of the active contributors asking about the
purpose of the project and here's his reply:

> From my understanding, Quicksql is a data virtualization platform. It can
> query multiple data sources altogether and in a distributed way; Say, you
> can write a SQL with a MySql table join with an Elasticsearch table.
> Quicksql can recognize that, and then generate Spark code, in which it will
> fetch the MySQL/ES data as a temporary table separately, and then join them
> in Spark. The execution is in Spark so it is totally distributed. The user
> doesn't need to be aware of where the table is from.
>

I understand that Calcite's Spark convention attempts to achieve the
same goal, but it isn't fully implemented yet.


On Tue, Oct 29, 2019 at 9:43 PM Julian Hyde  wrote:

> Anyone know anything about Quicksql? It seems to be quite a popular
> project, and they have an internal fork of Calcite.
>
> https://github.com/Qihoo360/ 
>
>
> https://github.com/Qihoo360/Quicksql/tree/master/analysis/src/main/java/org/apache/calcite
> <
> https://github.com/Qihoo360/Quicksql/tree/master/analysis/src/main/java/org/apache/calcite
> >
>
> Julian
>
>


Re: IntelliJ and Gradle

2019-11-28 Thread Muhammad Gelbana
I haven't tried IntelliJ yet but I found this while going through Gradle's
manual [1]

https://docs.gradle.org/current/userguide/troubleshooting.html#sec:troubleshooting_ide_integration



On Tue, Nov 26, 2019 at 10:22 AM Vladimir Sitnikov <
sitnikov.vladi...@gmail.com> wrote:

> >IntelliJ frequently fails to correctly load a project
>
> It never happens to me.
> I've been working recently on Gradle itself, Apache JMeter, Calcite
> Avatica, Calcite, and it works.
>
> Julian, is the issue reproducible?
> Can you provide the exact steps?
>
> Are there errors in IDEA logs? (
>
> https://intellij-support.jetbrains.com/hc/en-us/articles/207241085-Locating-IDE-log-files
>  )
>
> Is it caused by Maven's leftovers in your IDEA project?
> If both Maven and Gradle fight for the project model, then it won't work
> well.
>
> There are class issues when different branches have different dependencies.
> For instance, PR#1591 adds Redis adapter, and the project needs re-import
> so IDEA recognizes new dependencies.
> It is documented at
>
> https://www.jetbrains.com/help/idea/work-with-gradle-projects.html#gradle_refresh_project
> An alternative option is to use ctrl+shift+a / cmd+shift+a (see
> https://blog.jetbrains.com/idea/2009/06/find-action-saves-time/ ), type
> "reimport",
> and execute "Reimport All Gradle Projects" from there.
>
>
> Note: there's "Automatically import this project on changes in build script
> files" option, however, I suggest to keep it **disabled**.
> The import takes time (e.g. 10-20 seconds), and it seems to re-import the
> project on each keystroke when editing the build script which is insane.
>
> Vladimir
>


Re: [ANNOUNCE] Danny Chan joins Calcite PMC

2019-11-01 Thread Muhammad Gelbana
Congratulations!

Thanks,
Gelbana


On Fri, Nov 1, 2019 at 9:07 AM Stamatis Zampetakis 
wrote:

> Congratulations Danny!
>
> You are doing an amazing job. The project and the community are becoming
> better every day and your help is much appreciated.
>
> Keep up the momentum!
>
> Best,
> Stamatis
>
> On Thu, Oct 31, 2019 at 4:41 AM Kurt Young  wrote:
>
> > Congratulations Danny!
> >
> > Best,
> > Kurt
> >
> >
> > On Thu, Oct 31, 2019 at 11:18 AM Danny Chan 
> wrote:
> >
> > > Thank you so much colleagues, it’s my honor to work with you!
> > >
> > > I have always felt the respect and harmony of the community; I hope to
> > > contribute more, and I will help as best I can. Thanks!
> > >
> > > Best,
> > > Danny Chan
> > > On Oct 31, 2019 at 5:22 AM +0800, Francis Chuang wrote:
> > > > I'm pleased to announce that Danny has accepted an invitation to
> > > > join the Calcite PMC. Danny has been a consistent and helpful
> > > > figure in the Calcite community for which we are very grateful. We
> > > > look forward to the continued contributions and support.
> > > >
> > > > Please join me in congratulating Danny!
> > > >
> > > > - Francis (on behalf of the Calcite PMC)
> > >
> >
>


Re: [DISCUSS] State of the project 2019

2019-10-23 Thread Muhammad Gelbana
I forgot to mention that I strongly believe we need more technical design
documents for Calcite's major components, such as the parser, validator and
code generation.

On Wed, Oct 23, 2019 at 12:52 PM Muhammad Gelbana 
wrote:

> The mailing list didn't have this much activity three years ago when I
> started learning about Calcite. Also, more advanced topics and questions
> are being posted which, although it makes it harder for me to participate,
> definitely says a lot about how much the project has advanced on both the
> technical and adoption fronts.
>
> +1 for Stamatis for PMC chair. He's one of the most active, knowledgeable
> and friendliest committers I've come across.
>
>
>
> On Wed, Oct 23, 2019 at 12:23 PM Danny Chan  wrote:
>
>> >I gave a talk last year in a university in
>> > France, and nobody in the audience had ever heard of Calcite before.
>>
>> Oops, that's a pity. I will also give a talk about Calcite at Flink
>> Forward Asia 2019 in Beijing, China; I hope more people will get to know
>> Apache Calcite.
>>
>> Best,
>> Danny Chan
>> On Oct 23, 2019 at 2:36 PM +0800, dev@calcite.apache.org wrote:
>> >
>> > I gave a talk last year in a university in
>> > France, and nobody in the audience had ever heard of Calcite before.
>>
>


Re: [DISCUSS] State of the project 2019

2019-10-23 Thread Muhammad Gelbana
The mailing list didn't have this much activity three years ago when I
started learning about Calcite. Also, more advanced topics and questions
are being posted which, although it makes it harder for me to participate,
definitely says a lot about how much the project has advanced on both the
technical and adoption fronts.

+1 for Stamatis for PMC chair. He's one of the most active, knowledgeable
and friendliest committers I've come across.



On Wed, Oct 23, 2019 at 12:23 PM Danny Chan  wrote:

> >I gave a talk last year in a university in
> > France, and nobody in the audience had ever heard of Calcite before.
>
> Oops, that's a pity. I will also give a talk about Calcite at Flink
> Forward Asia 2019 in Beijing, China; I hope more people will get to know
> Apache Calcite.
>
> Best,
> Danny Chan
> On Oct 23, 2019 at 2:36 PM +0800, dev@calcite.apache.org wrote:
> >
> > I gave a talk last year in a university in
> > France, and nobody in the audience had ever heard of Calcite before.
>


Re: [DISCUSS] Automated security fixes via dependabot

2019-10-12 Thread Muhammad Gelbana
Why would we not merge those PRs, or even disable the whole thing?



On Fri, Oct 11, 2019 at 12:09 AM Francis Chuang 
wrote:

> Dependabot is a bot on Github that opens PRs to automatically upgrade
> out of date dependencies to fix security issues. Recently, Github
> acquired dependabot and is gradually enabling the bot on all repositories.
>
> It just opened a PR to upgrade a few dependencies in the Avatica
> repository: https://github.com/apache/calcite-avatica/pull/114
>
> I'd like to start some discussion as to how we should deal with these
> PRs. For some background, dependency upgrades should usually have a jira
> issue number assigned, so that the change is fully trackable. We
> recently had some discussion regarding trivial fixes to documentation
> and the consensus was that changes to the code is not considered to be
> trivial and that an issue should be filed on jira.
>
> If we will not merge these PRs, I think it makes sense to ask infra to
> disable them. Having these open PRs and then closing them manually seem
> to generate a lot of noise. According to the documentation for
> dependabot [1] it appears that we can either opt out of having
> dependabot opening PRs completely or have it open PRs. There is no
> middle-ground where dependabot/Github sends members of the repo a
> notification for security issues, but do not open any PRs.
>
> What do you guys think?
>
> Francis
>
> [1]
> https://help.github.com/en/articles/configuring-automated-security-fixes
>


Re: Looking for guides on SQL Parser

2019-09-21 Thread Muhammad Gelbana
The parser is based on JavaCC [1], so you can download it and explore its
"examples" folder for tutorials to understand how it works.

If that's not what you're looking for, you might debug through the
generated core parser [2]. (There is also what we call the Babel parser,
developed to parse syntaxes other than the standard one supported
by the core parser.)

You can also enable the parser's debugging mode [3], which can be specified
in the grammar file or through the command line [4].

If that's not what you're looking for either, then you probably want to
"visit" the parsed tree to traverse all its contents. You can do
that by providing an SqlVisitor to the SQL tree's root node (i.e.
SqlNode.accept(SqlVisitor visitor)).

[1] https://javacc.org/getting-started
[2] org.apache.calcite.sql.parser.impl.SqlParserImpl
[3] https://javacc.org/javaccgrm#prod6
[4] https://javacc.org/commandline
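
Structurally, visiting the parse tree with SqlNode.accept(SqlVisitor) is just a pre-order traversal. A self-contained toy sketch — the `Node` type below is a hypothetical stand-in for SqlNode/SqlCall, not a Calcite class — that prints every node at its depth, which is essentially what Peer asks for below:

```java
import java.util.Arrays;
import java.util.List;

// Toy parse-tree walk; SqlNode.accept(SqlVisitor) does the real thing.
public class TreeWalk {
    static class Node {
        final String op;
        final List<Node> operands;

        Node(String op, Node... operands) {
            this.op = op;
            this.operands = Arrays.asList(operands);
        }
    }

    // Pre-order visit: print the node, then recurse into its operands,
    // indenting two spaces per level of depth.
    static void walk(Node node, int depth, StringBuilder out) {
        for (int i = 0; i < depth; i++) {
            out.append("  ");
        }
        out.append(node.op).append('\n');
        for (Node child : node.operands) {
            walk(child, depth + 1, out);
        }
    }

    public static void main(String[] args) {
        Node query = new Node("SELECT",
                new Node("+", new Node("1"), new Node("1")),
                new Node("myschema.mytable"));
        StringBuilder out = new StringBuilder();
        walk(query, 0, out);
        System.out.print(out);
    }
}
```

With Calcite's real classes the recursion goes through SqlCall.getOperandList(), and SqlBasicVisitor already implements the boilerplate descent.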

Thanks,
Gelbana


On Sat, Sep 21, 2019 at 10:31 PM Peer Arimond, INF-I <
arimo...@hochschule-trier.de> wrote:

> Hello Apache Calcite,
>
> I'm currently working on a project which connects different database
> systems, relational as well as NoSQL. Part of the project is to translate an
> SQL query into our own model of relational algebra. After doing some
> research I came across Apache Calcite, and especially the SQL Parser of
> Apache Calcite, and I think it could help me a lot.
> Is there only the documentation, or are there maybe more guides or
> something to get deeper into the SQL Parser?
> I want to parse a query and walk recursively over each node of the query
> tree and, for example, print every node and its function.
> If I could achieve that it would be a good beginning. Maybe you can help?
>
> Greetings,
>
> Peer Arimond
>
> --
> http://webmail.fh-trier.de
>
>


Re: Re: [ANNOUNCE] New committer: Julian Feinauer

2019-09-18 Thread Muhammad Gelbana
Welcome aboard !

Thanks,
Gelbana


On Wed, Sep 18, 2019 at 5:10 PM Andrei Sereda  wrote:

> Congratulations, Julian !
>
> On Tue, Sep 17, 2019 at 11:26 PM Amit Chavan  wrote:
>
> > Congrats, Julian !!
> >
> > On Tue, Sep 17, 2019 at 8:12 PM XING JIN 
> wrote:
> >
> > > Congrats, Julian !
> > > You are well deserved ~
> > >
> > > Haisheng Yuan wrote on Wed, Sep 18, 2019 at 10:38 AM:
> > >
> > > > Congrats, Julian!
> > > >
> > > > - Haisheng
> > > >
> > > > --
> > > > From: Chunwei Lei
> > > > Date: Sep 18, 2019 10:30:31
> > > > To: 
> > > > Subject: Re: [ANNOUNCE] New committer: Julian Feinauer
> > > >
> > > > Congratulations, Julian!
> > > >
> > > >
> > > >
> > > > Best,
> > > > Chunwei
> > > >
> > > >
> > > > On Wed, Sep 18, 2019 at 9:24 AM Danny Chan 
> > wrote:
> > > >
> > > > > Congratulations, Muhammad ! Welcome to join us ! Thanks for your
> huge
> > > > > contribution for the Match Recognize.
> > > > >
> > > > > Best,
> > > > > Danny Chan
> > > > > On Sep 18, 2019 at 5:55 AM +0800, Francis Chuang wrote:
> > > > > > Apache Calcite's Project Management Committee (PMC) has invited
> > > Julian
> > > > > > Feinauer to become a committer, and we are pleased to announce
> that
> > > he
> > > > > > has accepted.
> > > > > >
> > > > > > Julian is an active contributor to the Calcite code base and has
> > been
> > > > > > active on the mailing list answering questions, participating in
> > > > > > discussions and voting for releases.
> > > > > >
> > > > > > Julian, welcome, thank you for your contributions, and we look
> > > forward
> > > > > > your further interactions with the community! If you wish, please
> > > feel
> > > > > > free to tell us more about yourself and what you are working on.
> > > > > >
> > > > > > Francis (on behalf of the Apache Calcite PMC)
> > > > >
> > > >
> > > >
> > >
> >
>


Re: [ANNOUNCE] New committer: Muhammad Gelbana

2019-09-18 Thread Muhammad Gelbana
Thank you all for the very warm welcome.

Calcite really changed a lot of things for my current employer. I got to
know Calcite when I was exposed to Drill, and now we're planning to
integrate Calcite into our product. Being an analytics company, Calcite
provides a great deal of help to us.

Thanks,
Gelbana


On Wed, Sep 18, 2019 at 5:10 PM Andrei Sereda  wrote:

> Congrats, Muhammad!
>
> On Tue, Sep 17, 2019 at 11:26 PM Amit Chavan  wrote:
>
> > Congratulations, Muhammad !!
> >
> > On Tue, Sep 17, 2019 at 8:10 PM XING JIN 
> wrote:
> >
> > > Congrats, Muhammad !
> > >
> > > Yanlin Wang <1989yanlinw...@163.com> wrote on Wed, Sep 18, 2019 at 10:38 AM:
> > >
> > > > Congratulations, Muhammad!
> > > >
> > > >
> > > > Best,
> > > > Yanlin
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > > At 2019-09-18 05:58:53, "Francis Chuang" 
> > > wrote:
> > > > >Apache Calcite's Project Management Committee (PMC) has invited
> > Muhammad
> > > > >Gelbana to become a committer, and we are pleased to announce that
> he
> > > > >has accepted.
> > > > >
> > > > >Muhammad is an active contributor and has contributed numerous
> patches
> > > > >to Calcite. He has also been extremely active on the mailing list,
> > > > >helping out new users and participating in design discussions.
> > > > >
> > > > >Muhammad, welcome, thank you for your contributions, and we look
> > forward
> > > > >your further interactions with the community! If you wish, please
> feel
> > > > >free to tell us more about yourself and what you are working on.
> > > > >
> > > > >Francis (on behalf of the Apache Calcite PMC)
> > > >
> > >
> >
>


Re: [DISCUSS] ANTLR4 parse template for Calcite ?

2019-08-22 Thread Muhammad Gelbana
I once needed to fix this issue [1], but the fix was rejected because it
introduced worse performance than it ideally should have. As mentioned in
the comments, the approach followed by the current parser is the reason
for that. I mean, if we had designed the grammar differently, we could have
fixed the linked issue a long time ago, as Julian already attempted to
do.

Having said that, we might go with *antlr* only to get that "better"
approach for our parsers. We don't have to dump our current parser, of
course, as *antlr* could be optionally activated.
[1] https://issues.apache.org/jira/browse/CALCITE-35

Thanks,
Gelbana


On Thu, Aug 22, 2019 at 10:05 AM Danny Chan  wrote:

> Thanks, Julian.
>
> I agree this would be a huge work, but I have to do this, I’m just
> wondering if any fellows here have the similar requests.
>
> Best,
> Danny Chan
> On Aug 22, 2019 at 2:15 PM +0800, Julian Hyde wrote:
> > ANTLR isn’t significantly better than, or worse than, JavaCC, but it’s
> different. So translating to ANTLR would be a rewrite, and would be a HUGE
> amount of work.
> >
> >
> >
> > > On Aug 21, 2019, at 8:01 PM, Danny Chan  wrote:
> > >
> > > Now some of our fellows want to do syntax prompting in the web page,
> and they want a parser on the front end; ANTLR4 can generate a JS parser
> directly but JavaCC can't.
> > >
> > > So I'm wondering, do you have similar requests? And do you think
> there is a necessity to support an ANTLR4 g4 file in Calcite?
> > >
> > >
> > > Best,
> > > Danny Chan
> >
>


Is group type "rollup" deduced correctly ?

2019-08-13 Thread Muhammad Gelbana
Based on those pages [1], [2], I understand that for a list of group sets to
be a rollup, each group set must be the union set of groups with one more
group progressively removed from the end. For example, the following are
valid rollup group sets (the union set of groups is *(G1, G2, G3)*):
((G1, G2, G3), (G1, G2), (G1), ())

While the following aren't:
((G1, G2, G3), *(G1, G3)*, (G1), ())
- because the second group set skipped *G2* and took *G3* instead.
or
((G1, G2, G3), *(G2, G3)*, (G1), ())
- because the second group set didn't start from *G1*, but from *G2*.

I'm saying this because I believe this method [3] isn't functioning
correctly: it considers the group sets ({3, 5}, {5}, {}) to be a rollup,
although the second group set starts with "5" and not "3".

Or am I missing something?

[1]
https://www.postgresql.org/docs/devel/queries-table-expressions.html#QUERIES-GROUPING-SETS
[2] http://www.sqlservertutorial.net/sql-server-basics/sql-server-rollup/
[3]
https://github.com/apache/calcite/blob/5ec3a2a503dcf26fe1b3cad8a5a9467264213dcf/core/src/main/java/org/apache/calcite/rel/core/Aggregate.java#L496
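
One detail that may resolve the puzzle: group sets are unordered sets, so the sequence ({3, 5}, {5}, {}) is exactly what ROLLUP(5, 3) produces — a rollup is recognizable up to a reordering of the rollup columns. A purely set-based test (each set must be the previous one minus exactly one element) therefore accepts it. Here is a standalone sketch of such a check; this is my own illustration, not Calcite's actual `Aggregate.isRollup` implementation:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Set-based rollup test: a sequence of group sets is a rollup if each set
// is the previous set with exactly one element removed. Ordering within a
// set does not matter, so ({3,5}, {5}, {}) passes -- it is ROLLUP(5, 3).
public class RollupCheck {
    static boolean isRollup(List<Set<Integer>> groupSets) {
        for (int i = 1; i < groupSets.size(); i++) {
            Set<Integer> prev = groupSets.get(i - 1);
            Set<Integer> cur = groupSets.get(i);
            if (cur.size() != prev.size() - 1 || !prev.containsAll(cur)) {
                return false;
            }
        }
        return true;
    }

    static List<Set<Integer>> sets(Set<Integer>... s) {
        return Arrays.asList(s);
    }

    public static void main(String[] args) {
        // ({3,5}, {5}, {}) passes: it is ROLLUP(5, 3).
        System.out.println(isRollup(sets(
                new HashSet<>(Arrays.asList(3, 5)),
                new HashSet<>(Arrays.asList(5)),
                new HashSet<>())));   // true
        // ({3,5}, {4}, {}) fails: {4} is not contained in {3, 5}.
        System.out.println(isRollup(sets(
                new HashSet<>(Arrays.asList(3, 5)),
                new HashSet<>(Arrays.asList(4)),
                new HashSet<>())));   // false
    }
}
```

By the same reasoning, ((G1, G2, G3), (G1, G3), (G1), ()) would also be accepted — it is ROLLUP(G1, G3, G2) — which may be exactly the behavior the method in [3] intends.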

Thanks,
Gelbana


Re: Frequent Travis CI failures

2019-08-05 Thread Muhammad Gelbana
It is, thank you.

Thanks,
Gelbana


On Mon, Aug 5, 2019 at 2:52 PM Ruben Q L  wrote:

> Hi all,
>
> maybe the final error message is a bit misleading, but it seems that the
> situation is caused by a checkstyle violation:
>
> [INFO] There is 1 error reported by Checkstyle 7.8.2 with
> /src/babel/../src/main/config/checkstyle/checker.xml ruleset.
> [ERROR] src/test/java/org/apache/calcite/test/*BabelParserTest.java:[34,8]
> (imports) UnusedImports: Unused import - java.nio.charset.Charset.*
> ...
>
> [ERROR] Failed to execute goal
> org.apache.maven.plugins:*maven-checkstyle-plugin:3.0.0:check
> (validate) on project calcite-babel: You have 1 Checkstyle violation.*
> -> [Help 1][ERROR]   mvn  -rf :calcite-babelThe command
> "$DOCKERRUN $IMAGE mvn install -DskipTests=true
> -Dmaven.javadoc.skip=true -Djavax.net.ssl.trustStorePassword=changeit
> -B -V" failed and exited with 1 during .
>
>
>
> On Mon, Aug 5, 2019 at 14:44, Muhammad Gelbana wrote:
>
> > Here is the Travis page[1] for my PR. Thanks!
> >
> > [1] https://travis-ci.org/apache/calcite/builds/567846038
> >
> > Thanks,
> > Gelbana
> >
> >
> > On Mon, Aug 5, 2019 at 2:32 PM Stamatis Zampetakis 
> > wrote:
> >
> > > Hey Gelbana,
> > >
> > > Is there any other information that might help diagnose the
> problem?
> > > Can you provide a link to the complete Travis log?
> > >
> > > Best,
> > > Stamatis
> > >
> > > On Mon, Aug 5, 2019 at 2:08 PM Muhammad Gelbana 
> > > wrote:
> > >
> > > > For the last few days I've been facing frequent PR check failures
> > caused
> > > by
> > > > Travis CI. The error has nothing to do with my tests, it would be
> > > something
> > > > like
> > > >
> > > > The command "$DOCKERRUN $IMAGE mvn install -DskipTests=true
> > > > -Dmaven.javadoc.skip=true -Djavax.net.ssl.trustStorePassword=changeit
> > > > -B -V" failed and exited with 1 during .
> > > >
> > > >
> > > > Is anyone handling this ? Who should I contact and what can I do
> about
> > > this
> > > > ?
> > > >
> > > > Thanks,
> > > > Gelbana
> > > >
> > >
> >
>


Re: Frequent Travis CI failures

2019-08-05 Thread Muhammad Gelbana
Here is the Travis page[1] for my PR. Thanks!

[1] https://travis-ci.org/apache/calcite/builds/567846038

Thanks,
Gelbana


On Mon, Aug 5, 2019 at 2:32 PM Stamatis Zampetakis 
wrote:

> Hey Gelbana,
>
> Is there any other information that might help diagnose the problem?
> Can you provide a link to the complete Travis log?
>
> Best,
> Stamatis
>
> On Mon, Aug 5, 2019 at 2:08 PM Muhammad Gelbana 
> wrote:
>
> > For the last few days I've been facing frequent PR check failures caused
> by
> > Travis CI. The error has nothing to do with my tests, it would be
> something
> > like
> >
> > The command "$DOCKERRUN $IMAGE mvn install -DskipTests=true
> > -Dmaven.javadoc.skip=true -Djavax.net.ssl.trustStorePassword=changeit
> > -B -V" failed and exited with 1 during .
> >
> >
> > Is anyone handling this ? Who should I contact and what can I do about
> this
> > ?
> >
> > Thanks,
> > Gelbana
> >
>


Frequent Travis CI failures

2019-08-05 Thread Muhammad Gelbana
For the last few days I've been facing frequent PR check failures caused by
Travis CI. The error has nothing to do with my tests, it would be something
like

The command "$DOCKERRUN $IMAGE mvn install -DskipTests=true
-Dmaven.javadoc.skip=true -Djavax.net.ssl.trustStorePassword=changeit
-B -V" failed and exited with 1 during .


Is anyone handling this? Who should I contact, and what can I do about it?

Thanks,
Gelbana


Re: Calcite-Master - Build # 1280 - Still Failing

2019-08-02 Thread Muhammad Gelbana
I'm working on a PR, and its checks keep saying that a test case is failing.

That exact test case succeeds on my machine, and looking into the PR check
logs, it looks like the expected and actual values are actually identical!
Could this message (Calcite-Master - Build #1280 - Still Failing) have
anything to do with this?

Thanks,
Gelbana


On Fri, Aug 2, 2019 at 5:44 PM Apache Jenkins Server <
jenk...@builds.apache.org> wrote:

> The Apache Jenkins build system has built Calcite-Master (build #1280)
>
> Status: Still Failing
>
> Check console output at https://builds.apache.org/job/Calcite-Master/1280/
> to view the results.


Re: How to execute an SQL query without using Calcite's JDBC API ?

2019-07-29 Thread Muhammad Gelbana
Thanks a lot Stamatis, very helpful and responsive as always :)

Here is what worked perfectly for me so far after building over what you
provided.

HashMap<String, Object> parameters = new HashMap<>();
dataContext.setMap(parameters); // A custom context object holding my root
schema and type factory. The setMap method is custom too, to provide my own
parameters map
Iterator<Object[]> resultsIterator =
EnumerableInterpretable.toBindable(parameters, null, (EnumerableRel)
planned, EnumerableRel.Prefer.ARRAY).bind(dataContext).iterator();
// "planned" is the optimized/physical plan.

Now the "resultsIterator" iterator has all my result set. The tricky part
was that for single column resultsets, the output from the iterator is a
single object, not an Object[].
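
That single-column gotcha can be absorbed with a tiny normalizer; `asRow` below is my own helper, not a Calcite API:

```java
// Defensive normalization of bindable/interpreter output: multi-column
// results arrive as Object[], single-column results as the bare value.
public class RowNormalizer {
    static Object[] asRow(Object rowOrValue) {
        return rowOrValue instanceof Object[]
                ? (Object[]) rowOrValue
                : new Object[] {rowOrValue};
    }

    public static void main(String[] args) {
        System.out.println(asRow(42).length);                    // 1
        System.out.println(asRow(new Object[] {1, "a"}).length); // 2
    }
}
```

Applied to the snippet above, each element pulled from resultsIterator would be passed through asRow before indexing into its columns.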

Thanks,
Gelbana


On Wed, Jul 17, 2019 at 11:31 PM Stamatis Zampetakis 
wrote:

> Hi Gelbana,
>
> In [1, 2] you can find a rather full end-to-end example with the main Calcite
> primitives in use. Hope it helps!
>
> Best,
> Stamatis
>
> [1]
>
> https://github.com/zabetak/calcite/blob/livecodingdemo/core/src/test/java/org/apache/calcite/examples/foodmart/java/EndToEndExampleBindable.java
> [2]
>
> https://github.com/michaelmior/calcite-notebooks/blob/master/query-optimization.ipynb
>
> On Wed, Jul 17, 2019 at 7:18 PM Muhammad Gelbana 
> wrote:
>
> > I think I saw a message asking the same thing but I'm unable to dig it up
> > as I can't quite remember the subject. Is there a test class that
> executes
> > SQL queries and access the results without using the JDBC API ?
> >
> > Here is my attempt:
> > ---
> > Planner planner = Frameworks.getPlanner(frameworkConfig); //
> > frameworkConfig programs is set using "Programs.standard()"
> >
> > SqlNode parsed = planner.parse("SELECT 1 + 1 FROM myschema.mytable limit
> 1"
> > ); // Assume the existence of "myschema.mytable", don't try (VALUES())
> > SqlNode validated = planner.validate(parsed);
> > RelNode converted = planner.rel(validated).rel;
> > RelNode planned = planner.transform(0,
> > converted.getTraitSet().replace(EnumerableConvention.INSTANCE),
> converted);
> >
> > TestDataContext dataContext = new TestDataContext(rootSchema,
> > (JavaTypeFactory) planner.getTypeFactory()); // A context that provides
> the
> > root schema and the java type factory only
> > try (Interpreter interpreter = new Interpreter(dataContext, planned)) {
> > Enumerator<Object[]> results =
> interpreter.asEnumerable().enumerator();
> > while(results.moveNext()) {
> > System.out.println(Arrays.toString(results.current()));
> > }
> > }
> > ---
> > This fails and throws the following error
> > Exception in thread "main" java.lang.AssertionError: interpreter: no
> > implementation for class
> > org.apache.calcite.adapter.enumerable.EnumerableInterpreter
> > at
> >
> >
> org.apache.calcite.interpreter.Interpreter$CompilerImpl.visit(Interpreter.java:460)
> > at org.apache.calcite.interpreter.Nodes$CoreCompiler.visit(Nodes.java:42)
> > at org.apache.calcite.rel.SingleRel.childrenAccept(SingleRel.java:72)
> > at
> >
> >
> org.apache.calcite.interpreter.Interpreter$CompilerImpl.visit(Interpreter.java:447)
> > at org.apache.calcite.interpreter.Nodes$CoreCompiler.visit(Nodes.java:42)
> > at
> >
> >
> org.apache.calcite.interpreter.Interpreter$CompilerImpl.visitRoot(Interpreter.java:405)
> > at org.apache.calcite.interpreter.Interpreter.<init>(Interpreter.java:88)
> >
> > The output of RelOptUtil.toString(planned) is
> > EnumerableCalc(expr#0..22=[{inputs}], expr#23=[1], expr#24=[2],
> > expr#25=[+($t23, $t24)], EXPR$0=[$t25])
> >   EnumerableLimit(fetch=[1])
> > EnumerableInterpreter
> >   BindableTableScan(table=[[myschema, mytable]])
> >
> > The reason for this is the converter node "EnumerableInterpreter"
> > converting from the bindable table scan node to an enumerable convention
> > node.
> >
> > Thanks,
> > Gelbana
> >
>


Re: Eclipse error: java.sql.SQLException: No suitable driver found for jdbc:calcite

2019-07-29 Thread Muhammad Gelbana
What do you mean by "refresh" ?

Thanks,
Gelbana


On Mon, Jul 29, 2019 at 4:12 AM Danny Chan  wrote:

> Do you refresh the calcite-avatica jar in your local repository ?
>
> Best,
> Danny Chan
> On Jul 28, 2019 at 5:39 AM +0800, Muhammad Gelbana wrote:
> > When I try to debug through a test method on Eclipse, I regularly get
> this
> > error and I have to waste another 3-4 minutes to run the build again
> using
> > maven.
> >
> > After I run the maven build (mvn clean install), I get the chance to
> debug
> > through my test method a few times, then this error starts appearing
> again
> > and blocks the execution of my test method.
> >
> > Does anyone know how can I resolve this please ?
> >
> > Thanks,
> > Gelbana
>


Eclipse error: java.sql.SQLException: No suitable driver found for jdbc:calcite

2019-07-27 Thread Muhammad Gelbana
When I try to debug through a test method on Eclipse, I regularly get this
error and I have to waste another 3-4 minutes to run the build again using
maven.

After I run the maven build (mvn clean install), I get the chance to debug
through my test method a few times, then this error starts appearing again
and blocks the execution of my test method.

Does anyone know how can I resolve this please ?

Thanks,
Gelbana


How to execute an SQL query without using Calcite's JDBC API ?

2019-07-17 Thread Muhammad Gelbana
I think I saw a message asking the same thing but I'm unable to dig it up
as I can't quite remember the subject. Is there a test class that executes
SQL queries and access the results without using the JDBC API ?

Here is my attempt:
---
Planner planner = Frameworks.getPlanner(frameworkConfig); //
frameworkConfig programs is set using "Programs.standard()"

SqlNode parsed = planner.parse("SELECT 1 + 1 FROM myschema.mytable limit 1"
); // Assume the existence of "myschema.mytable", don't try (VALUES())
SqlNode validated = planner.validate(parsed);
RelNode converted = planner.rel(validated).rel;
RelNode planned = planner.transform(0,
converted.getTraitSet().replace(EnumerableConvention.INSTANCE), converted);

TestDataContext dataContext = new TestDataContext(rootSchema,
(JavaTypeFactory) planner.getTypeFactory()); // A context that provides the
root schema and the java type factory only
try (Interpreter interpreter = new Interpreter(dataContext, planned)) {
Enumerator<Object[]> results = interpreter.asEnumerable().enumerator();
while(results.moveNext()) {
System.out.println(Arrays.toString(results.current()));
}
}
---
This fails and throws the following error
Exception in thread "main" java.lang.AssertionError: interpreter: no
implementation for class
org.apache.calcite.adapter.enumerable.EnumerableInterpreter
at
org.apache.calcite.interpreter.Interpreter$CompilerImpl.visit(Interpreter.java:460)
at org.apache.calcite.interpreter.Nodes$CoreCompiler.visit(Nodes.java:42)
at org.apache.calcite.rel.SingleRel.childrenAccept(SingleRel.java:72)
at
org.apache.calcite.interpreter.Interpreter$CompilerImpl.visit(Interpreter.java:447)
at org.apache.calcite.interpreter.Nodes$CoreCompiler.visit(Nodes.java:42)
at
org.apache.calcite.interpreter.Interpreter$CompilerImpl.visitRoot(Interpreter.java:405)
at org.apache.calcite.interpreter.Interpreter.<init>(Interpreter.java:88)

The output of RelOptUtil.toString(planned) is
EnumerableCalc(expr#0..22=[{inputs}], expr#23=[1], expr#24=[2],
expr#25=[+($t23, $t24)], EXPR$0=[$t25])
  EnumerableLimit(fetch=[1])
EnumerableInterpreter
  BindableTableScan(table=[[myschema, mytable]])

The reason for this is the converter node "EnumerableInterpreter"
converting from the bindable table scan node to an enumerable convention
node.

Thanks,
Gelbana


Has anyone thought of writing a book about Apache Calcite ?

2019-07-13 Thread Muhammad Gelbana
I don't think Calcite is fundamentally evolving that much, so a book would
stay relevant long enough to be worth the effort.

So has anyone, or any group, started working on a book about Apache Calcite ? Does it
sound like a good idea to you ?

Thanks,
Gelbana


Re: How to create a rule in calcite

2019-07-12 Thread Muhammad Gelbana
"I remember very well that Apache Calcite does exactly what you say"
I meant Apache Drill

Thanks,
Gelbana


On Tue, Jul 9, 2019 at 10:47 PM Muhammad Gelbana 
wrote:

> Did you look into the Geode and Druid example projects ? They have some
> rules that might help.
> I remember very well that Apache Calcite does exactly what you say, its
> join rule decides which join algorithm will be executed. Check that out.
> Here [1] is Michael's answer on SoF that might shed some light on what you
> need.
>
> Straight into your question, you need to match exactly what you need,
> else, you might end up modifying operators in the plan tree that you didn't
> want to touch.
> To do that, you need to identify the inputs to your join operator, their
> convention and the join operator's convention. All this can be specified in
> the rule's constructor super method call.
> If you need more control, you can run extra checks in the *matches*
> method. Or you can simply abort your rule if it's mistakenly matched by
> returning early from the *onMatch* method.
>
> Forgive me if my answer is too general, but I would say that your question
> isn't very specific itself. Perhaps you can start by sharing your trials so
> everyone can have a better idea about what's wrong with your rule.
>
> [1]
> https://stackoverflow.com/questions/56234480/whats-the-difference-between-calcites-converterrule-and-reloptrule
>
> Thanks,
> Gelbana
>
>
> On Tue, Jul 9, 2019 at 5:22 PM Felipe Gutierrez <
> felipe.o.gutier...@gmail.com> wrote:
>
>> Hi,
>>
>> Is there any tutorial teaching how to create my own rule (using Java) in
>> apache Calcite?
>> I want to create a rule for join operators which I can decide which
>> implementation of join I use.
>>
>> Or, maybe, is there any example that I can see how does it work in
>> Calcite?
>>
>> thanks
>> Felipe
>> *--*
>> *-- Felipe Gutierrez*
>>
>> *-- skype: felipe.o.gutierrez*
>> *--* *https://felipeogutierrez.blogspot.com
>> <https://felipeogutierrez.blogspot.com>*
>>
>


Re: How to create a rule in calcite

2019-07-09 Thread Muhammad Gelbana
Did you look into the Geode and Druid example projects ? They have some
rules that might help.
I remember very well that Apache Calcite does exactly what you say, its
join rule decides which join algorithm will be executed. Check that out.
Here [1] is Michael's answer on SoF that might shed some light on what you
need.

Straight into your question, you need to match exactly what you need, else,
you might end up modifying operators in the plan tree that you didn't want
to touch.
To do that, you need to identify the inputs to your join operator, their
convention and the join operator's convention. All this can be specified in
the rule's constructor super method call.
If you need more control, you can run extra checks in the *matches* method.
Or you can simply abort your rule if it's mistakenly matched by returning
early from the *onMatch* method.
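As a rough illustration of that matches/onMatch flow, here is a Calcite-free sketch. `Rule`, `Call`, and `MyJoinRule` are simplified stand-ins for `RelOptRule`, `RelOptRuleCall`, and a custom join rule — none of this is the real Calcite API.

```java
// Simplified stand-in for RelOptRule: matches() runs extra checks beyond the
// structural operand pattern; onMatch() applies the transformation and may
// still bail out early by simply returning.
abstract class Rule {
    boolean matches(Call call) { return true; }
    abstract void onMatch(Call call);
}

// Simplified stand-in for RelOptRuleCall.
class Call {
    final String joinConvention;
    String result;
    Call(String joinConvention) { this.joinConvention = joinConvention; }
}

public class MyJoinRule extends Rule {
    @Override boolean matches(Call call) {
        // extra check: only fire for joins in the convention we handle
        return "ENUMERABLE".equals(call.joinConvention);
    }

    @Override void onMatch(Call call) {
        if (call.joinConvention == null) {
            return; // mistakenly matched: abort by simply returning
        }
        call.result = "merge-join"; // the transformation: pick a join implementation
    }

    public static void main(String[] args) {
        Call call = new Call("ENUMERABLE");
        MyJoinRule rule = new MyJoinRule();
        if (rule.matches(call)) {
            rule.onMatch(call);
        }
        System.out.println(call.result); // merge-join
    }
}
```

The point of splitting the check across both hooks is that the structural pattern in the constructor stays coarse, while `matches` and the early return in `onMatch` guard against rewriting operators you did not intend to touch.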

Forgive me if my answer is too general, but I would say that your question
isn't very specific itself. Perhaps you can start by sharing your trials so
everyone can have a better idea about what's wrong with your rule.

[1]
https://stackoverflow.com/questions/56234480/whats-the-difference-between-calcites-converterrule-and-reloptrule

Thanks,
Gelbana


On Tue, Jul 9, 2019 at 5:22 PM Felipe Gutierrez <
felipe.o.gutier...@gmail.com> wrote:

> Hi,
>
> Is there any tutorial teaching how to create my own rule (using Java) in
> apache Calcite?
> I want to create a rule for join operators which I can decide which
> implementation of join I use.
>
> Or, maybe, is there any example that I can see how does it work in Calcite?
>
> thanks
> Felipe
> *--*
> *-- Felipe Gutierrez*
>
> *-- skype: felipe.o.gutierrez*
> *--* *https://felipeogutierrez.blogspot.com
> *
>


[jira] [Created] (CALCITE-3164) Averaging all-null values after grouping produces NaN instead of NULL

2019-07-01 Thread Muhammad Gelbana (JIRA)
Muhammad Gelbana created CALCITE-3164:
-

 Summary: Averaging all-null values after grouping produces NaN 
instead of NULL
 Key: CALCITE-3164
 URL: https://issues.apache.org/jira/browse/CALCITE-3164
 Project: Calcite
  Issue Type: Bug
  Components: core
Affects Versions: 1.20.0
Reporter: Muhammad Gelbana


{code:sql}
-- Values are a single tuple
SELECT C1, avg(C2)
FROM (VALUES('X', NULL::INT)) T (C1, C2)
GROUP  BY C1

-- Values are more than a single tuple
SELECT C1, avg(C2)
FROM (VALUES('X', NULL::INT), ('X', NULL::INT)) T (C1, C2)
GROUP  BY C1
{code}

Those queries return {{NaN}} while they're expected to return {{NULL}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: A NPE when rounding a nullable numeric

2019-06-22 Thread Muhammad Gelbana
Done[1]. Thanks for the guidance.

[1] https://issues.apache.org/jira/browse/CALCITE-3142
Thanks,
Gelbana


On Sat, Jun 22, 2019 at 3:58 PM Stamatis Zampetakis 
wrote:

> Hey Gelbana,
>
> I didn't have a chance to look into this but it looks like a bug so please
> log a JIRA case with your analysis so far. JIRA is the first place where
> people look for problems so it is better to continue the discussion there.
>
> Best,
> Stamatis
>
> On Mon, Jun 17, 2019 at 1:03 PM Muhammad Gelbana 
> wrote:
>
> > *This is the optimized generated code*
> > final Object[] current = (Object[]) inputEnumerator.current();
> > final Integer inp0_ = (Integer) current[0];
> > final Integer inp1_ = (Integer) current[1];
> > final java.math.BigDecimal v1 = new java.math.BigDecimal(
> >   inp0_.intValue() / inp1_.intValue()); *// NPE*
> > return inp0_ == null || inp1_ == null ? (java.math.BigDecimal) null :
> > org.apache.calcite.runtime.SqlFunctions.sround(v1, 2);
> >
> > *This is the non-optimized one*
> > final Object[] current = (Object[]) inputEnumerator.current();
> > final Integer inp0_ = (Integer) current[0];
> > final boolean inp0__unboxed = inp0_ == null;
> > final Integer inp1_ = (Integer) current[1];
> > final boolean inp1__unboxed = inp1_ == null;
> > final boolean v = inp0__unboxed || inp1__unboxed;
> > final int inp0__unboxed0 = inp0_.intValue(); *// NPE*
> > final int inp1__unboxed0 = inp1_.intValue(); *// NPE*
> > final int v0 = inp0__unboxed0 / inp1__unboxed0;
> > final java.math.BigDecimal v1 = new java.math.BigDecimal(
> >   v0);
> > final java.math.BigDecimal v2 = v ? (java.math.BigDecimal) null :
> > org.apache.calcite.runtime.SqlFunctions.sround(v1, 2);
> > return v2;
> >
> > I'm still trying to understand how to fix this. I assume I need to avoid
> > creating an Expression for "final int inp0__unboxed0 = inp0_.intValue()"
> > and "final int inp1__unboxed0 = inp1_.intValue()". Any hints ?
> >
> > Thanks,
> > Gelbana
> >
> >
> > On Sun, Jun 16, 2019 at 9:28 PM Muhammad Gelbana 
> > wrote:
> >
> > > Of course, my bad!
> > >
> > > -- Regular cast syntax
> > > SELECT ROUND(CAST((X/Y) AS NUMERIC), 2) FROM (VALUES (1, 2), (NULLIF(5,
> > > 5), NULLIF(5, 5))) A(X, Y)
> > >
> > > Thanks,
> > > Gelbana
> > >
> > >
> > > On Sun, Jun 16, 2019 at 8:43 PM Julian Hyde 
> > > wrote:
> > >
> > >> Can you reproduce it with regular cast syntax? Make it as easy as
> > >> possible for others to help you.
> > >>
> > >> Julian
> > >>
> > >> > On Jun 16, 2019, at 11:24 AM, Muhammad Gelbana  >
> > >> wrote:
> > >> >
> > >> > The following query throws a NPE in the generated code because it
> > >> assumes
> > >> > the divided value to be an initialized Java object (Not null), which
> > is
> > >> > fine for the first row, but not for the second.
> > >> >
> > >> > SELECT ROUND((X/Y)::NUMERIC, 2)
> > >> > FROM (VALUES (1, 2), (NULLIF(5, 5), NULLIF(5, 5))) A(X, Y)
> > >> >
> > >> > If I modify the query a little bit, it runs ok:
> > >> > -- No casting
> > >> > SELECT ROUND((X/Y), 2) FROM (VALUES (1, 2), (NULLIF(5, 5), NULLIF(5,
> > >> 5)))
> > >> > A(X, Y)
> > >> >
> > >> > -- No rounding
> > >> > SELECT (X/Y)::NUMERIC FROM (VALUES (1, 2), (NULLIF(5, 5), NULLIF(5,
> > 5)))
> > >> > A(X, Y)
> > >> >
> > >> > What could be causing this ? Any hints ?
> > >> > And was this reported before or should I create a new ticket ?
> > >> >
> > >> > Thanks,
> > >> > Gelbana
> > >>
> > >
> >
>


[jira] [Created] (CALCITE-3142) A NPE when rounding a nullable numeric

2019-06-22 Thread Muhammad Gelbana (JIRA)
Muhammad Gelbana created CALCITE-3142:
-

 Summary: A NPE when rounding a nullable numeric
 Key: CALCITE-3142
 URL: https://issues.apache.org/jira/browse/CALCITE-3142
 Project: Calcite
  Issue Type: Bug
  Components: core
Affects Versions: 1.20.0
Reporter: Muhammad Gelbana


The following query throws a NPE in the generated code because it assumes the 
divided value to be an initialized Java object (Not null), which is fine for 
the first row, but not for the second.
{code:sql}
SELECT ROUND(CAST((X/Y) AS NUMERIC), 2) FROM (VALUES (1, 2), (NULLIF(5, 5), 
NULLIF(5, 5))) A(X, Y){code}
If I modify the query a little bit, it runs ok:
 – No casting
{code:sql}
SELECT ROUND((X/Y), 2) FROM (VALUES (1, 2), (NULLIF(5, 5), NULLIF(5, 5))) A(X, 
Y){code}
– No rounding
{code:sql}
SELECT (X/Y)::NUMERIC FROM (VALUES (1, 2), (NULLIF(5, 5), NULLIF(5, 5))) A(X, 
Y){code}
+This is the optimized generated code+
{code:java}
final Object[] current = (Object[]) inputEnumerator.current();
final Integer inp0_ = (Integer) current[0];
final Integer inp1_ = (Integer) current[1];
final java.math.BigDecimal v1 = new java.math.BigDecimal(
  inp0_.intValue() / inp1_.intValue()); // <<< NPE
return inp0_ == null || inp1_ == null ? (java.math.BigDecimal) null : 
org.apache.calcite.runtime.SqlFunctions.sround(v1, 2);{code}
+This is the non-optimized one+
{code:java}
final Object[] current = (Object[]) inputEnumerator.current();
final Integer inp0_ = (Integer) current[0];
final boolean inp0__unboxed = inp0_ == null;
final Integer inp1_ = (Integer) current[1];
final boolean inp1__unboxed = inp1_ == null;
final boolean v = inp0__unboxed || inp1__unboxed;
final int inp0__unboxed0 = inp0_.intValue(); // <<< NPE
final int inp1__unboxed0 = inp1_.intValue(); // <<< NPE
final int v0 = inp0__unboxed0 / inp1__unboxed0;
final java.math.BigDecimal v1 = new java.math.BigDecimal(
  v0);
final java.math.BigDecimal v2 = v ? (java.math.BigDecimal) null : 
org.apache.calcite.runtime.SqlFunctions.sround(v1, 2);
return v2;{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Avatica java.sql.Date offset calculation bug

2019-06-21 Thread Muhammad Gelbana
The javadoc says that the calculated timezone offset needs to be *added*
[1] while avatica *subtracts* it [2].

I even saw this happening multiple times in the same class, which makes me
wonder: is this actually a bug or not ?
I'm facing a case suggesting that this is a bug, and the javadoc supports
the same conclusion.

[1]
https://docs.oracle.com/javase/8/docs/api/java/util/TimeZone.html#getOffset-long-
[2]
https://github.com/apache/calcite-avatica/blob/96507bfe737f2188c16dec9d16d5e8b502df231f/core/src/main/java/org/apache/calcite/avatica/util/AbstractCursor.java#L1047
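To make the javadoc contract concrete, here is a minimal plain-Java check of the direction of the adjustment (nothing Avatica-specific in it):

```java
import java.util.TimeZone;

public class OffsetDemo {
    // Per TimeZone#getOffset's javadoc, the returned offset is the amount to
    // *add* to UTC time to obtain local wall-clock time.
    static long utcToLocal(long utcMillis, TimeZone tz) {
        return utcMillis + tz.getOffset(utcMillis);
    }

    public static void main(String[] args) {
        // Fixed-offset zone, so no DST surprises:
        TimeZone tz = TimeZone.getTimeZone("GMT+02:00");
        // 1970-01-01T00:00:00Z becomes 02:00 local, i.e. +7,200,000 ms
        System.out.println(utcToLocal(0L, tz)); // 7200000
    }
}
```

Subtracting the offset instead would move the instant the wrong way, shifting dates read near midnight into the previous or next day.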

Thanks,
Gelbana


Re: Pluggable JDBC types

2019-06-21 Thread Muhammad Gelbana
I believe you're correct. Thanks a lot for your help.
Thanks,
Gelbana


On Fri, Jun 21, 2019 at 3:51 PM Stamatis Zampetakis 
wrote:

> For the use-case that you described, I think what needs to be changed is in
> CalcitePrepareImpl#getTypeName [1].
> Possibly instead of using RelDataType#getSqlTypeName we should use
> RelDataType#getSqlIdentifier [2].
>
> [1]
>
> https://github.com/apache/calcite/blob/4e89fddab415a1e04b82c7d69960e399f608949f/core/src/main/java/org/apache/calcite/prepare/CalcitePrepareImpl.java#L829
> [2]
>
> https://github.com/apache/calcite/blob/4e89fddab415a1e04b82c7d69960e399f608949f/core/src/main/java/org/apache/calcite/rel/type/RelDataType.java#L200
>
> Best,
> Stamatis
>
> On Thu, Jun 6, 2019 at 11:05 PM Muhammad Gelbana 
> wrote:
>
> > You're absolutely right. User-defined types should be the way to go. I
> > believe it needs enhancement though, only to customize the returned
> column
> > type name as I mentioned here[1]
> >
> > [1]
> >
> >
> https://issues.apache.org/jira/browse/CALCITE-3108?focusedCommentId=16857993=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16857993
> >
> > Thanks,
> > Gelbana
> >
> >
> > On Thu, Jun 6, 2019 at 3:00 PM Stamatis Zampetakis 
> > wrote:
> >
> > > I see but I am not sure SqlTypeName is the way to go.
> > >
> > > Postgres has many built-in types [1] which do not appear in this
> > > enumeration.
> > > Other DBMS have also their own built-in types.
> > > Adding every possible type in SqlTypeName does not seem right.
> > >
> > > Unfortunately, I don't know what's the best way to proceed.
> > >
> > > [1] https://www.postgresql.org/docs/11/datatype.html
> > >
> > >
> > >
> > > On Tue, Jun 4, 2019 at 7:39 PM Muhammad Gelbana 
> > > wrote:
> > >
> > > > The only difference I need to achieve while handling both types, is
> the
> > > > returned column type name
> > (ResultSet.getMetaData().getColumnTypeName(int
> > > > index)).
> > > > The returned value is VARCHAR even if the column type is a user
> defined
> > > > type with the alias TEXT.
> > > >
> > > > While getting the column type name using a real PostgreSQL connection
> > > for a
> > > > TEXT column, is TEXT, not VARCHAR.
> > > >
> > > > Thanks,
> > > > Gelbana
> > > >
> > > >
> > > > On Tue, Jun 4, 2019 at 6:23 PM Stamatis Zampetakis <
> zabe...@gmail.com>
> > > > wrote:
> > > >
> > > > > I am not sure what problem exactly we are trying to solve here
> (sorry
> > > for
> > > > > that).
> > > > > From what I understood so far the requirement is to introduce a new
> > > > > built-in SQL type (i.e., TEXT).
> > > > > However, I am still trying to understand why do we need this.
> > > > > Are we going to treat TEXT and VARCHAR differently?
> > > > >
> > > > > On Tue, Jun 4, 2019 at 5:18 PM Muhammad Gelbana <
> m.gelb...@gmail.com
> > >
> > > > > wrote:
> > > > >
> > > > > > Thanks Lai, I beleive your analysis is correct.
> > > > > >
> > > > > > Which brings up another question:
> > > > > > Is it ok if we add support for what I'm trying to do here ? I can
> > > > gladly
> > > > > > work on that but I need to know if it will be accepted.
> > > > > >
> > > > > > Thanks,
> > > > > > Gelbana
> > > > > >
> > > > > >
> > > > > > On Tue, Jun 4, 2019 at 8:38 AM Lai Zhou 
> > wrote:
> > > > > >
> > > > > > > @Muhammad Gelbana,I think you just register an alias-name
> 'TEXT'
> > > for
> > > > > the
> > > > > > > SqlType  'VARCHAR'.
> > > > > > > The parser did the right thing here, see
> > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> https://github.com/apache/calcite/blob/9721283bd0ce46a337f51a3691585cca8003e399/core/src/main/java/org/apache/calcite/sql/validate/SqlValidatorImpl.java#L1566
> > > > > > > When the parser encountered a 'text' SqlIdentifier, it would
> get
> > > the
> > > > > type
>

Re: How do I enable logging/tracing in Apache Calcite using Sqlline?

2019-06-17 Thread Muhammad Gelbana
I assume you can specify the log level in the JVM options for SqlLine. The
JVM option is on the website/docs. Probably the *how to* page.

Thanks,
Gelbana


On Mon, Jun 17, 2019 at 5:09 PM Pirk, Holger  wrote:

> Hi folks,
>
> I posted a question on SO (see <
> https://stackoverflow.com/questions/56629738/how-do-i-enable-logging-tracing-in-apache-calcite-using-sqlline>)
> but figured that I might get a better and/or quicker answer here (I am
> happy to copy responses from one to the other). Here is the question as
> posted:
>
> Following , I ran Apache
> Calcite using SqlLine. I tried activating tracing as instructed in <
> https://calcite.apache.org/docs/howto.html#tracing>. However, I don't get
> any logging. Here is the content of my session (hopefully containing all
> relevant information):
>
> 
> root@3b8279cda4cd:~/calcite/example/csv# egrep "^[^#]"
> ../../core/src/test/resources/log4j.properties
> log4j.rootLogger=TRACE, A1
> log4j.logger.org.apache.calcite.runtime.CalciteException=FATAL
> log4j.logger.org.apache.calcite.sql.validate.SqlValidatorException=FATAL
> log4j.logger.org.apache.calcite.plan.RexImplicationChecker=ERROR
> log4j.appender.A1=org.apache.log4j.ConsoleAppender
> log4j.appender.A1.layout=org.apache.log4j.PatternLayout
> log4j.appender.A1.layout.ConversionPattern=%d [%t] %-5p - %m%n
> log4j.logger.org.apache.calcite.plan.RelOptPlanner=DEBUG
> log4j.logger.org.apache.calcite.plan.hep.HepPlanner=TRACE
>
> root@3b8279cda4cd:~/calcite/example/csv# cat target/classpath.txt
>
> 

Re: A NPE when rounding a nullable numeric

2019-06-17 Thread Muhammad Gelbana
*This is the optimized generated code*
final Object[] current = (Object[]) inputEnumerator.current();
final Integer inp0_ = (Integer) current[0];
final Integer inp1_ = (Integer) current[1];
final java.math.BigDecimal v1 = new java.math.BigDecimal(
  inp0_.intValue() / inp1_.intValue()); *// NPE*
return inp0_ == null || inp1_ == null ? (java.math.BigDecimal) null :
org.apache.calcite.runtime.SqlFunctions.sround(v1, 2);

*This is the non-optimized one*
final Object[] current = (Object[]) inputEnumerator.current();
final Integer inp0_ = (Integer) current[0];
final boolean inp0__unboxed = inp0_ == null;
final Integer inp1_ = (Integer) current[1];
final boolean inp1__unboxed = inp1_ == null;
final boolean v = inp0__unboxed || inp1__unboxed;
final int inp0__unboxed0 = inp0_.intValue(); *// NPE*
final int inp1__unboxed0 = inp1_.intValue(); *// NPE*
final int v0 = inp0__unboxed0 / inp1__unboxed0;
final java.math.BigDecimal v1 = new java.math.BigDecimal(
  v0);
final java.math.BigDecimal v2 = v ? (java.math.BigDecimal) null :
org.apache.calcite.runtime.SqlFunctions.sround(v1, 2);
return v2;

I'm still trying to understand how to fix this. I assume I need to avoid
creating an Expression for "final int inp0__unboxed0 = inp0_.intValue()"
and "final int inp1__unboxed0 = inp1_.intValue()". Any hints ?

Thanks,
Gelbana


On Sun, Jun 16, 2019 at 9:28 PM Muhammad Gelbana 
wrote:

> Of course, my bad!
>
> -- Regular cast syntax
> SELECT ROUND(CAST((X/Y) AS NUMERIC), 2) FROM (VALUES (1, 2), (NULLIF(5,
> 5), NULLIF(5, 5))) A(X, Y)
>
> Thanks,
> Gelbana
>
>
> On Sun, Jun 16, 2019 at 8:43 PM Julian Hyde 
> wrote:
>
>> Can you reproduce it with regular cast syntax? Make it as easy as
>> possible for others to help you.
>>
>> Julian
>>
>> > On Jun 16, 2019, at 11:24 AM, Muhammad Gelbana 
>> wrote:
>> >
>> > The following query throws a NPE in the generated code because it
>> assumes
>> > the divided value to be an initialized Java object (Not null), which is
>> > fine for the first row, but not for the second.
>> >
>> > SELECT ROUND((X/Y)::NUMERIC, 2)
>> > FROM (VALUES (1, 2), (NULLIF(5, 5), NULLIF(5, 5))) A(X, Y)
>> >
>> > If I modify the query a little bit, it runs ok:
>> > -- No casting
>> > SELECT ROUND((X/Y), 2) FROM (VALUES (1, 2), (NULLIF(5, 5), NULLIF(5,
>> 5)))
>> > A(X, Y)
>> >
>> > -- No rounding
>> > SELECT (X/Y)::NUMERIC FROM (VALUES (1, 2), (NULLIF(5, 5), NULLIF(5, 5)))
>> > A(X, Y)
>> >
>> > What could be causing this ? Any hints ?
>> > And was this reported before or should I create a new ticket ?
>> >
>> > Thanks,
>> > Gelbana
>>
>


Re: A NPE when rounding a nullable numeric

2019-06-16 Thread Muhammad Gelbana
Of course, my bad!

-- Regular cast syntax
SELECT ROUND(CAST((X/Y) AS NUMERIC), 2) FROM (VALUES (1, 2), (NULLIF(5, 5),
NULLIF(5, 5))) A(X, Y)

Thanks,
Gelbana


On Sun, Jun 16, 2019 at 8:43 PM Julian Hyde  wrote:

> Can you reproduce it with regular cast syntax? Make it as easy as possible
> for others to help you.
>
> Julian
>
> > On Jun 16, 2019, at 11:24 AM, Muhammad Gelbana 
> wrote:
> >
> > The following query throws a NPE in the generated code because it assumes
> > the divided value to be an initialized Java object (Not null), which is
> > fine for the first row, but not for the second.
> >
> > SELECT ROUND((X/Y)::NUMERIC, 2)
> > FROM (VALUES (1, 2), (NULLIF(5, 5), NULLIF(5, 5))) A(X, Y)
> >
> > If I modify the query a little bit, it runs ok:
> > -- No casting
> > SELECT ROUND((X/Y), 2) FROM (VALUES (1, 2), (NULLIF(5, 5), NULLIF(5, 5)))
> > A(X, Y)
> >
> > -- No rounding
> > SELECT (X/Y)::NUMERIC FROM (VALUES (1, 2), (NULLIF(5, 5), NULLIF(5, 5)))
> > A(X, Y)
> >
> > What could be causing this ? Any hints ?
> > And was this reported before or should I create a new ticket ?
> >
> > Thanks,
> > Gelbana
>


A NPE when rounding a nullable numeric

2019-06-16 Thread Muhammad Gelbana
The following query throws a NPE in the generated code because it assumes
the divided value to be an initialized Java object (Not null), which is
fine for the first row, but not for the second.

SELECT ROUND((X/Y)::NUMERIC, 2)
FROM (VALUES (1, 2), (NULLIF(5, 5), NULLIF(5, 5))) A(X, Y)

If I modify the query a little bit, it runs ok:
-- No casting
SELECT ROUND((X/Y), 2) FROM (VALUES (1, 2), (NULLIF(5, 5), NULLIF(5, 5)))
A(X, Y)

-- No rounding
SELECT (X/Y)::NUMERIC FROM (VALUES (1, 2), (NULLIF(5, 5), NULLIF(5, 5)))
A(X, Y)

What could be causing this ? Any hints ?
And was this reported before or should I create a new ticket ?

Thanks,
Gelbana


[jira] [Created] (CALCITE-3128) Joining two tables producing only NULLs will return 0 rows

2019-06-14 Thread Muhammad Gelbana (JIRA)
Muhammad Gelbana created CALCITE-3128:
-

 Summary: Joining two tables producing only NULLs will return 0 rows
 Key: CALCITE-3128
 URL: https://issues.apache.org/jira/browse/CALCITE-3128
 Project: Calcite
  Issue Type: Bug
  Components: core
Affects Versions: 1.20.0
Reporter: Muhammad Gelbana


The following queries will return 0 rows while they're expected to return rows 
with NULLs in them.

{code:sql}
SELECT *
FROM (SELECT NULLIF(5, 5)) a, (SELECT NULLIF(5, 5)) b
{code}
{code:sql}
SELECT *
FROM (VALUES (NULLIF(5, 5)), (NULLIF(5, 5))) a, (VALUES (NULLIF(5, 5)), 
(NULLIF(5, 5))) b
{code}




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Joining two tables while each table returns a single row of nulls

2019-06-13 Thread Muhammad Gelbana
I run the following query against MySQL, PostgreSQL and Calcite and found
that Calcite returns 0 rows, while the other DBMSs return a single row of
two NULLs

SELECT * FROM (SELECT NULLIF(5, 5)) a, (SELECT NULLIF(5, 5)) b

Is this possibly a bug ? Is there a configuration or something to tweak to
return the same output MySQL and PostgreSQL without modifying the query ?

Thanks,
Gelbana


Re: how to decide the result type of aggregate operation

2019-06-12 Thread Muhammad Gelbana
You can provide an instance of
"org.apache.calcite.rel.type.RelDataTypeSystemImpl" to the
"org.apache.calcite.tools.Frameworks.ConfigBuilder" to return a different
type for "some" functions such as SUM and AVG. I'd love to know if there is
another way to override the output type of a any function but I don't know
if there is such a thing.

Thanks,
Gelbana


On Wed, Jun 12, 2019 at 8:46 AM Maria  wrote:

> Embarrassing, my question is: how are these aggregate result types decided ?
>
>
> At 2019-06-12 13:40:42, "Maria"  wrote:
> >Hi all: I'm using the es-adapter to do some aggregating, such as
> count(), sum(), avg(). But these ops will generate new virtual
> columns; maybe something is done by Calcite itself to deduce an appropriate
> datatype for them?
> >I saw some description in http://calcite.apache.org/docs/reference.html:
> >   "Calcite deduces the parameter types and result type of a function
> from the parameter and return types of the Java method that implements it. "
> >So the question is does this matter?
> >
> >
> >Thanks for any reply.
> >Best Regards.
> >
>


Re: Re: How to avoid SUM0 or disable a rule ?

2019-06-11 Thread Muhammad Gelbana
Sorry folks. False alarm. The aggregator works fine, but my table scan was
faulty.

Thanks,
Gelbana


On Tue, Jun 11, 2019 at 9:24 PM Muhammad Gelbana 
wrote:

> With pleasure. I'll try to fix it first to confirm that my assumption is
> correct.
>
> Thanks,
> Gelbana
>
>
> On Tue, Jun 11, 2019 at 8:44 PM Haisheng Yuan 
> wrote:
>
>> Cool, can you create an issue for this bug?
>>
>> - Haisheng
>>
>> ------
>> From: Muhammad Gelbana
>> Date: 2019-06-12 02:39:20
>> To: dev@calcite.apache.org (dev@calcite.apache.org)
>> Cc: Haisheng Yuan
>> Subject: Re: How to avoid SUM0 or disable a rule ?
>>
>> I believe it's a bug because DoubleSum (Also LongSum and IntSum) are
>> initialized with a value of 0 [1]
>>
>> [1]
>> https://github.com/apache/calcite/blob/a3c56be7bccc58859524ba39e5b30b7078f97d00/core/src/main/java/org/apache/calcite/interpreter/AggregateNode.java#L459
>>
>> Thanks,
>> Gelbana
>>
>>
>> On Tue, Jun 11, 2019 at 8:35 PM Vamshi Krishna <
>> vamshi.v.kris...@gmail.com> wrote:
>>
>>> It's done in the SqlToRelConverter.java:5427. I don't think there is a
>>> way currently to disable it (i may be wrong).
>>> There should be a configurable option to disable this.
>>>
>>>
>>> -Vamshi
>>>
>>> On Tue, Jun 11, 2019 at 2:31 PM Muhammad Gelbana 
>>> wrote:
>>> >
>>> > I just cleared the reducible aggregate calls collection at runtime (to
>>> void
>>> > the rule) and I'm still facing the same problem. This obviously has
>>> nothing
>>> > to do with the rule. I'll investigate further. Thanks for your help.
>>> >
>>> > Thanks,
>>> > Gelbana
>>> >
>>> >
>>> > On Tue, Jun 11, 2019 at 8:16 PM Haisheng Yuan 
>>> > wrote:
>>> >
>>> > > Hi Gelbana,
>>> > >
>>> > > You can construct your own AggregateReduceFunctionsRule instance by
>>> > > specifying the functions you want to reduce:
>>> > >
>>> > > public AggregateReduceFunctionsRule(Class<? extends Aggregate> aggregateClass,
>>> > > RelBuilderFactory relBuilderFactory, EnumSet<SqlKind> functionsToReduce) {
>>> > >
>>> > >
>>> > > But I think the issue you described might be a bug, can you open a
>>> JIRA
>>> > > issue with a test case if possible?
>>> > >
>>> > > - Haisheng
>>> > >
>>> > > --
>>> > > From: Muhammad Gelbana
>>> > > Date: 2019-06-12 01:46:28
>>> > > To:
>>> > > Subject: How to avoid SUM0 or disable a rule ?
>>> > >
>>> > > Executing the following query produces unexpected results
>>> > >
>>> > > SELECT
>>> > > "Calcs"."key" AS "key",
>>> > > SUM("Calcs"."num2") AS "sum:num2:ok",
>>> > > SUM("Calcs"."num2") AS "$__alias__0"
>>> > > FROM "TestV1"."Calcs" "Calcs"
>>> > > GROUP BY 1
>>> > > ORDER BY 3 ASC NULLS FIRST
>>> > > LIMIT 10
>>> > >
>>> > > The returned results contains 0 instead of NULLs while running the
>>> query
>>> > > against a PostgreSQL instance returns NULLs as expected.
>>> > >
>>> > >
>>> > > The reason for that is that Calcite uses SUM0 implementation instead
>>> of SUM.
>>> > > I found that the AggregateReduceFunctionsRule rule is the one that
>>> converts
>>> > > the SUM aggregate call to SUM0, so is there a way to remove this rule
>>> > > before planning ?
>>> > >
>>> > > Thanks,
>>> > > Gelbana
>>> > >
>>> > >
>>>
>>
>>


Re: Re: How to avoid SUM0 or disable a rule ?

2019-06-11 Thread Muhammad Gelbana
With pleasure. I'll try to fix it first to confirm that my assumption is
correct.

Thanks,
Gelbana


On Tue, Jun 11, 2019 at 8:44 PM Haisheng Yuan 
wrote:

> Cool, can you create an issue for this bug?
>
> - Haisheng
>
> --
> From: Muhammad Gelbana
> Date: 2019-06-12 02:39:20
> To: dev@calcite.apache.org (dev@calcite.apache.org) >
> Cc: Haisheng Yuan
> Subject: Re: How to avoid SUM0 or disable a rule ?
>
> I believe it's a bug because DoubleSum (Also LongSum and IntSum) are
> initialized with a value of 0 [1]
>
> [1]
> https://github.com/apache/calcite/blob/a3c56be7bccc58859524ba39e5b30b7078f97d00/core/src/main/java/org/apache/calcite/interpreter/AggregateNode.java#L459
>
> Thanks,
> Gelbana
>
>
> On Tue, Jun 11, 2019 at 8:35 PM Vamshi Krishna 
> wrote:
>
>> It's done in the SqlToRelConverter.java:5427. I don't think there is a
>> way currently to disable it (i may be wrong).
>> There should be a configurable option to disable this.
>>
>>
>> -Vamshi
>>
>> On Tue, Jun 11, 2019 at 2:31 PM Muhammad Gelbana 
>> wrote:
>> >
>> > I just cleared the reducible aggregate calls collection at runtime (to
>> void
>> > the rule) and I'm still facing the same problem. This obviously has
>> nothing
>> > to do with the rule. I'll investigate further. Thanks for your help.
>> >
>> > Thanks,
>> > Gelbana
>> >
>> >
>> > On Tue, Jun 11, 2019 at 8:16 PM Haisheng Yuan 
>> > wrote:
>> >
>> > > Hi Gelbana,
>> > >
>> > > You can construct your own AggregateReduceFunctionsRule instance by
>> > > specifying the functions you want to reduce:
>> > >
>> > > public AggregateReduceFunctionsRule(Class
>> aggregateClass,
>> > > RelBuilderFactory relBuilderFactory, EnumSet
>> functionsToReduce) {
>> > >
>> > >
>> > > But I think the issue you described might be a bug, can you open a
>> JIRA
>> > > issue with a test case if possible?
>> > >
>> > > - Haisheng
>> > >
>> > > --
>> > > From: Muhammad Gelbana
>> > > Date: 2019-06-12 01:46:28
>> > > To:
>> > > Subject: How to avoid SUM0 or disable a rule ?
>> > >
>> > > Executing the following query produces unexpected results
>> > >
>> > > SELECT
>> > > "Calcs"."key" AS "key",
>> > > SUM("Calcs"."num2") AS "sum:num2:ok",
>> > > SUM("Calcs"."num2") AS "$__alias__0"
>> > > FROM "TestV1"."Calcs" "Calcs"
>> > > GROUP BY 1
>> > > ORDER BY 3 ASC NULLS FIRST
>> > > LIMIT 10
>> > >
>> > > The returned results contains 0 instead of NULLs while running the
>> query
>> > > against a PostgreSQL instance returns NULLs as expected.
>> > >
>> > >
>> > > The reason for that is that Calcite uses SUM0 implementation instead
>> of SUM.
>> > > I found that the AggregateReduceFunctionsRule rule is the one that
>> converts
>> > > the SUM aggregate call to SUM0, so is there a way to remove this rule
>> > > before planning ?
>> > >
>> > > Thanks,
>> > > Gelbana
>> > >
>> > >
>>
>
>


Re: How to avoid SUM0 or disable a rule ?

2019-06-11 Thread Muhammad Gelbana
I believe it's a bug because DoubleSum (Also LongSum and IntSum) are
initialized with a value of 0 [1]

[1]
https://github.com/apache/calcite/blob/a3c56be7bccc58859524ba39e5b30b7078f97d00/core/src/main/java/org/apache/calcite/interpreter/AggregateNode.java#L459
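To illustrate the difference being reported — SQL SUM over all-NULL input is NULL, while SUM0's accumulator starts at 0 and therefore returns 0 — here is a minimal sketch in plain Java (the names are illustrative, not Calcite's):

```java
public class SumSemantics {
  // SQL SUM: NULLs are ignored; with no non-NULL input the result is NULL.
  static Long sum(Integer... values) {
    Long acc = null;
    for (Integer v : values) {
      if (v != null) {
        acc = (acc == null ? 0L : acc) + v;
      }
    }
    return acc;
  }

  // SUM0: the accumulator is initialized to 0, so all-NULL input yields 0 --
  // the behavior the interpreter's DoubleSum/LongSum/IntSum accumulators show.
  static long sum0(Integer... values) {
    long acc = 0L;
    for (Integer v : values) {
      if (v != null) {
        acc += v;
      }
    }
    return acc;
  }

  public static void main(String[] args) {
    System.out.println(sum(null, null));   // null -> what PostgreSQL returns
    System.out.println(sum0(null, null));  // 0    -> what was observed here
  }
}
```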

Thanks,
Gelbana


On Tue, Jun 11, 2019 at 8:35 PM Vamshi Krishna 
wrote:

> It's done in the SqlToRelConverter.java:5427. I don't think there is a
> way currently to disable it (i may be wrong).
> There should be a configurable option to disable this.
>
>
> -Vamshi
>
> On Tue, Jun 11, 2019 at 2:31 PM Muhammad Gelbana 
> wrote:
> >
> > I just cleared the reducible aggregate calls collection at runtime (to
> void
> > the rule) and I'm still facing the same problem. This obviously has
> nothing
> > to do with the rule. I'll investigate further. Thanks for your help.
> >
> > Thanks,
> > Gelbana
> >
> >
> > On Tue, Jun 11, 2019 at 8:16 PM Haisheng Yuan 
> > wrote:
> >
> > > Hi Gelbana,
> > >
> > > You can construct your own AggregateReduceFunctionsRule instance by
> > > specifying the functions you want to reduce:
> > >
> > > public AggregateReduceFunctionsRule(Class
> aggregateClass,
> > > RelBuilderFactory relBuilderFactory, EnumSet
> functionsToReduce) {
> > >
> > >
> > > But I think the issue you described might be a bug, can you open a JIRA
> > > issue with a test case if possible?
> > >
> > > - Haisheng
> > >
> > > --
> > > From: Muhammad Gelbana
> > > Date: 2019-06-12 01:46:28
> > > To:
> > > Subject: How to avoid SUM0 or disable a rule ?
> > >
> > > Executing the following query produces unexpected results
> > >
> > > SELECT
> > > "Calcs"."key" AS "key",
> > > SUM("Calcs"."num2") AS "sum:num2:ok",
> > > SUM("Calcs"."num2") AS "$__alias__0"
> > > FROM "TestV1"."Calcs" "Calcs"
> > > GROUP BY 1
> > > ORDER BY 3 ASC NULLS FIRST
> > > LIMIT 10
> > >
> > > The returned results contains 0 instead of NULLs while running the
> query
> > > against a PostgreSQL instance returns NULLs as expected.
> > >
> > >
> > > The reason for that is that Calcite uses SUM0 implementation instead
> of SUM.
> > > I found that the AggregateReduceFunctionsRule rule is the one that
> converts
> > > the SUM aggregate call to SUM0, so is there a way to remove this rule
> > > before planning ?
> > >
> > > Thanks,
> > > Gelbana
> > >
> > >
>


Re: How to avoid SUM0 or disable a rule ?

2019-06-11 Thread Muhammad Gelbana
I just cleared the reducible aggregate calls collection at runtime (to void
the rule) and I'm still facing the same problem. This obviously has nothing
to do with the rule. I'll investigate further. Thanks for your help.

Thanks,
Gelbana


On Tue, Jun 11, 2019 at 8:16 PM Haisheng Yuan 
wrote:

> Hi Gelbana,
>
> You can construct your own AggregateReduceFunctionsRule instance by
> specifying the functions you want to reduce:
>
> public AggregateReduceFunctionsRule(Class<? extends Aggregate> aggregateClass,
> RelBuilderFactory relBuilderFactory, EnumSet<SqlKind> functionsToReduce) {
>
>
> But I think the issue you described might be a bug, can you open a JIRA
> issue with a test case if possible?
>
> - Haisheng
>
> ------
> From: Muhammad Gelbana
> Date: 2019-06-12 01:46:28
> To:
> Subject: How to avoid SUM0 or disable a rule ?
>
> Executing the following query produces unexpected results
>
> SELECT
> "Calcs"."key" AS "key",
> SUM("Calcs"."num2") AS "sum:num2:ok",
> SUM("Calcs"."num2") AS "$__alias__0"
> FROM "TestV1"."Calcs" "Calcs"
> GROUP BY 1
> ORDER BY 3 ASC NULLS FIRST
> LIMIT 10
>
> The returned results contains 0 instead of NULLs while running the query
> against a PostgreSQL instance returns NULLs as expected.
>
>
> The reason for that is that Calcite uses SUM0 implementation instead of SUM.
> I found that the AggregateReduceFunctionsRule rule is the one that converts
> the SUM aggregate call to SUM0, so is there a way to remove this rule
> before planning ?
>
> Thanks,
> Gelbana
>
>
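The constructor quoted above takes the set of aggregate kinds to reduce; instantiating the rule with SqlKind.SUM left out of that set keeps SUM from being rewritten to SUM0. A sketch against the 1.20-era API — the exact set of kinds and factory choice are assumptions to verify against the Calcite version in use:

```java
import java.util.EnumSet;
import org.apache.calcite.plan.RelOptRule;
import org.apache.calcite.rel.core.RelFactories;
import org.apache.calcite.rel.logical.LogicalAggregate;
import org.apache.calcite.rel.rules.AggregateReduceFunctionsRule;
import org.apache.calcite.sql.SqlKind;

// Sketch: a rule instance that still reduces AVG/STDDEV/VAR calls but leaves
// SUM alone, so SUM is never rewritten to SUM0.
EnumSet<SqlKind> toReduce = EnumSet.of(
    SqlKind.AVG, SqlKind.STDDEV_POP, SqlKind.STDDEV_SAMP,
    SqlKind.VAR_POP, SqlKind.VAR_SAMP);          // note: no SqlKind.SUM
RelOptRule noSumReduce = new AggregateReduceFunctionsRule(
    LogicalAggregate.class, RelFactories.LOGICAL_BUILDER, toReduce);
```

This instance would be registered with the planner in place of the default AggregateReduceFunctionsRule.INSTANCE.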


Is it essential to unparse to the same original syntax ?

2019-06-08 Thread Muhammad Gelbana
I created a PR [1] to support the PostgreSQL :: casting operator. The way I
did this is by creating a new 'SqlBinaryOperator' child. This new child
wraps an instance of the 'SqlCastFunction' to reuse its
'getOperandCountRange',
'inferReturnType', 'checkOperandTypes' and 'getMonotonicity' logic, and of
course unparses to the original input (i.e. op1 :: type).

But then the PR was commented to reuse the 'SqlCastFunction' type instead
of having a totally new 'SqlBinaryOperator', which won't unparse properly
because 'op1 :: type' will be unparsed as 'CAST(op1 AS type)'.

Is this a big deal ? I prefer to preserve the original format for the parsed
string, but to do that I'll have to extend 'SqlCastFunction' to override
its 'unparse' implementation (I don't remember why I didn't do that; the
PR is about 3 months old).

So is preserving the original structure necessary, recommended or a must
while unparsing ?
If there are any related restrictions I need to follow while working on
this, please let me know.

[1] https://github.com/apache/calcite/pull/1066
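For reference, preserving the original '::' syntax on unparse could look roughly like the sketch below — a hypothetical subclass; whether SqlCastFunction exposes a suitable constructor to extend this way depends on the Calcite version:

```java
import org.apache.calcite.sql.SqlCall;
import org.apache.calcite.sql.SqlWriter;
import org.apache.calcite.sql.fun.SqlCastFunction;

// Hypothetical operator: type-checks like CAST but unparses as "operand :: type".
// Treat this as a shape, not a drop-in implementation.
public class SqlCastToOperator extends SqlCastFunction {
  @Override public void unparse(SqlWriter writer, SqlCall call,
      int leftPrec, int rightPrec) {
    call.operand(0).unparse(writer, leftPrec, getLeftPrec());
    writer.print(" :: ");
    call.operand(1).unparse(writer, getRightPrec(), rightPrec);
  }
}
```

With an override like this, 'x :: text' would round-trip with the operator syntax instead of falling back to 'CAST(x AS TEXT)'.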

Thanks,
Gelbana


Re: [DISCUSS] Towards Calcite 1.20.0

2019-06-07 Thread Muhammad Gelbana
I'll keep a close eye on those two PRs [1][2] in case anyone has further
comments. One of them [2] has been around for months now, so I'd appreciate it
if someone could finish reviewing it. Danny already pointed out some concerns
and I believe I addressed them.

Thanks Michael for your recent comment. I fixed the typo.

[1] https://github.com/apache/calcite/pull/1242
[2] https://github.com/apache/calcite/pull/1066

Thanks,
Gelbana


On Fri, Jun 7, 2019 at 9:03 PM Michael Mior  wrote:

> I'm not sure it can really be a blocker for the release since it's
> already been released. That said, we certainly would like to allow
> Drill the ability to upgrade. Since CALCITE-2798 isn't a functional
> change, I'd be open to reverting.
> --
> Michael Mior
> mm...@apache.org
>
> Le ven. 7 juin 2019 à 13:47, Bohdan Kazydub  a
> écrit :
> >
> > Hi all,
> >
> > I'm working on upgrading Calcite in Drill (from 1.18 to 1.20) and almost
> > all issues were resolved except CALCITE-3121
> > .
> > This issue appeared after the fix for CALCITE-2798
> > , and it causes a
> lot
> > of queries to hang in Drill.
> > Sorry for reporting it so late, it was hard to reproduce it in Calcite.
> >
> > Since hanging of VolcanoPlanner is critical issue, I think it may be a
> > blocker for the release.
> >
> > Can we revert the fix for CALCITE-2798
> >  to resolve it
> before
> > the release, since the fix for more general one may require more time?
> >
> > Regards Bohdan
> >
> >
> > On Fri, Jun 7, 2019 at 7:41 PM Julian Hyde  wrote:
> >
> > > +1
> > >
> > > I support fixing https://issues.apache.org/jira/browse/CALCITE-3119 <
> > > https://issues.apache.org/jira/browse/CALCITE-3119> before 1.20
> because
> > > it modifies APIs that we have added since 1.19; if we wait until after
> the
> > > release, we will have to keep them.
> > >
> > > Browsing https://github.com/apache/calcite/pulls <
> > > https://github.com/apache/calcite/pulls> it looks likely that quite a
> few
> > > PRs are ready. Committers, if you have a little time to review PRs and
> find
> > > ones that you consider ready, put them in. If all they need is cosmetic
> > > changes (e.g. an improved commit message, changes to formatting) feel
> free
> > > to make those fixups yourself.
> > >
> > > Julian
> > >
> > >
> > > > On Jun 7, 2019, at 8:06 AM, Michael Mior  wrote:
> > > >
> > > > I have reviewed and committed couple PRs and removed fix version of
> > > > 1.20.0 from all other issues. Given that it's Friday, I'm proposing
> > > > that I wait until Monday before freezing for release in case anyone
> > > > wants to push anything final through.
> > > > --
> > > > Michael Mior
> > > > mm...@apache.org
> > > >
> > > > Le ven. 31 mai 2019 à 20:03, Michael Mior  a
> écrit :
> > > >>
> > > >> Below is a link to open issues with fix version set to 1.20.0. I
> > > >> previously went through and removed the fix version for issues which
> > > >> will definitely not be ready.
> > > >>
> > > >>
> > >
> https://issues.apache.org/jira/issues/?jql=project%20%3D%20CALCITE%20AND%20status%20in%20(Open%2C%20%22In%20Progress%22)%20AND%20resolution%20%3D%20Unresolved%20AND%20fixVersion%20%3D%201.20.0%20ORDER%20BY%20priority%20DESC%2C%20updated%20DESC
> > > >>
> > > >> I don't think any of these are critical, but several have PRs which
> I
> > > >> believe should be ready to merge. A second set of eyes would be
> > > >> appreciated. Some of the rest also have PRs but they seem to need
> > > >> further work.
> > > >>
> > > >> https://github.com/apache/calcite/pull/1138
> > > >> https://github.com/apache/calcite/pull/1011
> > > >> https://github.com/apache/calcite/pull/1014
> > > >>
> > > >> --
> > > >> Michael Mior
> > > >> mm...@apache.org
> > > >>
> > > >> Le ven. 31 mai 2019 à 14:28, Julian Hyde  a
> écrit :
> > > >>>
> > > >>> How are we doing? What must-fix bugs remain?
> > > >>>
> > > >>> I asked Danny to fix some deprecation warnings, which he duly
> did[1],
> > > but now I think I was mistaken, because he did so by removing a bunch
> of
> > > methods whose arguments were the now-deprecated class SemiJoin. This
> has
> > > become a breaking change with not even a minor release notice, and I
> think
> > > we should back it out before 1.20. I’m going to re-open 3102 and
> declare it
> > > a blocker for 1.20. Sorry I screwed up, Danny! Let’s discuss in the
> JIRA
> > > case.
> > > >>>
> > > >>> Julian
> > > >>>
> > > >>> [1] https://issues.apache.org/jira/browse/CALCITE-3102 <
> > > https://issues.apache.org/jira/browse/CALCITE-3102>
> > > >>>
> > >  On May 28, 2019, at 5:18 AM, Yuzhao Chen 
> > > wrote:
> > > 
> > >  Thanks so much for your work, Michael,
> > > 
> > >  Let's get CALCITE-3055 into 1.20 version, because  it fix an
> > > important function regression. I will merge it in if finishes the
> review.
> > > 
> > > 

Re: Pluggable JDBC types

2019-06-06 Thread Muhammad Gelbana
You're absolutely right. User-defined types should be the way to go. I
believe it needs enhancement though, only to customize the returned column
type name as I mentioned here[1]

[1]
https://issues.apache.org/jira/browse/CALCITE-3108?focusedCommentId=16857993=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16857993

Thanks,
Gelbana


On Thu, Jun 6, 2019 at 3:00 PM Stamatis Zampetakis 
wrote:

> I see but I am not sure SqlTypeName is the way to go.
>
> Postgres has many built-in types [1] which do not appear in this
> enumeration.
> Other DBMS have also their own built-in types.
> Adding every possible type in SqlTypeName does not seem right.
>
> Unfortunately, I don't know what's the best way to proceed.
>
> [1] https://www.postgresql.org/docs/11/datatype.html
>
>
>
> On Tue, Jun 4, 2019 at 7:39 PM Muhammad Gelbana 
> wrote:
>
> > The only difference I need to achieve while handling both types, is the
> > returned column type name (ResultSet.getMetaData().getColumnTypeName(int
> > index)).
> > The returned value is VARCHAR even if the column type is a user defined
> > type with the alias TEXT.
> >
> > While getting the column type name using a real PostgreSQL connection
> for a
> > TEXT column, is TEXT, not VARCHAR.
> >
> > Thanks,
> > Gelbana
> >
> >
> > On Tue, Jun 4, 2019 at 6:23 PM Stamatis Zampetakis 
> > wrote:
> >
> > > I am not sure what problem exactly we are trying to solve here (sorry
> for
> > > that).
> > > From what I understood so far the requirement is to introduce a new
> > > built-in SQL type (i.e., TEXT).
> > > However, I am still trying to understand why do we need this.
> > > Are we going to treat TEXT and VARCHAR differently?
> > >
> > > On Tue, Jun 4, 2019 at 5:18 PM Muhammad Gelbana 
> > > wrote:
> > >
> > > > Thanks Lai, I believe your analysis is correct.
> > > >
> > > > Which brings up another question:
> > > > Is it ok if we add support for what I'm trying to do here ? I can
> > gladly
> > > > work on that but I need to know if it will be accepted.
> > > >
> > > > Thanks,
> > > > Gelbana
> > > >
> > > >
> > > > On Tue, Jun 4, 2019 at 8:38 AM Lai Zhou  wrote:
> > > >
> > > > > @Muhammad Gelbana,I think you just register an alias-name 'TEXT'
> for
> > > the
> > > > > SqlType  'VARCHAR'.
> > > > > The parser did the right thing here, see
> > > > >
> > > > >
> > > >
> > >
> >
> https://github.com/apache/calcite/blob/9721283bd0ce46a337f51a3691585cca8003e399/core/src/main/java/org/apache/calcite/sql/validate/SqlValidatorImpl.java#L1566
> > > > > When the parser encountered a 'text' SqlIdentifier, it would get
> the
> > > type
> > > > > from the rootSchema, the type was SqlTypeName.VARCHAR here , that
> you
> > > > > registered before.
> > > > > If you really need a new sqlType named 'text' rather than an
> > > alias-name,
> > > > I
> > > > > guess you need to introduce a new kind of SqlTypeName .
> > > > >
> > > > >
> > > > >
> > > > >
> > > > > On Mon, Jun 3, 2019 at 6:54 PM Muhammad Gelbana  wrote:
> > > > >
> > > > > > Is that different from what I mentioned in my Jira comment ? Here
> > it
> > > is
> > > > > > again:
> > > > > >
> > > > > > Connection connection =
> > DriverManager.getConnection("jdbc:calcite:",
> > > > > info);
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> connection.unwrap(CalciteConnection.class).getRootSchema().unwrap(CalciteSchema.class).add("
> > > > > > *TEXT*", new RelProtoDataType() {
> > > > > >
> > > > > > @Override
> > > > > >     public RelDataType apply(RelDataTypeFactory factory)
> {
> > > > > > return
> > > > > >
> > > factory.createTypeWithNullability(factory.createJavaType(String.class),
> > > > > > false);
> > > > > > // return
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> factory.createTypeWithNullability(factory.createSqlType(S

Re: Type information from SQL parser?

2019-06-05 Thread Muhammad Gelbana
Parsing alone can't get you types. You'll have to validate (let Calcite
discover your schemas, tables and column metadata). A quick look in
SqlNode showed no trace of types information (other than the SqlKind
enumeration which doesn't seem like what you're looking for).

Therefore I believe you'll have to convert the SqlNode tree to RelNode.
Sample code:

Planner planner = Frameworks.getPlanner(frameworkConfig);
SqlNode parsed = planner.parse(query);
SqlNode validated = planner.validate(parsed);
RelRoot root = planner.rel(validated); // Convert SqlNode tree to RelNode

"root.rel" should have what you need.

Thanks,
Gelbana


On Thu, Jun 6, 2019 at 12:02 AM Scott McKinney 
wrote:

> Hi.  I'm reviewing Calcite for a project and I'm having difficulty wading
> through the API. Roughly, I want the following functionality from the
> Calcite API:
>
> var schema = parseDDL(RAW_DDL); // SQL DDL or any type of Calcite
> supported schema
> var query = parseQuery("SELECT c1, c2 FROM t1 WHERE c2='value'", schema);
> var selectFields = query.getSelectFields();for(var field: selectFields) {
>   var name = field.getName();
>   var type = field.getType(); // <~~~ want this in terms of `t1` from ddl
>   ...}
>
> The type information for the select list in terms of the tables etc. in the
> DDL is what I'm after.
>
> Is this possible?  Thanks!
>


Re: Pluggable JDBC types

2019-06-04 Thread Muhammad Gelbana
The only difference I need to achieve while handling both types is the
returned column type name (ResultSet.getMetaData().getColumnTypeName(int
index)).
The returned value is VARCHAR even if the column type is a user defined
type with the alias TEXT.

Meanwhile, getting the column type name over a real PostgreSQL connection for a
TEXT column returns TEXT, not VARCHAR.

Thanks,
Gelbana


On Tue, Jun 4, 2019 at 6:23 PM Stamatis Zampetakis 
wrote:

> I am not sure what problem exactly we are trying to solve here (sorry for
> that).
> From what I understood so far the requirement is to introduce a new
> built-in SQL type (i.e., TEXT).
> However, I am still trying to understand why do we need this.
> Are we going to treat TEXT and VARCHAR differently?
>
> On Tue, Jun 4, 2019 at 5:18 PM Muhammad Gelbana 
> wrote:
>
> > Thanks Lai, I believe your analysis is correct.
> >
> > Which brings up another question:
> > Is it ok if we add support for what I'm trying to do here ? I can gladly
> > work on that but I need to know if it will be accepted.
> >
> > Thanks,
> > Gelbana
> >
> >
> > On Tue, Jun 4, 2019 at 8:38 AM Lai Zhou  wrote:
> >
> > > @Muhammad Gelbana,I think you just register an alias-name 'TEXT' for
> the
> > > SqlType  'VARCHAR'.
> > > The parser did the right thing here, see
> > >
> > >
> >
> https://github.com/apache/calcite/blob/9721283bd0ce46a337f51a3691585cca8003e399/core/src/main/java/org/apache/calcite/sql/validate/SqlValidatorImpl.java#L1566
> > > When the parser encountered a 'text' SqlIdentifier, it would get the
> type
> > > from the rootSchema, the type was SqlTypeName.VARCHAR here , that you
> > > registered before.
> > > If you really need a new sqlType named 'text' rather than an
> alias-name,
> > I
> > > guess you need to introduce a new kind of SqlTypeName .
> > >
> > >
> > >
> > >
> > > On Mon, Jun 3, 2019 at 6:54 PM Muhammad Gelbana  wrote:
> > >
> > > > Is that different from what I mentioned in my Jira comment ? Here it
> is
> > > > again:
> > > >
> > > > Connection connection = DriverManager.getConnection("jdbc:calcite:",
> > > info);
> > > >
> > > >
> > >
> >
> connection.unwrap(CalciteConnection.class).getRootSchema().unwrap(CalciteSchema.class).add("
> > > > *TEXT*", new RelProtoDataType() {
> > > >
> > > > @Override
> > > > public RelDataType apply(RelDataTypeFactory factory) {
> > > > return
> > > >
> factory.createTypeWithNullability(factory.createJavaType(String.class),
> > > > false);
> > > > // return
> > > >
> > > >
> > >
> >
> factory.createTypeWithNullability(factory.createSqlType(SqlTypeName.VARCHAR),
> > > > false); // Has the same effect
> > > > }
> > > > });
> > > >
> > > > This still returns a column type name of VARCHAR, not *TEXT*.
> > > >
> > > > I tried providing the type through the model as the UdtTest does but
> > it's
> > > > giving me the same output.
> > > >
> > > > Thanks,
> > > > Gelbana
> > > >
> > > >
> > > > On Mon, Jun 3, 2019 at 9:59 AM Julian Hyde  wrote:
> > > >
> > > > > User-defined types are probably the way to go.
> > > > >
> > > > > > On Jun 2, 2019, at 8:28 PM, Muhammad Gelbana <
> m.gelb...@gmail.com>
> > > > > wrote:
> > > > > >
> > > > > > That was my first attempt and it worked, but Julian pointed out
> > that
> > > I
> > > > > can
> > > > > > support a type without modifying the parser (which I prefer) but
> I
> > > > > couldn't
> > > > > > get it to return the column type name as I wish.
> > > > > >
> > > > > > Thanks,
> > > > > > Gelbana
> > > > > >
> > > > > >
> > > > > > On Mon, Jun 3, 2019 at 3:13 AM Yuzhao Chen  >
> > > > wrote:
> > > > > >
> > > > > >> You don’t need to, just define a new type name in parser[1] and
> > > > > translate
> > > > > >> it to VARCHAR is okey.
> > > > > >>
> > > > > >> [1]
> > > > > >>
> > 

Re: Pluggable JDBC types

2019-06-04 Thread Muhammad Gelbana
Thanks Lai, I believe your analysis is correct.

Which brings up another question:
Is it ok if we add support for what I'm trying to do here ? I can gladly
work on that but I need to know if it will be accepted.

Thanks,
Gelbana


On Tue, Jun 4, 2019 at 8:38 AM Lai Zhou  wrote:

> @Muhammad Gelbana,I think you just register an alias-name 'TEXT' for the
> SqlType  'VARCHAR'.
> The parser did the right thing here, see
>
> https://github.com/apache/calcite/blob/9721283bd0ce46a337f51a3691585cca8003e399/core/src/main/java/org/apache/calcite/sql/validate/SqlValidatorImpl.java#L1566
> When the parser encountered a 'text' SqlIdentifier, it would get the type
> from the rootSchema, the type was SqlTypeName.VARCHAR here , that you
> registered before.
> If you really need a new sqlType named 'text' rather than an alias-name, I
> guess you need to introduce a new kind of SqlTypeName .
>
>
>
>
> On Mon, Jun 3, 2019 at 6:54 PM Muhammad Gelbana  wrote:
>
> > Is that different from what I mentioned in my Jira comment ? Here it is
> > again:
> >
> > Connection connection = DriverManager.getConnection("jdbc:calcite:",
> info);
> >
> >
> connection.unwrap(CalciteConnection.class).getRootSchema().unwrap(CalciteSchema.class).add("
> > *TEXT*", new RelProtoDataType() {
> >
> > @Override
> > public RelDataType apply(RelDataTypeFactory factory) {
> > return
> > factory.createTypeWithNullability(factory.createJavaType(String.class),
> > false);
> > // return
> >
> >
> factory.createTypeWithNullability(factory.createSqlType(SqlTypeName.VARCHAR),
> > false); // Has the same effect
> > }
> > });
> >
> > This still returns a column type name of VARCHAR, not *TEXT*.
> >
> > I tried providing the type through the model as the UdtTest does but it's
> > giving me the same output.
> >
> > Thanks,
> > Gelbana
> >
> >
> > On Mon, Jun 3, 2019 at 9:59 AM Julian Hyde  wrote:
> >
> > > User-defined types are probably the way to go.
> > >
> > > > On Jun 2, 2019, at 8:28 PM, Muhammad Gelbana 
> > > wrote:
> > > >
> > > > That was my first attempt and it worked, but Julian pointed out that
> I
> > > can
> > > > support a type without modifying the parser (which I prefer) but I
> > > couldn't
> > > > get it to return the column type name as I wish.
> > > >
> > > > Thanks,
> > > > Gelbana
> > > >
> > > >
> > > > On Mon, Jun 3, 2019 at 3:13 AM Yuzhao Chen 
> > wrote:
> > > >
> > > >> You don’t need to, just define a new type name in parser[1] and
> > > translate
> > > >> it to VARCHAR is okey.
> > > >>
> > > >> [1]
> > > >>
> > >
> >
> https://github.com/apache/calcite/blob/b0e83c469ff57257c1ea621ff943ca76f626a9b7/server/src/main/codegen/config.fmpp#L375
> > > >>
> > > >> Best,
> > > >> Danny Chan
> > > >> On Jun 3, 2019 at 6:09 AM +0800, Muhammad Gelbana  wrote:
> > > >>> That I understand now. But how can I support casting to TEXT and
> > having
> > > >> the
> > > >>> returned column type name as TEXT (ie. Not VARCHAR) ?
> > > >>>
> > > >>> Thanks,
> > > >>> Gelbana
> > > >>>
> > > >>>
> > > >>> On Sun, Jun 2, 2019 at 7:41 PM Julian Hyde 
> wrote:
> > > >>>
> > > >>>> The parser should only parse, not validate. This is a very
> important
> > > >>>> organizing principle for the parser.
> > > >>>>
> > > >>>> If I write “x :: text” or “x :: foo” it is up to the type system
> > > >>>> (implemented in the validator and elsewhere) to figure out whether
> > > >> “text”
> > > >>>> or “foo” are valid types.
> > > >>>>
> > > >>>> Logically, “x :: foo” is the same as “CAST(x AS foo)”. The parser
> > > >> should
> > > >>>> produce the same SqlCall in both cases. Then the parser’s job is
> > done.
> > > >>>>
> > > >>>> Julian
> > > >>>>
> > > >>>>
> > > >>>>> On Jun 2, 2019, at 6:42 AM, Muhammad Gelbana <
> m.gelb...@gmail.com>
> > > >>&

Re: Extracting all columns used in a query

2019-06-03 Thread Muhammad Gelbana
I don't know if there is an API for that, but visiting the parsed/validated
SqlNode tree can do what you asked for.
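A sketch of that approach with SqlBasicVisitor — it collects every identifier in the tree (table names as well as columns), so filtering and full qualification against the schema is still up to the caller:

```java
import java.util.LinkedHashSet;
import java.util.Set;
import org.apache.calcite.sql.SqlIdentifier;
import org.apache.calcite.sql.util.SqlBasicVisitor;

// Sketch: walk a parsed/validated SqlNode tree and collect identifiers.
final Set<String> identifiers = new LinkedHashSet<>();
sqlNode.accept(new SqlBasicVisitor<Void>() {
  @Override public Void visit(SqlIdentifier id) {
    identifiers.add(id.toString());   // e.g. "t1.c2" for a qualified column
    return null;
  }
});
```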

Thanks,
Gelbana


On Tue, Jun 4, 2019 at 12:12 AM Adam Rivelli  wrote:

> Hi all,
>
> I'm trying to extract all of the (fully qualified) columns used by a query
> - similar to the information provided by
> RelMetadataQuery.getTableReferences()
> <
> https://calcite.apache.org/apidocs/org/apache/calcite/rel/metadata/RelMetadataQuery.html#getTableReferences-org.apache.calcite.rel.RelNode-
> >,
> but for column references. Is this possible to do using Calcite?
>
> I've been looking through the API docs and experimenting with the API, but
> I haven't found a straightforward way of doing this. Any help or
> information is appreciated.
>
> Adam
>


Re: Pluggable JDBC types

2019-06-03 Thread Muhammad Gelbana
Is that different from what I mentioned in my Jira comment ? Here it is
again:

Connection connection = DriverManager.getConnection("jdbc:calcite:", info);
connection.unwrap(CalciteConnection.class).getRootSchema().unwrap(CalciteSchema.class).add("
*TEXT*", new RelProtoDataType() {

@Override
public RelDataType apply(RelDataTypeFactory factory) {
return
factory.createTypeWithNullability(factory.createJavaType(String.class),
false);
// return
factory.createTypeWithNullability(factory.createSqlType(SqlTypeName.VARCHAR),
false); // Has the same effect
}
});

This still returns a column type name of VARCHAR, not *TEXT*.

I tried providing the type through the model as the UdtTest does but it's
giving me the same output.

Thanks,
Gelbana


On Mon, Jun 3, 2019 at 9:59 AM Julian Hyde  wrote:

> User-defined types are probably the way to go.
>
> > On Jun 2, 2019, at 8:28 PM, Muhammad Gelbana 
> wrote:
> >
> > That was my first attempt and it worked, but Julian pointed out that I
> can
> > support a type without modifying the parser (which I prefer) but I
> couldn't
> > get it to return the column type name as I wish.
> >
> > Thanks,
> > Gelbana
> >
> >
> > On Mon, Jun 3, 2019 at 3:13 AM Yuzhao Chen  wrote:
> >
> >> You don’t need to, just define a new type name in parser[1] and
> translate
> >> it to VARCHAR is okey.
> >>
> >> [1]
> >>
> https://github.com/apache/calcite/blob/b0e83c469ff57257c1ea621ff943ca76f626a9b7/server/src/main/codegen/config.fmpp#L375
> >>
> >> Best,
> >> Danny Chan
> >> On Jun 3, 2019 at 6:09 AM +0800, Muhammad Gelbana  wrote:
> >>> That I understand now. But how can I support casting to TEXT and having
> >> the
> >>> returned column type name as TEXT (ie. Not VARCHAR) ?
> >>>
> >>> Thanks,
> >>> Gelbana
> >>>
> >>>
> >>> On Sun, Jun 2, 2019 at 7:41 PM Julian Hyde  wrote:
> >>>
> >>>> The parser should only parse, not validate. This is a very important
> >>>> organizing principle for the parser.
> >>>>
> >>>> If I write “x :: text” or “x :: foo” it is up to the type system
> >>>> (implemented in the validator and elsewhere) to figure out whether
> >> “text”
> >>>> or “foo” are valid types.
> >>>>
> >>>> Logically, “x :: foo” is the same as “CAST(x AS foo)”. The parser
> >> should
> >>>> produce the same SqlCall in both cases. Then the parser’s job is done.
> >>>>
> >>>> Julian
> >>>>
> >>>>
> >>>>> On Jun 2, 2019, at 6:42 AM, Muhammad Gelbana 
> >>>> wrote:
> >>>>>
> >>>>> I'm trying to support the PostgreSQL TEXT type[1]. It's basically a
> >>>> VARCHAR.
> >>>>>
> >>>>> As Julian mentioned in his comment on Jira, I don't need to define a
> >>>>> keyword to achieve what I need so I tried exploring that and here is
> >>>> what I
> >>>>> observed so far:
> >>>>>
> >>>>> 1. If I define a new keyword in the parser, I face no trouble
> >> whatsoever
> >>>>> except for the numerous wiring I need to do for RexToLixTranslator,
> >>>>> JavaTypeFactoryImpl, SqlTypeAssignmentRules and SqlTypeName. I won't
> >> be
> >>>>> surprised if I'm missing anything but doing what I did at first
> >> managed to
> >>>>> get my queries through.
> >>>>>
> >>>>> 2. If I define the type by plugging it in through the root schema, I
> >> face
> >>>>> two problems: a) The field cannot be declared as nullable because the
> >>>> query
> >>>>> I'm using for testing gets data from (VALUES()) which doesn't produce
> >>>> null
> >>>>> values, so an exception is thrown. b) The returned column type name
> >> is
> >>>>> VARCHAR (although I declared the new plugged type name to be TEXT),
> >> the
> >>>>> returned type number is valid though (Types.VARCHAR = 12)
> >>>>>
> >>>>> I think I'm doing something wrong that causes (2.a) but (2.b) seems a
> >>>> like
> >>>>> a bug to me. What do you think ?
> >>>>>
> >>>>> [1] https://issues.apache.org/jira/browse/CALCITE-3108
> >>>>>
> >>>>> Thanks,
> >>>>> Gelbana
> >>>>
> >>>>
> >>
>
>


Re: Pluggable JDBC types

2019-06-02 Thread Muhammad Gelbana
That was my first attempt and it worked, but Julian pointed out that I can
support a type without modifying the parser (which I prefer) but I couldn't
get it to return the column type name as I wish.

Thanks,
Gelbana


On Mon, Jun 3, 2019 at 3:13 AM Yuzhao Chen  wrote:

> You don’t need to, just define a new type name in parser[1] and translate
> it to VARCHAR is okey.
>
> [1]
> https://github.com/apache/calcite/blob/b0e83c469ff57257c1ea621ff943ca76f626a9b7/server/src/main/codegen/config.fmpp#L375
>
> Best,
> Danny Chan
> On Jun 3, 2019, 6:09 AM +0800, Muhammad Gelbana wrote:
> > That I understand now. But how can I support casting to TEXT and having
> the
> > returned column type name as TEXT (ie. Not VARCHAR) ?
> >
> > Thanks,
> > Gelbana
> >
> >
> > On Sun, Jun 2, 2019 at 7:41 PM Julian Hyde  wrote:
> >
> > > The parser should only parse, not validate. This is a very important
> > > organizing principle for the parser.
> > >
> > > If I write “x :: text” or “x :: foo” it is up to the type system
> > > (implemented in the validator and elsewhere) to figure out whether
> “text”
> > > or “foo” are valid types.
> > >
> > > Logically, “x :: foo” is the same as “CAST(x AS foo)”. The parser
> should
> > > produce the same SqlCall in both cases. Then the parser’s job is done.
> > >
> > > Julian
> > >
> > >
> > > > On Jun 2, 2019, at 6:42 AM, Muhammad Gelbana 
> > > wrote:
> > > >
> > > > I'm trying to support the PostgreSQL TEXT type[1]. It's basically a
> > > VARCHAR.
> > > >
> > > > As Julian mentioned in his comment on Jira, I don't need to define a
> > > > keyword to achieve what I need so I tried exploring that and here is
> > > what I
> > > > observed so far:
> > > >
> > > > 1. If I define a new keyword in the parser, I face no trouble
> whatsoever
> > > > except for the numerous wiring I need to do for RexToLixTranslator,
> > > > JavaTypeFactoryImpl, SqlTypeAssignmentRules and SqlTypeName. I won't
> be
> > > > suprised if I'm missing anything but doing what I did at first
> managed to
> > > > get my queries through.
> > > >
> > > > 2. If I define the type by plugging it in through the root schema, I
> face
> > > > two problems: a) The field cannot be declared as nullable because the
> > > query
> > > > I'm using for testing gets data from (VALUES()) which doesn't produce
> > > null
> > > > values, so an exception is thrown. b) The returned column type name
> is
> > > > VARCHAR (although I delcared the new plugged type name to be TEXT),
> the
> > > > returned type number is valid though (Types.VARCHAR = 12)
> > > >
> > > > I think I'm doing something wrong that causes (2.a) but (2.b) seems a
> > > like
> > > > a bug to me. What do you think ?
> > > >
> > > > [1] https://issues.apache.org/jira/browse/CALCITE-3108
> > > >
> > > > Thanks,
> > > > Gelbana
> > >
> > >
>


Re: Pluggable JDBC types

2019-06-02 Thread Muhammad Gelbana
That I understand now. But how can I support casting to TEXT and have the
returned column type name be TEXT (i.e. not VARCHAR)?

Thanks,
Gelbana


On Sun, Jun 2, 2019 at 7:41 PM Julian Hyde  wrote:

> The parser should only parse, not validate. This is a very important
> organizing principle for the parser.
>
> If I write “x :: text” or “x :: foo” it is up to the type system
> (implemented in the validator and elsewhere) to figure out whether “text”
> or “foo” are valid types.
>
> Logically, “x :: foo” is the same as “CAST(x AS foo)”. The parser should
> produce the same SqlCall in both cases. Then the parser’s job is done.
>
> Julian
>
>
> > On Jun 2, 2019, at 6:42 AM, Muhammad Gelbana 
> wrote:
> >
> > I'm trying to support the PostgreSQL TEXT type[1]. It's basically a
> VARCHAR.
> >
> > As Julian mentioned in his comment on Jira, I don't need to define a
> > keyword to achieve what I need so I tried exploring that and here is
> what I
> > observed so far:
> >
> > 1. If I define a new keyword in the parser, I face no trouble whatsoever
> > except for the numerous wiring I need to do for RexToLixTranslator,
> > JavaTypeFactoryImpl, SqlTypeAssignmentRules and SqlTypeName. I won't be
> > suprised if I'm missing anything but doing what I did at first managed to
> > get my queries through.
> >
> > 2. If I define the type by plugging it in through the root schema, I face
> > two problems: a) The field cannot be declared as nullable because the
> query
> > I'm using for testing gets data from (VALUES()) which doesn't produce
> null
> > values, so an exception is thrown. b) The returned column type name is
> > VARCHAR (although I delcared the new plugged type name to be TEXT), the
> > returned type number is valid though (Types.VARCHAR = 12)
> >
> > I think I'm doing something wrong that causes (2.a) but (2.b) seems a
> like
> > a bug to me. What do you think ?
> >
> > [1] https://issues.apache.org/jira/browse/CALCITE-3108
> >
> > Thanks,
> > Gelbana
>
>


Pluggable JDBC types

2019-06-02 Thread Muhammad Gelbana
I'm trying to support the PostgreSQL TEXT type[1]. It's basically a VARCHAR.

As Julian mentioned in his comment on Jira, I don't need to define a
keyword to achieve what I need so I tried exploring that and here is what I
observed so far:

1. If I define a new keyword in the parser, I face no trouble whatsoever
except for the numerous wiring I need to do for RexToLixTranslator,
JavaTypeFactoryImpl, SqlTypeAssignmentRules and SqlTypeName. I won't be
surprised if I'm missing anything, but doing what I did at first managed to
get my queries through.

2. If I define the type by plugging it in through the root schema, I face
two problems: a) The field cannot be declared as nullable, because the query
I'm using for testing gets data from (VALUES()) which doesn't produce null
values, so an exception is thrown. b) The returned column type name is
VARCHAR (although I declared the new plugged type name to be TEXT); the
returned type number is valid though (Types.VARCHAR = 12).

I think I'm doing something wrong that causes (2.a), but (2.b) seems like
a bug to me. What do you think?

[1] https://issues.apache.org/jira/browse/CALCITE-3108

Thanks,
Gelbana


[jira] [Created] (CALCITE-3108) Babel parser should parse the PostgreSQL TEXT type

2019-06-01 Thread Muhammad Gelbana (JIRA)
Muhammad Gelbana created CALCITE-3108:
-

 Summary: Babel parser should parse the PostgreSQL TEXT type
 Key: CALCITE-3108
 URL: https://issues.apache.org/jira/browse/CALCITE-3108
 Project: Calcite
  Issue Type: Bug
  Components: babel, core
Affects Versions: 1.19.0
Reporter: Muhammad Gelbana
Assignee: Muhammad Gelbana
 Fix For: 1.20.0


Casting to the PostgreSQL TEXT type (i.e. VARCHAR) isn't currently
supported. The following query fails both parsing and execution.
{code:sql}SELECT EXPR$0::text FROM (VALUES (1, 2, 3)){code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: calcite close connection

2019-05-27 Thread Muhammad Gelbana
I assume you're opening connections to the underlying databases while
getting metadata (schemas/tables/columns) and scanning those tables. I
believe you'll have to review your connection cleanup there.

AFAIK, Calcite doesn't know about your underlying connections.

Thanks,
Gelbana
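As a general pattern (unrelated to Calcite's own APIs), wrapping each underlying connection in try-with-resources during metadata lookup and scanning guarantees cleanup even when an operation throws. A minimal stand-alone sketch, with a made-up FakeConnection standing in for a JDBC Connection:

```java
import java.util.ArrayList;
import java.util.List;

public class ConnectionCleanup {
    // Stand-in for java.sql.Connection; records whether close() was called.
    static class FakeConnection implements AutoCloseable {
        boolean closed = false;
        @Override public void close() { closed = true; }
    }

    // Every connection ever opened, so cleanup can be verified afterwards.
    static final List<FakeConnection> OPENED = new ArrayList<>();

    static FakeConnection open() {
        FakeConnection c = new FakeConnection();
        OPENED.add(c);
        return c;
    }

    // Each scan opens its own connection; try-with-resources guarantees
    // close() runs even if the body throws.
    static void scanTable() {
        try (FakeConnection c = open()) {
            // ... read metadata / rows through c ...
        }
    }
}
```

The same shape applies to a real java.sql.Connection, since Connection is AutoCloseable.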


On Mon, May 27, 2019 at 5:39 AM 勾王敏浩  wrote:

> Hi,
> I am currently using calcite to access postgre and other relational
> databases. But throws the exception "too many clients". While I have closed
> the calciteConnection after performing the operation. I want to know if the
> connections to the underlying databases have been closed after the
> connection of calcite has been closed. Thank you.
>
>
> Best,
> Wangminhao Gou


Re: JIRA Calcite dashboards

2019-05-26 Thread Muhammad Gelbana
Hopefully the "Calcite current version Unresolved" table of the "Apache
Calcite Release" dashboard can help us quickly identify ready PRs to speed
up the merging process.

Awesome work Stamatis, thanks!

Thanks,
Gelbana


On Mon, May 27, 2019 at 12:21 AM Michael Mior  wrote:

> Works now. Thanks!
> --
> Michael Mior
> mm...@apache.org
>
> Le dim. 26 mai 2019 à 17:50, Stamatis Zampetakis  a
> écrit :
> >
> > Thanks for checking Michael!
> >
> > I just changed the permissions for the filters involved in the JIRA. Can
> > you check again, please?
> >
> > On Sun, May 26, 2019 at 11:38 PM Michael Mior  wrote:
> >
> > > Thanks Stamatis! I could imagine this being helpful. However, I'm
> > > unable to view the release dashboard.
> > >
> > > I get the messages below:
> > >
> > > Looks like we can't show you the content of this gadget due to its
> > > configuration.
> > >
> > > The filter used isn't valid or it's restricted
> > >
> > > The filter configured for this gadget could not be retrieved. Please
> > > verify it is still valid on the issue navigator.
> > >
> > > --
> > > Michael Mior
> > > mm...@apache.org
> > >
> > > Le dim. 26 mai 2019 à 17:34, Stamatis Zampetakis  a
> > > écrit :
> > > >
> > > > Hello,
> > > >
> > > > I created two JIRA dashboards in order to track a bit better what is
> > > > happening in the project.
> > > >
> > > > Apache Calcite Release [1] which might be helpful for release
> managers to
> > > > have an overview of the ongoing release and take action when they
> deem
> > > > necessary.
> > > >
> > > > Apache Calcite Dev Overview [2] which might be helpful for
> contributors
> > > to
> > > > the project who want to follow progress on the tasks they are working
> > > > on/interested in.
> > > >
> > > > At the moment, anybody who can browse the Calcite project can access
> the
> > > > dashboards. If you want to enlarge or restrict the visibility to
> certain
> > > > roles let me know.
> > > >
> > > > Any feedback is much appreciated!
> > > >
> > > > Best,
> > > > Stamatis
> > > >
> > > > [1]
> > > >
> > >
> https://issues.apache.org/jira/secure/Dashboard.jspa?selectPageId=12333950
> > > > [2]
> > > >
> > >
> https://issues.apache.org/jira/secure/Dashboard.jspa?selectPageId=12333951
> > >
>


Re: Question related to Schema and TableFactory

2019-05-15 Thread Muhammad Gelbana
The CSV example has what you're looking for.

Thanks,
Gelbana


On Wed, May 15, 2019 at 4:19 PM Naveen Kumar  wrote:

> Hi,
>
> Can I generate relational node tree, if i just have schemas.
> Is TableFactory a essential part for query planner?
>
> Conceptually Schema should be enough to create a query planner and generate
> relational tree, If that is the case, can you help me with a sample code on
> how query planner can work with just schema to generate relational tree.
>
> Thanks
> Naveen
>


How to implement a binary operator using a single method for overloaded forms ?

2019-05-13 Thread Muhammad Gelbana
The title must be very confusing but I couldn't come up with a better one.

I'm trying to implement the PostgreSQL posix regex binary operators[1]. The
operator can be case-insensitive (~*), negated (!~), both (!~*) or simply
case-sensitive (~). All four forms can be implemented with a single method:
*posixRegex(String op1, String op2, boolean isCaseSensitive, boolean
isNegated)*

I defined the implementor method in RexImpTable, but I can't get the
generated code to call the signature I mentioned. It's always attempting to
call *posixRegex(String op1, String op2)*, which doesn't exist. I'm not sure
what I need to do here. Would someone guide me here, please?

[1] https://issues.apache.org/jira/browse/CALCITE-3063

Thanks,
Gelbana
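For what it's worth, the single method proposed above is straightforward in plain Java. The sketch below follows the email's proposed signature; the class name and the choice of Matcher.find() semantics are my assumptions, not Calcite's actual implementation:

```java
import java.util.regex.Pattern;

public class PosixRegex {
    // Hypothetical helper covering all four PostgreSQL posix operators:
    // ~ (match), ~* (match, case-insensitive), !~ (no match),
    // !~* (no match, case-insensitive).
    public static boolean posixRegex(String op1, String op2,
                                     boolean isCaseSensitive, boolean isNegated) {
        int flags = isCaseSensitive ? 0 : Pattern.CASE_INSENSITIVE;
        // PostgreSQL's ~ matches if the pattern occurs anywhere in the
        // string, which corresponds to Matcher.find(), not matches().
        boolean found = Pattern.compile(op2, flags).matcher(op1).find();
        return isNegated ? !found : found;
    }
}
```

With this shape, the implementor only has to pass the two boolean flags as extra constant arguments when generating the call.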


[jira] [Created] (CALCITE-3063) Babel parse should parse PostgreSQL posix regular expressions

2019-05-10 Thread Muhammad Gelbana (JIRA)
Muhammad Gelbana created CALCITE-3063:
-

 Summary: Babel parse should parse PostgreSQL posix regular 
expressions
 Key: CALCITE-3063
 URL: https://issues.apache.org/jira/browse/CALCITE-3063
 Project: Calcite
  Issue Type: Bug
  Components: babel
Affects Versions: 1.19.0
Reporter: Muhammad Gelbana


Quoting from the referenced link below, posix operators are:
||Operator||Description||Example||
|{{~}}|Matches regular expression, case sensitive|{{'thomas' ~ '.*thomas.*'}}|
|{{~*}}|Matches regular expression, case insensitive|{{'thomas' ~* '.*Thomas.*'}}|
|{{!~}}|Does not match regular expression, case sensitive|{{'thomas' !~ '.*Thomas.*'}}|
|{{!~*}}|Does not match regular expression, case insensitive|{{'thomas' !~* '.*vadim.*'}}|

 

+Reference:+ 
https://www.postgresql.org/docs/11/functions-matching.html#FUNCTIONS-POSIX-REGEXP





Fixing parenthesized joins

2019-05-09 Thread Muhammad Gelbana
I opened a PR to fix CALCITE-35 [1], but I can't set the affected version
(1.19) and component (Babel parser).

Would someone please set those?
Or maybe grant me the necessary privileges to do so on my own?

[1] https://issues.apache.org/jira/browse/CALCITE-35

Thanks,
Gelbana


Re: How to convert custom Class type to Expression Type?

2019-05-07 Thread Muhammad Gelbana
I think it might be easier/cleaner to have the filter as an input to a
project, with the expression processing the filter's output. I don't know
if it's possible to have the operator as an input to an expression.

Thanks,
Gelbana


On Tue, May 7, 2019 at 1:09 PM Maria  wrote:

> Hi, all.
>I have one custom type called 'x.xx.xxx.EsSqlFilter', and I want to
> convert it to Expression type, then pass the instance obj  to a function
> through Expressions.call(), then it can be used in the specified function
> call,I tryed to use 'Expressions.newArrayInit(clazz, constantList(values))'
> to convert it, but it became a new Object with nothing in the function, all
> the attribute values are lost.
> I am newbie with linq4J. and got bogged down in this question.
>Can someone give me an suggestion? Any reply will be much appreciated.
>
>
> Best Wishes.


Re: Breaking changes and internal APIs

2019-05-07 Thread Muhammad Gelbana
Good subject!

I don't know if we have clear boundaries between internal and external
APIs. If we do, committers can clarify within a Jira whether a change
breaks an API.

Thanks,
Gelbana


On Tue, May 7, 2019 at 3:17 PM Stamatis Zampetakis 
wrote:

> Hello,
>
> While doing code-reviews there are times that I observe changes (let's
> assume that they are meaningful and inevitable) to public
> classes/methods/fields that could break clients if they are relying on
> them.
> In some cases, the changes may occur in some internal APIs that it is
> rather unlikely to be used by clients; still the classes are public and
> accessible.
> I was thinking that even then it is worth adding a few words in the release
> note but maybe I am going too far.
>
> I was wondering if there is a common agreement on this topic.
>
> Best,
> Stamatis
>


[jira] [Created] (CALCITE-3051) Babel parser should parse special table expressions

2019-05-07 Thread Muhammad Gelbana (JIRA)
Muhammad Gelbana created CALCITE-3051:
-

 Summary: Babel parser should parse special table expressions
 Key: CALCITE-3051
 URL: https://issues.apache.org/jira/browse/CALCITE-3051
 Project: Calcite
  Issue Type: Improvement
  Components: babel
Affects Versions: 1.19.0
Reporter: Muhammad Gelbana


PostgreSQL query
{code:sql}
SELECT * FROM
(
(
  (S.C c INNER JOIN S.N n ON n.id = c.id)
  INNER JOIN S.A a ON (NOT a.isactive)
) INNER JOIN S.T t ON t.id = a.id
)
{code}





Re: Difficulties with implementing a table for a custom convention

2019-05-06 Thread Muhammad Gelbana
Thank you all for your time, but unfortunately I've been unable to overcome
this challenge so far. Allow me to explain further.

*This is how I'm using calcite*
Planner planner = Frameworks.getPlanner(frameworkConfig); // frameworkConfig is irrelevant I suppose
SqlNode parsed = planner.parse("SELECT Tenant FROM Audit.audit WHERE Tenant != 'castle'");
SqlNode validated = planner.validate(parsed);
RelRoot root = planner.rel(validated);
PreparedStatement prepared = RelRunners.run(root.rel);
ResultSet rs = prepared.executeQuery();

*This is the final physical plan I'm achieving*
EnumerableInterpreter: rowcount = 50.0, cumulative cost = {25.0 rows, 25.0
cpu, 0.0 io}, id = 102
  BindableProject(Tenant=[$2]): rowcount = 50.0, cumulative cost = {160.0
rows, 166.0 cpu, 0.0 io}, id = 100
GelbanaBindableRel: rowcount = 50.0, cumulative cost = {110.0 rows,
116.0 cpu, 0.0 io}, id = 98 *// My converter node*
  GelbanaFilter(condition=[<>($2, 'demo')]): rowcount = 50.0,
cumulative cost = {105.0 rows, 111.0 cpu, 0.0 io}, id = 96
GelbanaQuery(table=[[Audit, audit]]): rowcount = 100.0, cumulative
cost = {100.0 rows, 101.0 cpu, 0.0 io}, id = 3 *// My table scan*

My problem is that the interpreter goes through all the plan's operators
[1] and, for each operator, attempts to wrap it as an instance of
org.apache.calcite.interpreter.Node [2], so that the interpreter can call
the Node's "run" method [3] for each wrapped operator when it starts
executing the plan. If it can't wrap the operator as a Node [4], it checks
whether the operator implements the InterpretableRel interface, to call its
"implement" method. If the operator doesn't implement the InterpretableRel
interface either, an exception is thrown.

My issue is that I don't need my Gelbana operators, including the table
scan this thread is all about, to be wrapped as nodes, nor to implement
the InterpretableRel interface, because my converter (GelbanaBindableRel)
will be responsible for running its inputs, eventually including the table
scan(s).

To fix that, I did the following for my converter
@Override
public void childrenAccept(RelVisitor visitor) {
if (visitor instanceof CoreCompiler) return;
}

This allows me to prevent the interpreter from visiting my converter's
inputs [1] to avoid the trouble mentioned above. But it feels like I'm
hacking my way through Calcite and not using it the way it's designed to be
used. Or is this a correct approach? Could this have any side effects I
should worry about?

[1]
https://github.com/apache/calcite/blob/0d504d20d47542e8d461982512ae0e7a94e4d6cb/core/src/main/java/org/apache/calcite/interpreter/Interpreter.java#L447
[2]
https://github.com/apache/calcite/blob/0d504d20d47542e8d461982512ae0e7a94e4d6cb/core/src/main/java/org/apache/calcite/interpreter/Interpreter.java#L451
[3]
https://github.com/apache/calcite/blob/0d504d20d47542e8d461982512ae0e7a94e4d6cb/core/src/main/java/org/apache/calcite/interpreter/Interpreter.java#L130
[4]
https://github.com/apache/calcite/blob/0d504d20d47542e8d461982512ae0e7a94e4d6cb/core/src/main/java/org/apache/calcite/interpreter/Interpreter.java#L452

Thanks,
Gelbana
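The childrenAccept override works because the interpreter's compiler walks the plan as a RelVisitor, and a node that declines to forward the visit effectively hides its subtree. A minimal stand-alone illustration of that pattern, using made-up Node/Visitor classes rather than Calcite's actual types:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal stand-ins for Calcite's RelVisitor/RelNode, only to show why an
// early return from childrenAccept hides a subtree from the compiler.
abstract class Visitor {
    abstract void visit(Node node);
}

class Node {
    final String name;
    final List<Node> inputs = new ArrayList<>();
    Node(String name) { this.name = name; }
    void childrenAccept(Visitor v) {
        for (Node input : inputs) {
            v.visit(input);
        }
    }
}

// Analogous to the converter above: it refuses to forward one specific
// visitor type to its inputs, so that visitor never sees the subtree.
class ConverterNode extends Node {
    ConverterNode(String name) { super(name); }
    @Override void childrenAccept(Visitor v) {
        if (v instanceof CompilerVisitor) {
            return; // the "compiler" never reaches this node's inputs
        }
        super.childrenAccept(v);
    }
}

// Plays the role of the interpreter's CoreCompiler: records every node it
// manages to reach.
class CompilerVisitor extends Visitor {
    final List<String> compiled = new ArrayList<>();
    @Override void visit(Node node) {
        compiled.add(node.name);
        node.childrenAccept(this);
    }
}
```

In this toy version, visiting the converter records only the converter itself; the scan below it is never compiled, which mirrors the behavior described above.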


On Sun, May 5, 2019 at 11:43 PM Stamatis Zampetakis 
wrote:

> Hi Muhammad,
>
> I'm not sure why you need to implement InterpretableRel interface. I'm
> probably missing some details.
>
> I suppose you are using Calcite through the JDBC interface thus I guess you
> are relying on CalcitePrepareImpl.
> If that's the case then using TranslatableTable interface seems like a good
> idea.
> The latter means that you are using the default VolcanoPlanner that expects
> the final result to be in Bindable or Enumerable convention [2].
> You may define various operators and tables to be in CustomConvention but
> for sure you need a Converter
> and respective rules to go from CustomConvention to EnumerableConvention
> (similar to MongoToEnumerableConverter and MongoToEnumerableConverterRule).
>
> The final plan should look be similar to the one below:
>
> CustomToEnumerableConverter
>   CustomProject(product_id=[CAST(ITEM($0, 'product_id')):DOUBLE])
> CustomTableScan(table=[[_foodmart, sales_fact_1998]])"
>
> Best,
> Stamatis
>
> [2]
>
> https://github.com/apache/calcite/blob/0d504d20d47542e8d461982512ae0e7a94e4d6cb/core/src/main/java/org/apache/calcite/prepare/CalcitePrepareImpl.java#L719
>
> On Sun, May 5, 2019 at 4:13 AM Yuzhao Chen  wrote:
>
> > You can set up the target traits to include the convention you want when
> > invoke the Program#run method [1], with the converters as rules of the
> > program.
> >
> > [1]
> >
> https://github.com/apache/calcite/blob/9ece70f5dcdb00dbc6712496c51f52c05178d4aa/core/src/main/java/org/apache/calcite/tools/Program.java#L38
> >
> > Best,
> > Danny Chan
> > 在 2019年5月4日 +0800 PM6:48,Muhammad Gelbana ,写道:

Re: Rewriting queries with Calcite

2019-05-06 Thread Muhammad Gelbana
The method I've always followed so far to provide my rules is to override
the AbstractRelNode.register method of the translated table node (your
discovered tables would implement the TranslatableTable interface and
implement the toRel method; check the Druid adapter for an example).

Thanks,
Gelbana


On Mon, May 6, 2019 at 12:33 PM Ivan Grgurina  wrote:

> We found out that if SQL query is appropriately prepared (manually),
> Calcite can handle mapping it to multiple schemas. That is when using
> Calcite instead of JDBC directly in Java application.
>
> What ended up being the biggest mystery at the moment is how to, for
> example, integrate our custom ConverterRule so that application uses it
> when querying database. The idea being that we can force original query
> into the one where Calcite can handle mapping it to multiple databases.
>
> I'll give an example. Lets say you have 1 database at the start, with the
> appropriate Calcite schema. Lets say that database gets split up *vertically
> *by some external rules and you have their Calcite schemas.
> Then you create Java app that wants to target original 1 database and
> Calcite (some custom rules) convert it so it goes to multiple databases.
>
> So, if you have a query that goes something like this in original SQL
> format:
>
> "SELECT * FROM db.medinfo"
>
> , you would convert it into the query that goes something like this:
>
> "SELECT db1.medinfo1.id, db1.medinfo1.firstname, db1.medinfo1.lastname,
> db2.medinfo2.age, db3.medinfo3.illness
> FROM db1.medinfo1, db2.medinfo2, db3.medinfo3
> WHERE db1.medinfo1.id = db2.medinfo2.id AND db1.medinfo1.id =
> db3.medinfo3.id".
>
> My first question is how does Calcite recognize my custom rules? Is it by
> reflexion because I extended some class and then it automatically applies
> it, or is there some hook publish-subscribe system in place for extending
> Calcite with a library of custom rules?
> My second question is how can I make the above-mentioned application use
> my rules? Ergo, how do I connect Application-Library-Calcite?
>
> *Ivan Grgurina*
>
> Research Assistant (ZEMRIS)
> --
>
> <https://www.linkedin.com/in/igrgurina/>
> <https://www.fer.unizg.hr/ivan.grgurina>
>
>
> --
> *From:* Muhammad Gelbana 
> *Sent:* Saturday, May 4, 2019 3:56 PM
> *To:* dev@calcite.apache.org
> *Subject:* Re: Rewriting queries with Calcite
>
> What do you mean by "rewrite queries to multiple data sources" ?
>
> Assuming you mean to run specific portions of the query plan against
> specific datasources, I beleive this can be done by changing the convention
> of the nodes for those protions. Each convention will map to a specific
> datasource. You can do that by writing converter rules.
>
> Assuming you mean to simply rewrite an SQL query, I beleive this can be
> done by visiting the root SQL node after parsing the query and rewrite the
> visitied nodes the way you wish. This actually can be done by other
> libraries than calcite, so I don't think that's what you're looking for.
>
> Assuming you mean to optimize the query plan, you'll need to write
> optimization rules (mentioned in the website's docs) to do that.
>
> As Stamatis said, a more specific question might get you a more specific
> answer.
>
> Thanks,
> Gelbana
>
>
> On Fri, May 3, 2019 at 3:22 PM Stamatis Zampetakis 
> wrote:
>
> > Hi Ivan,
> >
> > It sounds like an interesting project, and I think Calcite will
> definitely
> > help you get there.
> >
> > However your questions are quite broad so it is difficult to provide a
> > concrete answer.
> > The best place to get started is the official website [1] where there
> are a
> > lot of examples and use-cases for Calcite.
> > Other than that there have been various discussions in the dev list such
> as
> > [2] where people have shared many useful resources.
> > Have a look and don't hesitate to come back to us.
> >
> > Good luck with your thesis!
> > Stamatis
> >
> > [1] https://calcite.apache.org/
> > [2]
> >
> >
> https://lists.apache.org/thread.html/3b32557adfc19e79e04a2d2e5ffcfa742c21e0fcfa3bd431025020ed@%3Cdev.calcite.apache.org%3E
> >
> > On Thu, May 2, 2019 at 11:30 AM Ivan Grgurina 
> > wrote:
> >
> > > Hi, I'm working with Apache Calcite for my master thesis.
> > >
> > > The idea is to rewrite queries to multiple data sources, and Calcite
> is a
> > > serious candidate to be the tool for that job.
> > >
> > > At the moment, I'm trying to use Calcite as a library to create ru

Re: [ANNOUNCE] Stamatis Zampetakis joins Calcite PMC

2019-05-04 Thread Muhammad Gelbana
As Julian said, you totally earned this. I've personally enjoyed your
helpful guidance and answers to my questions. Congratulations, thank you
and keep up the good work.

Thanks,
Gelbana


On Tue, Apr 30, 2019 at 6:09 PM Julian Hyde  wrote:

> It’s unusual for someone to go from committer to PMC member in only a few
> months, but you’ve totally earned this. Congratulations and welcome,
> Stamatis.
>
> > On Apr 30, 2019, at 3:26 AM, Stamatis Zampetakis 
> wrote:
> >
> > Thank you all for your kind words and for entrusting me this new role!
> >
> > It's a great honor to be part of the PMC but even more being part of a
> > project with such a lively community.
> > I really appreciate the fact that we have many people so eager to help
> and
> > from whom I also learn new things every day.
> > It motivates me to do my best and will continue to do so.
> >
> > Best,
> > Stamatis
> >
> > On Tue, Apr 30, 2019 at 11:32 AM Zoltan Haindrich  wrote:
> >
> >> Congratulations Stamatis!
> >>
> >> On 4/28/19 4:40 PM, Andrei Sereda wrote:
> >>> Congrats, Stamatis.
> >>>
> >>> On Sat, Apr 27, 2019 at 11:08 PM Michael Mior 
> wrote:
> >>>
>  Congratulations Stamatis and thanks for all you've done!
>  --
>  Michael Mior
>  mm...@apache.org
> 
>  Le ven. 26 avr. 2019 à 22:44, Francis Chuang
>   a écrit :
> >
> > I'm pleased to announce that Stamatis has accepted an invitation to
> > join the Calcite PMC. Stamatis has been a consistent and helpful
> > figure in the Calcite community for which we are very grateful. We
> > look forward to the continued contributions and support.
> >
> > Please join me in congratulating Stamatis!
> >
> > - Francis (on behalf of the Calcite PMC)
> 
> >>>
> >>
>
>


Re: Rewriting queries with Calcite

2019-05-04 Thread Muhammad Gelbana
What do you mean by "rewrite queries to multiple data sources"?

Assuming you mean to run specific portions of the query plan against
specific datasources, I believe this can be done by changing the convention
of the nodes for those portions. Each convention will map to a specific
datasource. You can do that by writing converter rules.

Assuming you mean to simply rewrite an SQL query, I believe this can be
done by visiting the root SQL node after parsing the query and rewriting
the visited nodes the way you wish. This can actually be done by libraries
other than Calcite, so I don't think that's what you're looking for.

Assuming you mean to optimize the query plan, you'll need to write
optimization rules (mentioned in the website's docs) to do that.

As Stamatis said, a more specific question might get you a more specific
answer.

Thanks,
Gelbana


On Fri, May 3, 2019 at 3:22 PM Stamatis Zampetakis 
wrote:

> Hi Ivan,
>
> It sounds like an interesting project, and I think Calcite will definitely
> help you get there.
>
> However your questions are quite broad so it is difficult to provide a
> concrete answer.
> The best place to get started is the official website [1] where there are a
> lot of examples and use-cases for Calcite.
> Other than that there have been various discussions in the dev list such as
> [2] where people have shared many useful resources.
> Have a look and don't hesitate to come back to us.
>
> Good luck with your thesis!
> Stamatis
>
> [1] https://calcite.apache.org/
> [2]
>
> https://lists.apache.org/thread.html/3b32557adfc19e79e04a2d2e5ffcfa742c21e0fcfa3bd431025020ed@%3Cdev.calcite.apache.org%3E
>
> On Thu, May 2, 2019 at 11:30 AM Ivan Grgurina 
> wrote:
>
> > Hi, I'm working with Apache Calcite for my master thesis.
> >
> > The idea is to rewrite queries to multiple data sources, and Calcite is a
> > serious candidate to be the tool for that job.
> >
> > At the moment, I'm trying to use Calcite as a library to create rules
> that
> > will rewrite queries during planner execution. I'm basing my current
> > solution on https://github.com/tzolov/calcite-sql-rewriter code.
> >
> > My question would be if that's the best practice? What's the best way of
> > creating a library based on Calcite that will be used in the end-user
> > applications? What is the dev idea behind connecting that library to
> > end-user application, as well as connecting it to Calcite?
> >
> > I have some idea of how that should work (based on above mentioned tool),
> > but I would love to hear how do it properly from the devs.
> >
> > Thanks
> >
> >
> > *Ivan Grgurina*
> >
> > Research Assistant (ZEMRIS)
> > --
> >
> > 
> > 
> >
> >
> >
>


Difficulties with implementing a table for a custom convention

2019-05-04 Thread Muhammad Gelbana
I implemented a new convention but I'm facing difficulties with
implementing tables for that convention.

Since I need to apply my convention's rules and converters, I assume I must
implement TranslatableTable so I can override the RelNode produced by the
toRel method and provide the rules I need through the
RelNode.register(RelOptPlanner planner) callback (is there another way?).

At the same time, I need for my table's RelNode to implement my
convention's interface.

After implementing TranslatableTable and my convention's interface, I
discovered that my table's RelNode has to either be an interpreter node or
implement the InterpretableRel interface. But this led to what looks to me
like my table scan being executed twice.

I tried many things, such as having my table extend TableScan,
AbstractEnumerable2, QueryableTable and other interfaces/classes that
either didn't give me what I want or failed with strange exceptions.

What I need is to implement my table scan following the same convention I'm
writing. So what is the correct way to do this?

The MongoDB adapter's way is to implement the TranslatableTable and the
QueryableTable interfaces, but that confuses me. It's already implementing
its own convention interface, so why would it need to implement another one
for its table scans? I thought that while executing the final plan, once it
reaches the first node whose convention Calcite recognizes (Enumerable or
Bindable; Enumerable in the MongoDB adapter's case), the interpreter would
execute that node, which would internally execute all its input nodes,
including the table scan. Correct?

Thanks,
Gelbana


Re: Delegate queries to source system

2019-04-17 Thread Muhammad Gelbana
To push down predicates and projections (selected columns), check out the
CSV example project; it has an example of how to push down both.
To push down more than that, check out the Druid adapter.

I'm not sure about what I'm about to say, but I believe it's possible to
push down the whole query if you convert the plan nodes to the JDBC
convention and configure Calcite to use the PostgreSQL dialect.
Please tell me if this, or anything near what I said, worked for you and
how you did it. I remember seeing similar behavior in Apache Drill, which
uses Apache Calcite.

Thanks,
Gelbana


On Tue, Apr 16, 2019 at 3:24 PM Mark Pasterkamp <
markpasterkamp1...@gmail.com> wrote:

> Dear all,
>
> I have setup calcite to use postgres as a datasource. Right now I am
> running into an out of memory exception while executing the following
> query: "select * from table order by id limit 10". Checking the log of
> postgres it seems like calcite is wanting to first load all data into the
> memory (since it is executing "select * from table") and then sort it to
> only retain 10 elements.
>
> My machine can't load this table into main memory so I was wondering
> whether it is possible to delegate this query to postgres. I have found on
> the wiki [1] that it is possible to push down operations. I assume that
> this can do the trick but I am not really sure how to implement this. Am I
> supposed to create multiple FilterableTables for when these queries are
> executed or is it better to create a new plan rule which matches on a sort
> - projection - tablescan? And in the latter case, how can I make sure that
> it is not Calcite handling these operations? Would transforming them to
> their equivalent JdbcRules [2] do the trick? I am slightly confused by the
> meaning of a "jdbc calling convention".
>
> And Finally, if this does indeed do what I hope it does (push them down to
> postgres in my case), how can I make sure that the planner uses this rule
> to rewrite these queries to push down most of these memory expensive
> queries to postgres?
>
>
> Mark
>
>
> [1]
>
> https://calcite.apache.org/docs/adapter.html#pushing-operations-down-to-your-table
> [2]
>
> https://github.com/apache/calcite/blob/master/core/src/main/java/org/apache/calcite/adapter/jdbc/JdbcRules.java
>


Re: Help in understanding entities and abstractions in Calcite

2019-04-17 Thread Muhammad Gelbana
In my experience, most of the provided information didn't make sense at the
beginning. But after a lot of questions, things cleared up a little bit.

I suggest you start with the website and the Javadocs (especially the
interfaces, they are loaded with information). But again, in my experience,
I was still very puzzled by the technical terms used.

What really helped me out in writing code was reading some of the code for
the CSV example project and then the Druid adapter project. The CSV example
project is a very good start because you won't get distracted by any
special CSV code, as it's just a CSV file, while the Druid adapter will
expose code to communicate with the Druid datasource and rules to collect
relational operators (SQL operators such as Joins, Aggregates/Sorts,
TableScans "i.e. FROM") supported by the Druid datasource.

A good tool to search for information about Calcite, is this website:
http://search-hadoop.com/?project=Calcite=mail+_hash_+dev
You can use it to search for information in Calcite's code, jira, website,
mailing list and javadocs at the same time.

But even after doing all that, I still needed to ask questions. So don't
hesitate to do so, when you're ready.

Thanks,
Gelbana


On Wed, Apr 17, 2019 at 8:25 AM Walaa Eldin Moustafa 
wrote:

> You might also read this paper: https://arxiv.org/pdf/1802.10233.pdf
> There is a SIGMOD version of it as well.
>
> Thanks,
> Walaa.
>
> On Tue, Apr 16, 2019 at 9:22 AM Chunwei Lei 
> wrote:
> >
> > Hi, Naveen
> >
> > You can find some helpful docs in
> > https://calcite.apache.org/docs/algebra.html. Wish it can help you.
> >
> >
> > Best,
> > Chunwei
> >
> > Naveen Kumar  wrote on Tue, Apr 16, 2019 at 8:05 PM:
> > >
> > > Hi,
> > >
> > > I am a engineer at Flipkart, we are building SQL over our stream
> processing
> > > platform using Calcite.
> > > I am finding it hard to develop intuition for abstractions and
> entities in
> > > Apache Calcite, is there a book or documentation that walks through
> them?
> > >
> > > I would love if I could chat with one of you and ask pointed questions.
> > >
> > > Thanks,
> > > Naveen
>


[CALCITE-2844] Only two test cases are failing after supporting table functions

2019-04-01 Thread Muhammad Gelbana
I made some progress with that issue and it would be great if someone could
lend me a hand please.

I skipped this test[1] and that one[2], and then all tests pass. But
I can't figure out why these two are failing.

[1]
https://github.com/MGelbana/calcite/blob/CALCITE-2844/babel/src/test/java/org/apache/calcite/test/BabelParserTest.java#L50
[2]
https://github.com/MGelbana/calcite/blob/CALCITE-2844/babel/src/test/java/org/apache/calcite/test/BabelParserTest.java#L53

Thanks,
Gelbana


Re: Calcite doesn't work with LOOKAHEAD(3)

2019-03-31 Thread Muhammad Gelbana
I think upgrading the JavaCC Maven plugin could break things if:
1. The grammar changed, and I highly doubt that, or,
2. We have bugs in our grammar that were fixed in the upgraded JavaCC
library.

Thanks,
Gelbana


On Sun, Mar 31, 2019 at 11:41 PM Muhammad Gelbana 
wrote:

> Thanks for the suggestion Stamatis, but that didn't work for me. It caused
> compilation errors in SqlParserImpl and I couldn't see a way to resolve
> them.
>
> Thanks,
> Gelbana
>
>
> On Sun, Mar 31, 2019 at 6:01 PM Stamatis Zampetakis 
> wrote:
>
>> I am not sure why the update breaks so many tests, nor whether there is a
>> problem with the LOOKAHEAD, but regarding CALCITE-2844 I would be inclined
>> to modify lines around [3] to make it work.
>>
>> In particular, I would try to make  with parentheses optional (just
>> in case you didn't try this so far).
>>
>> Best,
>> Stamatis
>>
>> [3]
>>
>> https://github.com/apache/calcite/blob/81fa5314e94e86b6cf8df244b03f9d57c884f54d/core/src/main/codegen/templates/Parser.jj#L1884
>>
>> On Sun, Mar 31, 2019 at 5:16 PM, Muhammad Gelbana <
>> m.gelb...@gmail.com> wrote:
>>
>> > I was trying to support selecting from table functions[1]. I tried
>> > extending TableRef2[2] (Production ?) to support table functions by
>> adding
>> >
>> > > LOOKAHEAD(3)
>> > >
>> > tableRef = TableFunctionCall(getPos()))
>> > >
>> > |
>> > >
>> > before
>> >
>> > > LOOKAHEAD(2)
>> > > tableRef = CompoundIdentifier()
>> > >
>> >
>> > but it broke other tests. I tried putting my modification at the end of
>> the
>> > choices while increasing the CompoundIdentifier() lookahead to 3 to
>> avoid
>> > that choice when it faces the left bracket, but it didn't work too. I
>> tried
>> > setting aggressively high lookahead values such as 50, and it didn't work
>> > too. I won't be surprised if I'm doing anything wrong as I'm not
>> accustomed
>> > to working with grammar files anyway.
>> >
>> > The only thing I'm considering now is to create a new production (I'm
>> not
>> > sure if I'm using this word correctly) such as TableRef3 and have that
>> > going down the common path between TableFunctionCall() and
>> > CompoundIdentifier() because TableFunctionCall() eventually attempts to
>> > consume a CompoundIdentifier(). This way I won't have to bother about
>> > tuning lookaheads I suppose.
>> >
>> > I can create a branch of what I've accomplished so far if you wish.
>> >
>> > [1] https://issues.apache.org/jira/browse/CALCITE-2844
>> > [2]
>> >
>> >
>> https://github.com/apache/calcite/blob/master/core/src/main/codegen/templates/Parser.jj#L1811
>> >
>> > Thanks,
>> > Gelbana
>> >
>> >
>> > On Sun, Mar 31, 2019 at 4:15 PM Hongze Zhang  wrote:
>> >
>> > > Just out of my curiosity, could you please share your case about
> > > "LOOKAHEAD does not work as expected"? Does changing to JavaCC 5.0
>> > > actually fixes the problem?
>> > >
>> > > Thanks,
>> > > Hongze
>> > >
>> > >
>> > > > On Mar 31, 2019, at 19:17, Muhammad Gelbana 
>> > wrote:
>> > > >
>> > > > I'm facing trouble with supporting selecting from table function for
>> > > Babel
>> > > parser and I believe that LOOKAHEAD isn't working as expected too.
>> > > > I thought it might actually be a bug so I checked out the master
>> branch
>> > > and
>> > > > updated the JavaCC maven plugin version to 2.6 (it's currently 2.4),
>> > but
>> > > > that failed *142* test cases and errored *9*.
>> > > >
>> > > > The plugin v2.4 imports the JavaCC library v4
>> > > > The plugin v2.6 imports the JavaCC library v5
>> > > >
>> > > > Unfortunately the release notes for the JavaCC library are broken
>> and
>> > I'm
>> > > > not aware of another source for the release notes for that project.
>> > > > Should I open a Jira to upgrade that plugin version ?
>> > > >
>> > > > Thanks,
>> > > > Gelbana
>> > > >
>> > > >
>> > > > On Thu, Mar 28, 2019 at 4:18 AM Rui Li 
>> wrote:
>> > > >
>> > > >> Thank

Re: Calcite doesn't work with LOOKAHEAD(3)

2019-03-31 Thread Muhammad Gelbana
Thanks for the suggestion Stamatis, but that didn't work for me. It caused
compilation errors in SqlParserImpl and I couldn't see a way to resolve
them.

Thanks,
Gelbana


On Sun, Mar 31, 2019 at 6:01 PM Stamatis Zampetakis 
wrote:

> I am not sure why the update breaks so many tests, nor whether there is a
> problem with the LOOKAHEAD, but regarding CALCITE-2844 I would be inclined
> to modify lines around [3] to make it work.
>
> In particular, I would try to make  with parentheses optional (just
> in case you didn't try this so far).
>
> Best,
> Stamatis
>
> [3]
>
> https://github.com/apache/calcite/blob/81fa5314e94e86b6cf8df244b03f9d57c884f54d/core/src/main/codegen/templates/Parser.jj#L1884
>
> On Sun, Mar 31, 2019 at 5:16 PM, Muhammad Gelbana <
> m.gelb...@gmail.com> wrote:
>
> > I was trying to support selecting from table functions[1]. I tried
> > extending TableRef2[2] (Production ?) to support table functions by
> adding
> >
> > > LOOKAHEAD(3)
> > >
> > tableRef = TableFunctionCall(getPos()))
> > >
> > |
> > >
> > before
> >
> > > LOOKAHEAD(2)
> > > tableRef = CompoundIdentifier()
> > >
> >
> > but it broke other tests. I tried putting my modification at the end of
> the
> > choices while increasing the CompoundIdentifier() lookahead to 3 to avoid
> > that choice when it faces the left bracket, but it didn't work too. I
> tried
> > setting aggressively high lookahead values such as 50, and it didn't work
> > too. I won't be surprised if I'm doing anything wrong as I'm not
> accustomed
> > to working with grammar files anyway.
> >
> > The only thing I'm considering now is to create a new production (I'm not
> > sure if I'm using this word correctly) such as TableRef3 and have that
> > going down the common path between TableFunctionCall() and
> > CompoundIdentifier() because TableFunctionCall() eventually attempts to
> > consume a CompoundIdentifier(). This way I won't have to bother about
> > tuning lookaheads I suppose.
> >
> > I can create a branch of what I've accomplished so far if you wish.
> >
> > [1] https://issues.apache.org/jira/browse/CALCITE-2844
> > [2]
> >
> >
> https://github.com/apache/calcite/blob/master/core/src/main/codegen/templates/Parser.jj#L1811
> >
> > Thanks,
> > Gelbana
> >
> >
> > On Sun, Mar 31, 2019 at 4:15 PM Hongze Zhang  wrote:
> >
> > > Just out of my curiosity, could you please share your case about
> > > "LOOKAHEAD does not work as expected"? Does changing to JavaCC 5.0
> > > actually fixes the problem?
> > >
> > > Thanks,
> > > Hongze
> > >
> > >
> > > > On Mar 31, 2019, at 19:17, Muhammad Gelbana 
> > wrote:
> > > >
> > > > I'm facing trouble with supporting selecting from table function for
> > > Babel
> > > > parser and I believe that LOOKAHEAD isn't working as expected too.
> > > > I thought it might actually be a bug so I checked out the master
> branch
> > > and
> > > > updated the JavaCC maven plugin version to 2.6 (it's currently 2.4),
> > but
> > > > that failed *142* test cases and errored *9*.
> > > >
> > > > The plugin v2.4 imports the JavaCC library v4
> > > > The plugin v2.6 imports the JavaCC library v5
> > > >
> > > > Unfortunately the release notes for the JavaCC library are broken and
> > I'm
> > > > not aware of another source for the release notes for that project.
> > > > Should I open a Jira to upgrade that plugin version ?
> > > >
> > > > Thanks,
> > > > Gelbana
> > > >
> > > >
> > > > On Thu, Mar 28, 2019 at 4:18 AM Rui Li 
> wrote:
> > > >
> > > >> Thanks Hongze, that's good to know.
> > > >>
> > > >> On Thu, Mar 28, 2019 at 8:43 AM Hongze Zhang 
> > wrote:
> > > >>
> > > >>>> Besides, if I enable forceLaCheck, JavaCC suggests to use a
> > lookahead
> > > >> of
> > > >>> 3
> > > >>>> or more. I guess we'd better get rid of these warnings if we want
> to
> > > >>> stick
> > > >>>> to lookahead(2).
> > > >>>
> > > >>> That makes sense. Actually we had a discussion[1] on moving to
> > > >>> "LOOKAHEAD=1", and seems we are close to finish it. By doing t

Re: Calcite doesn't work with LOOKAHEAD(3)

2019-03-31 Thread Muhammad Gelbana
I was trying to support selecting from table functions[1]. I tried
extending TableRef2[2] (Production ?) to support table functions by adding

> LOOKAHEAD(3)
>
tableRef = TableFunctionCall(getPos()))
>
|
>
before

> LOOKAHEAD(2)
> tableRef = CompoundIdentifier()
>

but it broke other tests. I tried putting my modification at the end of the
choices while increasing the CompoundIdentifier() lookahead to 3 to avoid
that choice when it faces the left bracket, but that didn't work either. I
tried setting aggressively high lookahead values such as 50, and that
didn't work either. I won't be surprised if I'm doing something wrong, as
I'm not accustomed to working with grammar files anyway.
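An aside for readers tuning numeric lookaheads: JavaCC also supports
syntactic lookahead, which tries a whole expansion ahead instead of
counting a fixed number of tokens. A sketch of that shape for the choice
discussed here — the production names follow the quoted Parser.jj, but
this is an illustrative fragment, not the actual file contents:

```
(
    // Syntactic lookahead: an identifier followed by "(" means a
    // table-function call, so no fixed token count is needed.
    LOOKAHEAD(CompoundIdentifier() <LPAREN>)
    tableRef = TableFunctionCall(getPos())
|
    tableRef = CompoundIdentifier()
)
```

The trade-off is that syntactic lookahead can be slower than a numeric
one, since the parser speculatively matches the whole expansion before
committing to the branch.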

The only thing I'm considering now is to create a new production (I'm not
sure if I'm using this word correctly), such as TableRef3, and have that
go down the common path between TableFunctionCall() and
CompoundIdentifier(), because TableFunctionCall() eventually attempts to
consume a CompoundIdentifier(). This way I won't have to bother about
tuning lookaheads, I suppose.

I can create a branch of what I've accomplished so far if you wish.

[1] https://issues.apache.org/jira/browse/CALCITE-2844
[2]
https://github.com/apache/calcite/blob/master/core/src/main/codegen/templates/Parser.jj#L1811

Thanks,
Gelbana


On Sun, Mar 31, 2019 at 4:15 PM Hongze Zhang  wrote:

> Just out of my curiosity, could you please share your case about
> "LOOKAHEAD does not work as expected"? Does changing to JavaCC 5.0
> actually fixes the problem?
>
> Thanks,
> Hongze
>
>
> > On Mar 31, 2019, at 19:17, Muhammad Gelbana  wrote:
> >
> > I'm facing trouble with supporting selecting from table function for
> Babel
> > parser and I believe that LOOKAHEAD isn't working as expected too.
> > I thought it might actually be a bug so I checked out the master branch
> and
> > updated the JavaCC maven plugin version to 2.6 (it's currently 2.4), but
> > that failed *142* test cases and errored *9*.
> >
> > The plugin v2.4 imports the JavaCC library v4
> > The plugin v2.6 imports the JavaCC library v5
> >
> > Unfortunately the release notes for the JavaCC library are broken and I'm
> > not aware of another source for the release notes for that project.
> > Should I open a Jira to upgrade that plugin version ?
> >
> > Thanks,
> > Gelbana
> >
> >
> > On Thu, Mar 28, 2019 at 4:18 AM Rui Li  wrote:
> >
> >> Thanks Hongze, that's good to know.
> >>
> >> On Thu, Mar 28, 2019 at 8:43 AM Hongze Zhang  wrote:
> >>
> >>>> Besides, if I enable forceLaCheck, JavaCC suggests to use a lookahead
> >> of
> >>> 3
> >>>> or more. I guess we'd better get rid of these warnings if we want to
> >>> stick
> >>>> to lookahead(2).
> >>>
> >>> That makes sense. Actually we had a discussion[1] on moving to
> >>> "LOOKAHEAD=1", and seems we are close to finish it. By doing this we
> have
> >>> extra benefits that we don't need to turn forceLaCheck on and JavaCC
> >> should
> >>> give suggestions during maven build.
> >>>
> >>> Hongze
> >>>
> >>>
> >>> [1] https://issues.apache.org/jira/browse/CALCITE-2847
> >>>
> >>>> On Mar 27, 2019, at 10:40, Rui Li  wrote:
> >>>>
> >>>> Thanks Hongze for looking into the issue! Are you suggesting this is
> >> more
> >>>> likely to be a JavaCC bug?
> >>>> I filed a ticket anyway in case we want to further track it:
> >>>> https://issues.apache.org/jira/browse/CALCITE-2957
> >>>> Besides, if I enable forceLaCheck, JavaCC suggests to use a lookahead
> >> of
> >>> 3
> >>>> or more. I guess we'd better get rid of these warnings if we want to
> >>> stick
> >>>> to lookahead(2).
> >>>>
> >>>> On Wed, Mar 27, 2019 at 8:54 AM Hongze Zhang 
> >> wrote:
> >>>>
> >>>>> Thanks, Yuzhao.
> >>>>>
> >>>>> Since the more generic problem is that the production "E()"[1] causes
> >>> the
> >>>>> parent production's looking ahead returns too early, I tried to find
> a
> >>> bad
> >>>>> case of the same reason under current default setting LOOKAHEAD=2 but
> >> it
> >>>>> seems that under this number we didn't have a chance to meet the
> issue
> >>> yet.
> >>>>>
> >>>>> So after that I s

Re: Calcite doesn't work with LOOKAHEAD(3)

2019-03-31 Thread Muhammad Gelbana
I'm facing trouble with supporting selecting from table functions for the
Babel parser, and I believe that LOOKAHEAD isn't working as expected either.
I thought it might actually be a bug so I checked out the master branch and
updated the JavaCC maven plugin version to 2.6 (it's currently 2.4), but
that failed *142* test cases and errored *9*.

The plugin v2.4 imports the JavaCC library v4
The plugin v2.6 imports the JavaCC library v5

Unfortunately the release notes for the JavaCC library are broken and I'm
not aware of another source for the release notes for that project.
Should I open a Jira to upgrade that plugin version ?

Thanks,
Gelbana


On Thu, Mar 28, 2019 at 4:18 AM Rui Li  wrote:

> Thanks Hongze, that's good to know.
>
> On Thu, Mar 28, 2019 at 8:43 AM Hongze Zhang  wrote:
>
> > > Besides, if I enable forceLaCheck, JavaCC suggests to use a lookahead
> of
> > 3
> > > or more. I guess we'd better get rid of these warnings if we want to
> > stick
> > > to lookahead(2).
> >
> > That makes sense. Actually we had a discussion[1] on moving to
> > "LOOKAHEAD=1", and seems we are close to finish it. By doing this we have
> > extra benefits that we don't need to turn forceLaCheck on and JavaCC
> should
> > give suggestions during maven build.
> >
> > Hongze
> >
> >
> > [1] https://issues.apache.org/jira/browse/CALCITE-2847
> >
> > > On Mar 27, 2019, at 10:40, Rui Li  wrote:
> > >
> > > Thanks Hongze for looking into the issue! Are you suggesting this is
> more
> > > likely to be a JavaCC bug?
> > > I filed a ticket anyway in case we want to further track it:
> > > https://issues.apache.org/jira/browse/CALCITE-2957
> > > Besides, if I enable forceLaCheck, JavaCC suggests to use a lookahead
> of
> > 3
> > > or more. I guess we'd better get rid of these warnings if we want to
> > stick
> > > to lookahead(2).
> > >
> > > On Wed, Mar 27, 2019 at 8:54 AM Hongze Zhang 
> wrote:
> > >
> > >> Thanks, Yuzhao.
> > >>
> > >> Since the more generic problem is that the production "E()"[1] causes
> > the
> > >> parent production's looking ahead returns too early, I tried to find a
> > bad
> > >> case of the same reason under current default setting LOOKAHEAD=2 but
> it
> > >> seems that under this number we didn't have a chance to meet the issue
> > yet.
> > >>
> > >> So after that I suggest to not to treat this as a Calcite's issue
> > >> currently.
> > >>
> > >> Best,
> > >> Hongze
> > >>
> > >> [1]
> > >>
> >
> https://github.com/apache/calcite/blob/11c067f9992d9c8bc29e2326dd8b299ad1e9dbdc/core/src/main/codegen/templates/Parser.jj#L335
> > >>
> > >>> On Mar 26, 2019, at 20:42, Yuzhao Chen  wrote:
> > >>>
> > >>> Maybe we should fire a jira if it is a bug.
> > >>>
> > >>> Best,
> > >>> Danny Chan
> > >>> On Mar 26, 2019 at 8:33 PM +0800, Hongze Zhang wrote:
> >  Oops, correcting a typo:
> > 
> >  "... after uncommenting a line ..." -> "... after commenting a line
> >  ...".
> > 
> >  Best,
> >  Hongze
> > 
> >  -- Original Message --
> >  From: "Hongze Zhang" 
> >  To: dev@calcite.apache.org
> >  Sent: 2019/3/26 19:28:08
> >  Subject: Re: Calcite doesn't work with LOOKAHEAD(3)
> > 
> > > Firstly, thank you very much for sharing the case, Rui!
> > >
> > > I have run a test with the SQL you provided and also run into the
> > same
> > >> exception (under a global LOOKAHEAD 3). After debugging the generated
> > >> parser code, I think the problem is probably in the generated
> LOOKAHEAD
> > >> method SqlParserImpl#jj_3R_42():
> > >
> > >
> > >> final private boolean jj_3R_42() {
> > >> if (!jj_rescan) trace_call("SqlSelect(LOOKING AHEAD...)");
> > >> if (jj_scan_token(SELECT)) { if (!jj_rescan)
> > >> trace_return("SqlSelect(LOOKAHEAD FAILED)"); return true; }
> > >> if (jj_3R_190()) { if (!jj_rescan)
> trace_return("SqlSelect(LOOKAHEAD
> > >> FAILED)"); return true; }
> > >> { if (!jj_rescan) trace_return("SqlSelect(LOOKAHEAD SUCCEEDED)");
> > >> return false; }
> > >> }
> > >
> > > The LOOKAHEAD method checks only a single token . This is
> > >> definitely not enough since we have already set the number to 3.
> > >
> > > Unfortunately I didn't find a root cause so far, but after
> > >> uncommenting a line[1] in production "SqlSelect()" then everything
> goes
> > >> back to normal. I'm inclined to believe JavaCC has some unexpected
> > behavior
> > >> when dealing with LOOKAHEAD on a production with the shape like
> > >> "SqlSelectKeywords()"[2].
> > >
> > > Please feel free to log a JIRA ticket with which we can track
> further
> > >> information of the issue.
> > >
> > > Best,
> > > Hongze
> > >
> > >
> > > [1]
> > >>
> >
> https://github.com/apache/calcite/blob/1b430721c0d9e22b2252ffcd893b42959cb7966c/core/src/main/codegen/templates/Parser.jj#L1030
> > > [2]
> > >>
> >
> 

How to decide the fix version ?

2019-03-20 Thread Muhammad Gelbana
When a new Jira is created, should the fix version be set to the ongoing
release? The next one? Or left blank until it's decided? And how/when
will it be decided?

Thanks,
Gelbana


Re: [ANNOUNCE] New committer: Stamatis Zampetakis

2019-03-14 Thread Muhammad Gelbana
Congratulations Stamatis :)

Thanks for frequently answering my questions and discussing my raised
topics.

On Fri, Mar 15, 2019, 2:45 AM Michael Mior  wrote:

>  No problem. I'll leave that to you then once the release is done :)
> Thanks!
> --
> Michael Mior
> mm...@apache.org
>
> Le jeu. 14 mars 2019 à 19:03, Stamatis Zampetakis  a
> écrit :
> >
> > Thanks for noticing Michael.
> >
> > Actually, I started doing it at some point but then there were
> > inconsistencies between master, site, and svn, so I decided to do it
> after
> > the release where everything is in line.
> >
> > On Thu, Mar 14, 2019, 10:15 PM Francis Chuang 
> > wrote:
> >
> > > I also noticed Hongze was not added to the community page as well. Once
> > > Kevin releases 1.19.0, we should add both of them to the page.
> > >
> > > On 15/03/2019 2:11 am, Michael Mior wrote:
> > > > I just noticed that Stamatis was never added to the community page
> > > > site. Stamatis, feel free to add yourself once the freeze for the
> > > > current release is over. Otherwise, I'm happy to do so.
> > > > --
> > > > Michael Mior
> > > > mm...@apache.org
> > > >
> > > > Le mer. 30 janv. 2019 à 13:01, Jesus Camacho Rodriguez
> > > >  a écrit :
> > > >>
> > > >> Apache Calcite's Project Management Committee (PMC) has invited
> > > >> Stamatis Zampetakis to become a committer, and we are pleased to
> > > >> announce that he has accepted.
> > > >>
> > > >> Over the past few months, Stamatis has made several contributions to
> > > >> Calcite and he is a very active participant in discussions in issues
> > > >> and mailing lists.
> > > >>
> > > >> Stamatis, welcome, thank you for your contributions, and we look
> > > >> forward your further interactions with the community! If you wish,
> > > >> please feel free to tell us more about yourself and what you are
> > > >> working on.
> > > >>
> > > >> Jesús (on behalf of the Apache Calcite PMC)
> > >
> > >
>


[CALCITE-2843] PR review request

2019-03-13 Thread Muhammad Gelbana
Could someone kindly review this PR, please?
https://github.com/apache/calcite/pull/1066

Thanks,
Gelbana


Re: CALCITE-2905: Maven -> Gradle: any thoughts

2019-03-10 Thread Muhammad Gelbana
I'm always in favor of anything that would lower our build time, and
apparently Gradle supports parallel execution[1].
Will this ease importing the project into Eclipse? That is usually a
problem for me: I have to close projects to avoid displaying their build
errors, define source folders, run mvn eclipse:eclipse (and some say I
don't have to), and still a couple of projects show build errors in
Eclipse.

Do you know if Gradle will make life easier with Eclipse?

[1] https://guides.gradle.org/performance/#easy_improvements

Thanks,
Gelbana


On Sun, Mar 10, 2019 at 11:35 AM Vladimir Sitnikov <
sitnikov.vladi...@gmail.com> wrote:

> Hi,
>
> I wonder what you think of migrating Maven to Gradle.
>
> I think one of the main points for having Gradle would be:
> 1) Eliminate "mvn install" for local testing. Calcite consists of
> multiple Maven modules, however Maven always uses jars from the local
> repository.
> That is if you modify a file in "core", then you can't just invoke mvn
> test from "cassandra". You have to "mvn install" "core" first.
> There are workarounds (e.g. "mvn install" all the modules every time)
>
> In Gradle, "multi-module" build feels more like "always composite
> module". In other words, even if you invoke "build" task from within
> "core" module, Gradle would find all the modules in current project,
> it would compute all the dependencies and build accordingly.
> In my opinion it makes a big difference.
>
> There's a support for cross-project incremental builds as well. I
> haven't used that, however the idea there is that one can have
> "calcite" and "drill" as different Gradle projects, however one can
> modify a file in Calcite and invoke "build" from a Drill folder. It
> would build Calcite first.
>
> 2) Maven task/plugins often fail to declare inputs/outputs. This
> causes issues like MSHARED-394: Avoid rewrite of destination in
> DefaultMavenFileFilter#filterFile when producing the same contents.
> Gradle encourages task authors to declare inputs/outputs (e.g. files or
> just property values) and it enables the build system to track stale
> tasks.
>
> Gradle supports "buildSrc" folder which can contain code that is
> available to the build files of the current project. It enables one to
> express build logic in a much sounder programming language than
> XML.
>
> Vladimir
>
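To make the multi-module point above concrete, here is a minimal sketch
of a Gradle settings file (the module layout is hypothetical, not the
actual migration):

```groovy
// settings.gradle — hypothetical module layout
rootProject.name = 'calcite'
include 'core', 'cassandra', 'mongodb'

// With this in place, running `gradle :cassandra:test` after editing a
// file in core recompiles core first; there is no "mvn install" step
// against a local repository.
```

Because Gradle always treats the build as one composite project, the
dependency graph between modules is recomputed on every invocation, which
is what removes the stale-local-jar problem Maven has.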


Re: CALCITE-2457. JUnit5 migration.

2019-03-10 Thread Muhammad Gelbana
The only benefit I can think of from updating JUnit, or our Maven plugins
in general, is getting closer to a faster build, if we leverage Maven's
or JUnit's parallel execution features.

Unfortunately, parallel execution is still experimental in both Maven and JUnit 5.
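For reference, JUnit 5's parallel mode is switched on via a
junit-platform.properties file on the test classpath; the property names
below are from the JUnit 5 user guide (where the feature is indeed marked
experimental):

```properties
# src/test/resources/junit-platform.properties
junit.jupiter.execution.parallel.enabled = true
# Run top-level test classes and methods concurrently by default.
junit.jupiter.execution.parallel.mode.default = concurrent
```

Individual classes or methods can opt out with `@Execution(SAME_THREAD)`
if they mutate shared state.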

But it's a good idea to stay only a couple of steps behind, rather than
having to make a lot of changes at once.

Thanks,
Gelbana


On Sun, Mar 10, 2019 at 5:32 AM Andrei Sereda  wrote:

> We already require maven 3.5.2  (or newer)
>
> On Sat, Mar 9, 2019, 21:49 YuZhao Chan  wrote:
>
> > Maven 3.6.0 seems like too new a version; most developers now use
> > Maven 3.2.x and 3.3.x. I think it will be bad to have test cases
> > fail just because the Maven version is old.
> >
> > Best,
> > YuZhao Chen
> > On Mar 10, 2019 at 1:28 AM +0800, Andrei Sereda wrote:
> > > Greetings,
> > >
> > >
> > > I would like to start a gradual migration of calcite test codebase to
> > > [JUnit5](https://junit.org/junit5/). The plan is to do in several
> steps
> > > outlined below :
> > >
> > > 1. Upgrade maven wrapper to 3.6.0 (surefire plugin needs to work with
> > > JUnit5 >= 2.22.0). Maybe enforce maven 3.6.0 during builds.
> > > 2. Add new dependencies to maven pom (jupiter and vantage).
> > > 3. Migrate all basic tests to new JUnit5 API. Basic in this context
> means
> > > tests without [rules](https://github.com/junit-team/junit4/wiki/rules)
> > or
> > > [runners](
> > >
> >
> https://github.com/junit-team/junit4/wiki/test-runners#runwith-annotation)
> > > just basic `@Test` / `@Before` / `@Ignore` annotations. Code where I
> can
> > > just apply string/replace and make it work in JUnit5.
> > > 4. Migrate remaining tests (with `@Parameterized` / `@ClassRule` etc.).
> > For
> > > example, I will have to write custom extensions for existing elastic /
> > > mongo / cassandra / geode class rules.
> > >
> > > For developers that means you will need to have a reasonably recent
> IDE /
> > > Maven:
> > > 1. For IntelliJ this is >= 2016.2
> > > 2. For Eclipse this is >= Oxygen.1a (4.7.1a)
> > > 3. For Maven >= 3.6.0 (released on 2018-10-24)
> > >
> > > Questions to fellow calcitians:
> > >
> > > 1. Do you agree with JUnit5 migration ?
> > > 2. Do you agree with the plan ?
> > > 3. Should I wait for 1.20 release ?
> > > 4. Anything I missed ?
> > >
> > > Regards,
> > > Andrei.
> >
>


[jira] [Created] (CALCITE-2901) RexSubQuery.scalar needs to allow specifying a different nullability value instead of the hard coded "true" value

2019-03-07 Thread Muhammad Gelbana (JIRA)
Muhammad Gelbana created CALCITE-2901:
-

 Summary: RexSubQuery.scalar needs to allow specifying a different 
nullability value instead of the hard coded "true" value
 Key: CALCITE-2901
 URL: https://issues.apache.org/jira/browse/CALCITE-2901
 Project: Calcite
  Issue Type: Improvement
  Components: core
Affects Versions: 1.18.0
Reporter: Muhammad Gelbana


The RexSubQuery.scalar(RelNode rel) method creates a subquery node with a 
hard-coded nullability value of *true*, which might not always be valid.
{code:java}
public static RexSubQuery scalar(RelNode rel) {
  final List<RelDataTypeField> fieldList = rel.getRowType().getFieldList();
  assert fieldList.size() == 1;
  final RelDataTypeFactory typeFactory = rel.getCluster().getTypeFactory();
  final RelDataType type =
      typeFactory.createTypeWithNullability(fieldList.get(0).getType(), true);
  return new RexSubQuery(type, SqlStdOperatorTable.SCALAR_QUERY,
      ImmutableList.of(), rel);
}
{code}
I propose a slight change: update the method's signature to accept a 
boolean flag specifying the nullability of the subquery's type, alongside 
an overloading method that calls the modified one with the hard-coded 
*true* nullability for backward compatibility.

Please tell me if such change is acceptable so I can do it.
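A pseudocode-level sketch of the proposed shape (hedged: this is the
ticket's suggestion, not committed Calcite API — the overload and its
delegation are illustrative):

{code:java}
// Existing signature keeps its behavior by delegating with the current
// hard-coded nullability.
public static RexSubQuery scalar(RelNode rel) {
  return scalar(rel, true);
}

// Proposed overload: the caller decides whether the scalar subquery's
// type is nullable.
public static RexSubQuery scalar(RelNode rel, boolean nullable) {
  final List<RelDataTypeField> fieldList = rel.getRowType().getFieldList();
  assert fieldList.size() == 1;
  final RelDataTypeFactory typeFactory = rel.getCluster().getTypeFactory();
  final RelDataType type =
      typeFactory.createTypeWithNullability(
          fieldList.get(0).getType(), nullable);
  return new RexSubQuery(type, SqlStdOperatorTable.SCALAR_QUERY,
      ImmutableList.of(), rel);
}
{code}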



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: An exception after rewriting a casting expression to a scalar subquery

2019-03-07 Thread Muhammad Gelbana
https://issues.apache.org/jira/browse/CALCITE-2901

Thanks,
Gelbana


On Thu, Mar 7, 2019 at 11:33 AM Stamatis Zampetakis 
wrote:

> Sorry, I meant to write instead of
>
> final RelDataType type =
> typeFactory.createTypeWithNullability(fieldList.get(0).getType(), true);
>
> in my previous email.
>
> I think it would make sense to change the hardcoded value but I didn't try
> to see if there are test failures after the change, nor do I know the
> original motive of setting the value to true.
>
> Try it on and log a JIRA case to continue the discussion there.
>
> Best,
> Stamatis
>
>
>
> On Thu, Mar 7, 2019 at 10:16 AM, Muhammad Gelbana <
> m.gelb...@gmail.com> wrote:
>
> > Actually, the type is derived this way:
> >
> > final RelDataType type =
> > typeFactory.createTypeWithNullability(fieldList.get(0).getType(),
> *true*);
> >
> > Reference:
> >
> >
> https://github.com/apache/calcite/blob/d10aeb7f7e50dc7028ce102a5f590d0c50c49fa8/core/src/main/java/org/apache/calcite/rex/RexSubQuery.java#L99
> >
> > If you believe it's valid to provide a way to override this hard-coded
> > nullability flag, I would love to do it.
> >
> > Thanks,
> > Gelbana
> >
> > On Thu, Mar 7, 2019 at 9:52 AM Stamatis Zampetakis 
> > wrote:
> >
> > > Hi Gelbana,
> > >
> > > I am not sure why the scalar type is always nullable at this part of
> the
> > > code but I would expect that the type is obtained as follows:
> > >
> > > final RelDataType type =
> > typeFactory.copyType(fieldList.get(0).getType());
> > > // which copies also the nullability of the type
> > >
> > > instead of
> > >
> > > final RelDataType type =
> > > typeFactory.createTypeWithNullability(fieldList.get(0).getType(),
> > > fieldList.get(0).getType().isNullable());
> > >
> > > Best,
> > > Stamatis
> > >
> > > On Tue, Mar 5, 2019 at 2:01 PM, Muhammad Gelbana <
> > > m.gelb...@gmail.com> wrote:
> > >
> > > > I'm trying to rewrite the below query to be
> > > > SELECT (SELECT PRONAME FROM PG_PROC WHERE OID = col1) FROM (VALUES
> > > > ('array_in', 'array_out')) as tbl(col1, col2)
> > > >
> > > > When I try to test my code using this query
> > > > SELECT col1::regproc FROM (VALUES ('array_in', 'array_out')) as
> > tbl(col1,
> > > > col2)
> > > >
> > > > The casting expression (col1::regproc) type is derived as *not*
> > nullable
> > > > because the casting is applied on a column selected from VALUES.
> > > >
> > > > But RexSubQuery.scalar[1] always returns a RelNode with a nullable
> > type.
> > > >
> > > > The exception I get when I try to run the query after rewriting is:
> > > >
> > > > set type is RecordType(REGPROC *NOT NULL* EXPR$0) NOT NULL
> > > > expression type is RecordType(REGPROC EXPR$0) NOT NULL
> > > > set is
> > > >
> > >
> >
> rel#11:LogicalProject.NONE.[0](input=HepRelVertex#10,EXPR$0=$SCALAR_QUERY({
> > > > LogicalFilter(condition=[=($1, $0)])
> > > >   LogicalProject(PRONAME=[$0])
> > > > LogicalTableScan(table=[[PG_CATALOG, PG_PROC]])
> > > > }))
> > > > expression is LogicalProject(EXPR$0=[$2])
> > > >   LogicalJoin(condition=[true], joinType=[left])
> > > > LogicalValues(tuples=[[{ 'array_in', 'array_out' }]])
> > > > LogicalAggregate(group=[{}], agg#0=[SINGLE_VALUE($0)])
> > > >   LogicalFilter(condition=[=($1, $0)])
> > > > LogicalProject(PRONAME=[$0])
> > > >   LogicalTableScan(table=[[PG_CATALOG, PG_PROC]])
> > > >
> > > > at
> > > >
> > > >
> > >
> >
> org.apache.calcite.plan.RelOptUtil.verifyTypeEquivalence(RelOptUtil.java:381)
> > > > at
> > > >
> > org.apache.calcite.plan.hep.HepRuleCall.transformTo(HepRuleCall.java:57)
> > > > at
> > > >
> > >
> >
> org.apache.calcite.plan.RelOptRuleCall.transformTo(RelOptRuleCall.java:234)
> > > > at
> > > >
> > > >
> > >
> >
> org.apache.calcite.rel.rules.SubQueryRemoveRule$SubQueryProjectRemoveRule.onMatch(SubQueryRemoveRule.java:518)
> > > >
> > > > Shouldn't we be able to specify if the scalar query type created by
> > > > RexSubQuery.scalar is nullable or not ?
> > > >
> > > > [1]
> > > >
> > > >
> > >
> >
> https://github.com/apache/calcite/blob/d10aeb7f7e50dc7028ce102a5f590d0c50c49fa8/core/src/main/java/org/apache/calcite/rex/RexSubQuery.java#L94
> > > >
> > > > Thanks,
> > > > Gelbana
> > > >
> > >
> >
>


Re: An exception after rewriting a casting expression to a scalar subquery

2019-03-07 Thread Muhammad Gelbana
Actually the type is derived this way

final RelDataType type =
typeFactory.createTypeWithNullability(fieldList.get(0).getType(), *true*);

Reference:
https://github.com/apache/calcite/blob/d10aeb7f7e50dc7028ce102a5f590d0c50c49fa8/core/src/main/java/org/apache/calcite/rex/RexSubQuery.java#L99

If you believe it's valid to provide a way to override this hard-coded
nullability flag, I would love to do it.

Thanks,
Gelbana
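To make the difference concrete, here is a small self-contained sketch of the two derivations. This is a toy model, not Calcite's actual RelDataTypeFactory API; the method names only mirror the ones discussed above.

```java
// Toy model of a data type carrying a nullability flag. Illustrative only:
// this is NOT Calcite's API, it just shows how copying a type preserves
// nullability while createTypeWithNullability(type, true) forces it.
public class Main {
    record RelDataType(String sqlTypeName, boolean nullable) {}

    // Analogue of typeFactory.copyType(type): the nullability of the
    // source type is preserved.
    static RelDataType copyType(RelDataType t) {
        return new RelDataType(t.sqlTypeName(), t.nullable());
    }

    // Analogue of typeFactory.createTypeWithNullability(type, true):
    // the result is nullable regardless of the source type.
    static RelDataType createTypeWithNullability(RelDataType t, boolean nullable) {
        return new RelDataType(t.sqlTypeName(), nullable);
    }

    public static void main(String[] args) {
        RelDataType notNullRegproc = new RelDataType("REGPROC", false);
        System.out.println(copyType(notNullRegproc));                        // stays NOT NULL
        System.out.println(createTypeWithNullability(notNullRegproc, true)); // forced nullable
    }
}
```

With the hardcoded true flag, a REGPROC NOT NULL input always comes out nullable, which is the mismatch behind the verifyTypeEquivalence failure discussed in this thread.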

On Thu, Mar 7, 2019 at 9:52 AM Stamatis Zampetakis 
wrote:

> Hi Gelbana,
>
> I am not sure why the scalar type is always nullable at this part of the
> code but I would expect that the type is obtained as follows:
>
> final RelDataType type = typeFactory.copyType(fieldList.get(0).getType());
> // which copies also the nullability of the type
>
> instead of
>
> final RelDataType type =
> typeFactory.createTypeWithNullability(fieldList.get(0).getType(),
> fieldList.get(0).getType().isNullable());
>
> Best,
> Stamatis
>
> On Tue, Mar 5, 2019 at 2:01 PM Muhammad Gelbana <
> m.gelb...@gmail.com> wrote:
>
> > I'm trying to rewrite the below query to be
> > SELECT (SELECT PRONAME FROM PG_PROC WHERE OID = col1) FROM (VALUES
> > ('array_in', 'array_out')) as tbl(col1, col2)
> >
> > When I try to test my code using this query
> > SELECT col1::regproc FROM (VALUES ('array_in', 'array_out')) as tbl(col1,
> > col2)
> >
> > The casting expression (col1::regproc) type is derived as *not* nullable
> > because the casting is applied on a column selected from VALUES.
> >
> > But RexSubQuery.scalar[1] always returns a RelNode with a nullable type.
> >
> > The exception I get when I try to run the query after rewriting is:
> >
> > set type is RecordType(REGPROC *NOT NULL* EXPR$0) NOT NULL
> > expression type is RecordType(REGPROC EXPR$0) NOT NULL
> > set is
> >
> rel#11:LogicalProject.NONE.[0](input=HepRelVertex#10,EXPR$0=$SCALAR_QUERY({
> > LogicalFilter(condition=[=($1, $0)])
> >   LogicalProject(PRONAME=[$0])
> > LogicalTableScan(table=[[PG_CATALOG, PG_PROC]])
> > }))
> > expression is LogicalProject(EXPR$0=[$2])
> >   LogicalJoin(condition=[true], joinType=[left])
> > LogicalValues(tuples=[[{ 'array_in', 'array_out' }]])
> > LogicalAggregate(group=[{}], agg#0=[SINGLE_VALUE($0)])
> >   LogicalFilter(condition=[=($1, $0)])
> > LogicalProject(PRONAME=[$0])
> >   LogicalTableScan(table=[[PG_CATALOG, PG_PROC]])
> >
> > at
> >
> >
> org.apache.calcite.plan.RelOptUtil.verifyTypeEquivalence(RelOptUtil.java:381)
> > at
> > org.apache.calcite.plan.hep.HepRuleCall.transformTo(HepRuleCall.java:57)
> > at
> >
> org.apache.calcite.plan.RelOptRuleCall.transformTo(RelOptRuleCall.java:234)
> > at
> >
> >
> org.apache.calcite.rel.rules.SubQueryRemoveRule$SubQueryProjectRemoveRule.onMatch(SubQueryRemoveRule.java:518)
> >
> > Shouldn't we be able to specify if the scalar query type created by
> > RexSubQuery.scalar is nullable or not ?
> >
> > [1]
> >
> >
> https://github.com/apache/calcite/blob/d10aeb7f7e50dc7028ce102a5f590d0c50c49fa8/core/src/main/java/org/apache/calcite/rex/RexSubQuery.java#L94
> >
> > Thanks,
> > Gelbana
> >
>


An exception after rewriting a casting expression to a scalar subquery

2019-03-05 Thread Muhammad Gelbana
I'm trying to rewrite the below query to be
SELECT (SELECT PRONAME FROM PG_PROC WHERE OID = col1) FROM (VALUES
('array_in', 'array_out')) as tbl(col1, col2)

When I try to test my code using this query
SELECT col1::regproc FROM (VALUES ('array_in', 'array_out')) as tbl(col1,
col2)

The casting expression (col1::regproc) type is derived as *not* nullable
because the casting is applied on a column selected from VALUES.

But RexSubQuery.scalar[1] always returns a RelNode with a nullable type.

The exception I get when I try to run the query after rewriting is:

set type is RecordType(REGPROC *NOT NULL* EXPR$0) NOT NULL
expression type is RecordType(REGPROC EXPR$0) NOT NULL
set is
rel#11:LogicalProject.NONE.[0](input=HepRelVertex#10,EXPR$0=$SCALAR_QUERY({
LogicalFilter(condition=[=($1, $0)])
  LogicalProject(PRONAME=[$0])
LogicalTableScan(table=[[PG_CATALOG, PG_PROC]])
}))
expression is LogicalProject(EXPR$0=[$2])
  LogicalJoin(condition=[true], joinType=[left])
LogicalValues(tuples=[[{ 'array_in', 'array_out' }]])
LogicalAggregate(group=[{}], agg#0=[SINGLE_VALUE($0)])
  LogicalFilter(condition=[=($1, $0)])
LogicalProject(PRONAME=[$0])
  LogicalTableScan(table=[[PG_CATALOG, PG_PROC]])

at
org.apache.calcite.plan.RelOptUtil.verifyTypeEquivalence(RelOptUtil.java:381)
at
org.apache.calcite.plan.hep.HepRuleCall.transformTo(HepRuleCall.java:57)
at
org.apache.calcite.plan.RelOptRuleCall.transformTo(RelOptRuleCall.java:234)
at
org.apache.calcite.rel.rules.SubQueryRemoveRule$SubQueryProjectRemoveRule.onMatch(SubQueryRemoveRule.java:518)

Shouldn't we be able to specify if the scalar query type created by
RexSubQuery.scalar is nullable or not ?

[1]
https://github.com/apache/calcite/blob/d10aeb7f7e50dc7028ce102a5f590d0c50c49fa8/core/src/main/java/org/apache/calcite/rex/RexSubQuery.java#L94

Thanks,
Gelbana
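For illustration, the kind of check that RelOptUtil.verifyTypeEquivalence performs can be sketched as follows. This is a simplified stand-in written from scratch, assuming only that nullability is part of type identity; it is not Calcite's implementation.

```java
// Simplified stand-in for RelOptUtil.verifyTypeEquivalence: two row types
// are considered equivalent only if every field matches in both SQL type
// name AND nullability.
public class Main {
    record FieldType(String sqlTypeName, boolean nullable) {}

    static boolean typesEquivalent(FieldType[] expected, FieldType[] actual) {
        if (expected.length != actual.length) {
            return false;
        }
        for (int i = 0; i < expected.length; i++) {
            if (!expected[i].sqlTypeName().equals(actual[i].sqlTypeName())
                    || expected[i].nullable() != actual[i].nullable()) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // "set type is RecordType(REGPROC NOT NULL ...)" vs a nullable REGPROC:
        FieldType[] setType = { new FieldType("REGPROC", false) };
        FieldType[] exprType = { new FieldType("REGPROC", true) };
        System.out.println(typesEquivalent(setType, exprType)); // false
    }
}
```

The nullability difference alone is enough to make the check fail, which matches the exception above even though both sides are REGPROC.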


Re: Some issues about using calcite

2019-03-03 Thread Muhammad Gelbana
Have you tried the same methods with other databases ?

On Sun, Mar 3, 2019, 8:36 AM 李天安  wrote:

> Hi,
> My name is Tianan Li. Recently I have been using your project Calcite
> v1.18.0, and I have met some issues that I need your help with.
> When I connect to PostgreSQL via Calcite, I found that I cannot
> get the remarks of tables and columns via the DatabaseMetaData.getTables and
> DatabaseMetaData.getColumns methods. What's more, I cannot get the primary
> keys of a table via the DatabaseMetaData.getPrimaryKeys method.
> Is there something wrong with my usage, or is this a bug?
> Thanks, I'm looking forward to your reply.
>
> Best,
> Tianan Li
>


Re: Supporting PostgreSQL OID casts

2019-02-26 Thread Muhammad Gelbana
I believe
org.apache.calcite.prepare.CalcitePrepareImpl.createPlanner(Context,
Context, RelOptCostFactory) is the correct location to add my rule(s). But
since this should only run when the praser is Babel, I tried to find that
information from within the mentioned method (*createPlanner*) but I
couldn't. Am I missing something or should I pass through such information
to the *createPlanner* method ?

Another thing, the expression may be in a "Project" too (i.e. SELECT
typname::regproc FROM pg_catalog.pg_type WHERE typname LIKE 'bool')
I suppose I'll have to have 2 rules. One for Project and another for Filter.

Thanks,
Gelbana


On Tue, Feb 26, 2019 at 4:12 PM Michael Mior  wrote:

> Writing a rule would certainly work. You would want to match a Filter
> RelNode and then perhaps use an implementation of RexShuttle on the
> condition and implement visitCall to check for appropriate casts to be
> transformed into the subquery you want using RexSubQuery.scalar to
> generate the new query.
>
> --
> Michael Mior
> mm...@apache.org
>
> Le mar. 26 févr. 2019 à 05:55, Muhammad Gelbana  a
> écrit :
> >
> > I'm willing to implement running PostgreSQL queries involving OID casts
> >
> > *For example:*
> > SELECT * FROM pg_attribute
> > WHERE attrelid = 'mytable'::regclass;
> >
> > *Should be executed as:*
> > SELECT * FROM pg_attribute
> > WHERE attrelid = (SELECT oid FROM pg_class WHERE relname = 'mytable');
> >
> > Note that *regclass* maps to the *pg_class* table.
> >
> > What is the "calcite way" to implement this ? Should I write a rule ?
> > Or should I rewrite the query before implementing it (I'm not sure where
> is
> > that) ?
> >
> > Thanks,
> > Gelbana
>


Supporting PostgreSQL OID casts

2019-02-26 Thread Muhammad Gelbana
I'm willing to implement running PostgreSQL queries involving OID casts

*For example:*
SELECT * FROM pg_attribute
WHERE attrelid = 'mytable'::regclass;

*Should be executed as:*
SELECT * FROM pg_attribute
WHERE attrelid = (SELECT oid FROM pg_class WHERE relname = 'mytable');

Note that *regclass* maps to the *pg_class* table.

What is the "calcite way" to implement this ? Should I write a rule ?
Or should I rewrite the query before implementing it (I'm not sure where is
that) ?

Thanks,
Gelbana
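As a purely textual illustration of the intended transformation (a real solution would rewrite the relational plan via a rule operating on RelNodes/RexNodes, not the SQL string), the regclass case could be sketched as:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class Main {
    // Illustrative string-level rewrite of 'name'::regclass into the
    // equivalent scalar subquery over pg_class. This is only a sketch of
    // the target transformation, not how a Calcite rule would do it.
    static final Pattern REGCLASS = Pattern.compile("'([^']+)'::regclass");

    static String rewrite(String sql) {
        Matcher m = REGCLASS.matcher(sql);
        // $1 refers to the quoted relation name captured by the pattern.
        return m.replaceAll("(SELECT oid FROM pg_class WHERE relname = '$1')");
    }

    public static void main(String[] args) {
        System.out.println(rewrite(
            "SELECT * FROM pg_attribute WHERE attrelid = 'mytable'::regclass"));
    }
}
```

The same pattern generalizes to the other OID alias types (regproc over pg_proc, and so on), each mapping to a lookup on its catalog table.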


Re: Failed to parse a PostgreSQL query using the Babel conformance

2019-02-23 Thread Muhammad Gelbana
Nevermind, I successfully parsed the operator and all test cases are
passing. I'm working on implementing the operator now.

Thanks,
Gelbana


On Thu, Feb 21, 2019 at 12:08 AM Muhammad Gelbana 
wrote:

> I'm struggling with parsing the expression properly. If I simply add the
> operator (i.e. ::) to the binary operators list, the query is parsed, but
> the operand that is supposed to be a type is parsed as an identifier
> instead. Eventually the validation fails because that identifier (e.g.
> integer, regproc, etc.) isn't found in any table, which is expected because
> it's a type (i.e. a keyword), not an identifier.
>
> Could someone guide me on this please?
>
> I also need some help understanding this part of the parser:
> ---
> LOOKAHEAD(3) op = BinaryRowOperator() {
> checkNonQueryExpression(exprContext);
> list.add(new SqlParserUtil.ToTreeListItem(op, getPos()));
> }
> Expression2b(ExprContext.ACCEPT_SUB_QUERY, list)
> ---
>
> To me, this looks like the operator is consumed before its operands.
> Shouldn't this expression be something like
> ---
> list.add(new SqlParserUtil.ToTreeListItem(SimpleIdentifier(), getPos()));
> // LHS operand
> list.add(new SqlParserUtil.ToTreeListItem(BinaryRowOperator(), getPos()));
> // Binary operator
> list.add(new SqlParserUtil.ToTreeListItem(SimpleIdentifier(), getPos()));
> // RHS operand
> ---
> How is it possible to identify the operator before its operands ?!
>
> Thanks,
> Gelbana
>
>
> On Fri, Feb 15, 2019 at 9:49 PM Julian Hyde  wrote:
>
>> I’ve added comments to the JIRA case.
>>
>> > On Feb 15, 2019, at 5:22 AM, Muhammad Gelbana 
>> wrote:
>> >
>> > Here is what I've done so far for CALCITE-2843
>> > <https://issues.apache.org/jira/browse/CALCITE-2843>:
>> > https://github.com/MGelbana/calcite/pull/1/files
>> > I appreciate a quick overview and guidance if I'm going in the wrong
>> > direction.
>> >
>> > Thanks,
>> > Gelbana
>> >
>> >
>> > On Thu, Feb 14, 2019 at 5:57 PM Muhammad Gelbana 
>> > wrote:
>> >
>> >> @Stamatis, I very much appreciate you taking the time to comment on the
>> issues
>> >> I opened based on this thread. I'm currently going through Babel's
>> Parser.jj
>> >> file and JavaCC documentations trying to understand what I need to do
>> and
>> >> where.
>> >>
>> >> Considering you're probably more acquainted than I am. I'll gladly work
>> >> with you on a branch to fix this, based on your instructions of course.
>> >> Otherwise, I'll continue working on my own.
>> >>
>> >> Thanks,
>> >> Gelbana
>> >>
>> >>
>> >> On Mon, Feb 11, 2019 at 11:31 PM Muhammad Gelbana > >
>> >> wrote:
>> >>
>> >>> Your replies are very much appreciated. I'll see what I can do.
>> >>>
>> >>> @Julian, I believe '=' acts as a boolean operator here because the
>> query
>> >>> returns boolean results for that part of the selection.
>> >>>
>> >>> Thanks,
>> >>> Gelbana
>> >>>
>> >>>
>> >>> On Mon, Feb 11, 2019 at 8:38 PM Julian Hyde  wrote:
>> >>>
>> >>>> There are a few Postgres-isms in that SQL:
>> >>>> The “::” (as a shorthand for cast) in 'typinput='array_in'::regproc
>> >>>> The ‘=‘ (as a shorthand for alias) in 'typinput='array_in'::regproc’
>> >>>> Use of a table function without the ’TABLE’ keyword, in 'from
>> >>>> generate_series(1, array_upper(current_schemas(false), 1))’
>> >>>>
>> >>>> Babel does not handle any of those right now, but it could.
>> >>>> Contributions welcome.
>> >>>>
>> >>>> Julian
>> >>>>
>> >>>>
>> >>>>> On Feb 11, 2019, at 6:14 AM, Stamatis Zampetakis > >
>> >>>> wrote:
>> >>>>>
>> >>>>> Hi Gelbana,
>> >>>>>
>> >>>>> In order to use the Babel parser you need to also set an appropriate
>> >>>>> factory to your parser configuration since
>> >>>>> setting only the conformance is not enough.
>> >>>>>
>> >>>>> Try adding the following:
>> >>>>> ...
>> >>>>> configBuilder().setParserFactory(SqlBabelParserImpl.FACTORY);

Re: Failed to parse a PostgreSQL query using the Babel conformance

2019-02-20 Thread Muhammad Gelbana
I'm struggling with parsing the expression properly. If I simply add the
operator (i.e. ::) to the binary operators list, the query is parsed, but
the operand that is supposed to be a type is parsed as an identifier
instead. Eventually the validation fails because that identifier (e.g.
integer, regproc, etc.) isn't found in any table, which is expected because
it's a type (i.e. a keyword), not an identifier.

Could someone guide me on this please?

I also need some help understanding this part of the parser:
---
LOOKAHEAD(3) op = BinaryRowOperator() {
checkNonQueryExpression(exprContext);
list.add(new SqlParserUtil.ToTreeListItem(op, getPos()));
}
Expression2b(ExprContext.ACCEPT_SUB_QUERY, list)
---

To me, this looks like the operator is consumed before its operands.
Shouldn't this expression be something like
---
list.add(new SqlParserUtil.ToTreeListItem(SimpleIdentifier(), getPos()));
// LHS operand
list.add(new SqlParserUtil.ToTreeListItem(BinaryRowOperator(), getPos()));
// Binary operator
list.add(new SqlParserUtil.ToTreeListItem(SimpleIdentifier(), getPos()));
// RHS operand
---
How is it possible to identify the operator before its operands ?!
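For reference, the shape of that production matches the usual infix-expression loop: the left operand is consumed before the loop starts (by the preceding Expression2b call), and each loop iteration then reads an (operator, right-operand) pair. A minimal self-contained sketch, illustrative only and not the actual JavaCC grammar:

```java
import java.util.ArrayList;
import java.util.List;

public class Main {
    // Minimal illustration of an infix production of the form
    //   Expression := Operand ( BinaryOp Operand )*
    // The left operand is appended BEFORE the operator loop, so inside the
    // loop each operator is read before only its RIGHT-hand operand; the
    // left-hand side was already consumed by the earlier Expression2b call.
    static List<String> parse(String[] tokens) {
        List<String> list = new ArrayList<>();
        int i = 0;
        list.add(tokens[i++]);         // Expression2b: left operand
        while (i < tokens.length) {
            list.add(tokens[i++]);     // BinaryRowOperator
            list.add(tokens[i++]);     // Expression2b: right operand
        }
        return list;
    }

    public static void main(String[] args) {
        System.out.println(parse(new String[] {"col1", "::", "regproc"}));
    }
}
```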

Thanks,
Gelbana


On Fri, Feb 15, 2019 at 9:49 PM Julian Hyde  wrote:

> I’ve added comments to the JIRA case.
>
> > On Feb 15, 2019, at 5:22 AM, Muhammad Gelbana 
> wrote:
> >
> > Here is what I've done so far for CALCITE-2843
> > <https://issues.apache.org/jira/browse/CALCITE-2843>:
> > https://github.com/MGelbana/calcite/pull/1/files
> > I appreciate a quick overview and guidance if I'm going in the wrong
> > direction.
> >
> > Thanks,
> > Gelbana
> >
> >
> > On Thu, Feb 14, 2019 at 5:57 PM Muhammad Gelbana 
> > wrote:
> >
> >> @Stamatis, I very much appreciate you taking the time to comment on the
> issues
> >> I opened based on this thread. I'm currently going through Babel's
> Parser.jj
> >> file and JavaCC documentations trying to understand what I need to do
> and
> >> where.
> >>
> >> Considering you're probably more acquainted than I am. I'll gladly work
> >> with you on a branch to fix this, based on your instructions of course.
> >> Otherwise, I'll continue working on my own.
> >>
> >> Thanks,
> >> Gelbana
> >>
> >>
> >> On Mon, Feb 11, 2019 at 11:31 PM Muhammad Gelbana 
> >> wrote:
> >>
> >>> Your replies are very much appreciated. I'll see what I can do.
> >>>
> >>> @Julian, I believe '=' acts as a boolean operator here because the
> query
> >>> returns boolean results for that part of the selection.
> >>>
> >>> Thanks,
> >>> Gelbana
> >>>
> >>>
> >>> On Mon, Feb 11, 2019 at 8:38 PM Julian Hyde  wrote:
> >>>
> >>>> There are a few Postgres-isms in that SQL:
> >>>> The “::” (as a shorthand for cast) in 'typinput='array_in'::regproc
> >>>> The ‘=‘ (as a shorthand for alias) in 'typinput='array_in'::regproc’
> >>>> Use of a table function without the ’TABLE’ keyword, in 'from
> >>>> generate_series(1, array_upper(current_schemas(false), 1))’
> >>>>
> >>>> Babel does not handle any of those right now, but it could.
> >>>> Contributions welcome.
> >>>>
> >>>> Julian
> >>>>
> >>>>
> >>>>> On Feb 11, 2019, at 6:14 AM, Stamatis Zampetakis 
> >>>> wrote:
> >>>>>
> >>>>> Hi Gelbana,
> >>>>>
> >>>>> In order to use the Babel parser you need to also set an appropriate
> >>>>> factory to your parser configuration since
> >>>>> setting only the conformance is not enough.
> >>>>>
> >>>>> Try adding the following:
> >>>>> ...
> >>>>> configBuilder().setParserFactory(SqlBabelParserImpl.FACTORY);
> >>>>>
> >>>>> Having said that I am not sure if Babel can handle the syntax you
> >>>> provided.
> >>>>>
> >>>>> Best,
> >>>>> Stamatis
> >>>>>
> >>>>>
> >>>>>
> >>>>> On Sat, Feb 9, 2019 at 10:46 PM Muhammad Gelbana <
> >>>>> m.gelb...@gmail.com> wrote:
> >>>>>
> >>>>>> I'm trying to parse a PostgreSQL metadata query but a parsing
> >>>> exception is
> >>>>>> thrown.
> >>>>

Re: Failed to parse a PostgreSQL query using the Babel conformance

2019-02-15 Thread Muhammad Gelbana
Here is what I've done so far for CALCITE-2843
<https://issues.apache.org/jira/browse/CALCITE-2843>:
https://github.com/MGelbana/calcite/pull/1/files
I appreciate a quick overview and guidance if I'm going in the wrong
direction.

Thanks,
Gelbana


On Thu, Feb 14, 2019 at 5:57 PM Muhammad Gelbana 
wrote:

> @Stamatis, I very much appreciate you taking the time to comment on the issues
> I opened based on this thread. I'm currently going through Babel's Parser.jj
> file and JavaCC documentations trying to understand what I need to do and
> where.
>
> Considering you're probably more acquainted than I am. I'll gladly work
> with you on a branch to fix this, based on your instructions of course.
> Otherwise, I'll continue working on my own.
>
> Thanks,
> Gelbana
>
>
> On Mon, Feb 11, 2019 at 11:31 PM Muhammad Gelbana 
> wrote:
>
>> Your replies are very much appreciated. I'll see what I can do.
>>
>> @Julian, I believe '=' acts as a boolean operator here because the query
>> returns boolean results for that part of the selection.
>>
>> Thanks,
>> Gelbana
>>
>>
>> On Mon, Feb 11, 2019 at 8:38 PM Julian Hyde  wrote:
>>
>>> There are a few Postgres-isms in that SQL:
>>> The “::” (as a shorthand for cast) in 'typinput='array_in'::regproc
>>> The ‘=‘ (as a shorthand for alias) in 'typinput='array_in'::regproc’
>>> Use of a table function without the ’TABLE’ keyword, in 'from
>>> generate_series(1, array_upper(current_schemas(false), 1))’
>>>
>>> Babel does not handle any of those right now, but it could.
>>> Contributions welcome.
>>>
>>> Julian
>>>
>>>
>>> > On Feb 11, 2019, at 6:14 AM, Stamatis Zampetakis 
>>> wrote:
>>> >
>>> > Hi Gelbana,
>>> >
>>> > In order to use the Babel parser you need to also set an appropriate
>>> > factory to your parser configuration since
>>> > setting only the conformance is not enough.
>>> >
>>> > Try adding the following:
>>> > ...
>>> > configBuilder().setParserFactory(SqlBabelParserImpl.FACTORY);
>>> >
>>> > Having said that I am not sure if Babel can handle the syntax you
>>> provided.
>>> >
>>> > Best,
>>> > Stamatis
>>> >
>>> >
>>> >
>>> > On Sat, Feb 9, 2019 at 10:46 PM Muhammad Gelbana <
>>> > m.gelb...@gmail.com> wrote:
>>> >
>>> >> I'm trying to parse a PostgreSQL metadata query but a parsing
>>> exception is
>>> >> thrown.
>>> >>
>>> >> Here is my code:
>>> >>
>>> >> Config parserConfig =
>>> >> configBuilder().setConformance(SqlConformanceEnum.BABEL).build();
>>> >> FrameworkConfig frameworkConfig =
>>> >> Frameworks.newConfigBuilder().parserConfig(parserConfig).build();
>>> >> Planner planner = Frameworks.getPlanner(frameworkConfig);
>>> >> planner.parse("SELECT typinput='array_in'::regproc, typtype FROM
>>> >> pg_catalog.pg_type LEFT JOIN (select ns.oid as nspoid, ns.nspname,
>>> r.r from
>>> >> pg_namespace as ns join ( select s.r, (current_schemas(false))[s.r] as
>>> >> nspname from generate_series(1, array_upper(current_schemas(false),
>>> 1)) as
>>> >> s(r) ) as r using ( nspname )) as sp ON sp.nspoid = typnamespace WHERE
>>> >> typname = $1 ORDER BY sp.r, pg_type.oid DESC LIMIT 1");
>>> >>
>>> >> *The exception title is* "Exception in thread "main"
>>> >> org.apache.calcite.sql.parser.SqlParseException: Encountered ":" at
>>> line 1,
>>> >> column 27."
>>> >>
>>> >> Am I doing something wrong or is the parser still not ready for such
>>> syntax
>>> >> ?
>>> >>
>>> >> Thanks,
>>> >> Gelbana
>>> >>
>>>
>>>


Re: Failed to parse a PostgreSQL query using the Babel conformance

2019-02-14 Thread Muhammad Gelbana
@Stamatis, I very much appreciate you taking the time to comment on the issues I
opened based on this thread. I'm currently going through Babel's Parser.jj
file and the JavaCC documentation, trying to understand what I need to do and
where.

Considering you're probably more acquainted with it than I am, I'll gladly work
with you on a branch to fix this, based on your instructions of course.
Otherwise, I'll continue working on my own.

Thanks,
Gelbana


On Mon, Feb 11, 2019 at 11:31 PM Muhammad Gelbana 
wrote:

> Your replies are very much appreciated. I'll see what I can do.
>
> @Julian, I believe '=' acts as a boolean operator here because the query
> returns boolean results for that part of the selection.
>
> Thanks,
> Gelbana
>
>
> On Mon, Feb 11, 2019 at 8:38 PM Julian Hyde  wrote:
>
>> There are a few Postgres-isms in that SQL:
>> The “::” (as a shorthand for cast) in 'typinput='array_in'::regproc
>> The ‘=‘ (as a shorthand for alias) in 'typinput='array_in'::regproc’
>> Use of a table function without the ’TABLE’ keyword, in 'from
>> generate_series(1, array_upper(current_schemas(false), 1))’
>>
>> Babel does not handle any of those right now, but it could. Contributions
>> welcome.
>>
>> Julian
>>
>>
>> > On Feb 11, 2019, at 6:14 AM, Stamatis Zampetakis 
>> wrote:
>> >
>> > Hi Gelbana,
>> >
>> > In order to use the Babel parser you need to also set an appropriate
>> > factory to your parser configuration since
>> > setting only the conformance is not enough.
>> >
>> > Try adding the following:
>> > ...
>> > configBuilder().setParserFactory(SqlBabelParserImpl.FACTORY);
>> >
>> > Having said that I am not sure if Babel can handle the syntax you
>> provided.
>> >
>> > Best,
>> > Stamatis
>> >
>> >
>> >
>> > On Sat, Feb 9, 2019 at 10:46 PM Muhammad Gelbana <
>> > m.gelb...@gmail.com> wrote:
>> >
>> >> I'm trying to parse a PostgreSQL metadata query but a parsing
>> exception is
>> >> thrown.
>> >>
>> >> Here is my code:
>> >>
>> >> Config parserConfig =
>> >> configBuilder().setConformance(SqlConformanceEnum.BABEL).build();
>> >> FrameworkConfig frameworkConfig =
>> >> Frameworks.newConfigBuilder().parserConfig(parserConfig).build();
>> >> Planner planner = Frameworks.getPlanner(frameworkConfig);
>> >> planner.parse("SELECT typinput='array_in'::regproc, typtype FROM
>> >> pg_catalog.pg_type LEFT JOIN (select ns.oid as nspoid, ns.nspname, r.r
>> from
>> >> pg_namespace as ns join ( select s.r, (current_schemas(false))[s.r] as
>> >> nspname from generate_series(1, array_upper(current_schemas(false),
>> 1)) as
>> >> s(r) ) as r using ( nspname )) as sp ON sp.nspoid = typnamespace WHERE
>> >> typname = $1 ORDER BY sp.r, pg_type.oid DESC LIMIT 1");
>> >>
>> >> *The exception title is* "Exception in thread "main"
>> >> org.apache.calcite.sql.parser.SqlParseException: Encountered ":" at
>> line 1,
>> >> column 27."
>> >>
>> >> Am I doing something wrong or is the parser still not ready for such
>> syntax
>> >> ?
>> >>
>> >> Thanks,
>> >> Gelbana
>> >>
>>
>>


[jira] [Created] (CALCITE-2844) Babel parser doesn't parse PostgreSQL table function

2019-02-12 Thread Muhammad Gelbana (JIRA)
Muhammad Gelbana created CALCITE-2844:
-

 Summary: Babel parser doesn't parse PostgreSQL table function
 Key: CALCITE-2844
 URL: https://issues.apache.org/jira/browse/CALCITE-2844
 Project: Calcite
  Issue Type: Bug
  Components: babel
Affects Versions: 1.18.0
Reporter: Muhammad Gelbana
Assignee: Julian Hyde
 Fix For: next


*Query*
{code:sql}
SELECT typinput, typtype FROM pg_catalog.pg_type LEFT JOIN (select ns.oid as 
nspoid, ns.nspname, r.r from pg_namespace as ns join ( select s.r, 
(current_schemas(false))[s.r] as nspname from generate_series(1, 
array_upper(current_schemas(false), 1)) as s(r) ) as r using ( nspname )) as sp 
ON sp.nspoid = typnamespace WHERE typname = $1 ORDER BY sp.r, pg_type.oid DESC 
LIMIT 1{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (CALCITE-2843) Babel parser doesn't parse PostgreSQL casting operator

2019-02-12 Thread Muhammad Gelbana (JIRA)
Muhammad Gelbana created CALCITE-2843:
-

 Summary: Babel parser doesn't parse PostgreSQL casting operator
 Key: CALCITE-2843
 URL: https://issues.apache.org/jira/browse/CALCITE-2843
 Project: Calcite
  Issue Type: Bug
  Components: babel
Affects Versions: 1.18.0
Reporter: Muhammad Gelbana
Assignee: Julian Hyde
 Fix For: next


*Query*
{code:sql}
SELECT typinput='array_in'::regproc, typtype FROM pg_catalog.pg_type LEFT JOIN 
(select ns.oid as nspoid, ns.nspname, r.r from pg_namespace as ns join ( select 
s.r, (current_schemas(false))[s.r] as nspname from generate_series(1, 
array_upper(current_schemas(false), 1)) as s(r) ) as r using ( nspname )) as sp 
ON sp.nspoid = typnamespace WHERE typname = $1 ORDER BY sp.r, pg_type.oid DESC 
LIMIT 1{code}





Re: Failed to parse a PostgreSQL query using the Babel conformance

2019-02-11 Thread Muhammad Gelbana
Your replies are very much appreciated. I'll see what I can do.

@Julian, I believe '=' acts as a boolean operator here because the query
returns boolean results for that part of the selection.

Thanks,
Gelbana


On Mon, Feb 11, 2019 at 8:38 PM Julian Hyde  wrote:

> There are a few Postgres-isms in that SQL:
> The “::” (as a shorthand for cast) in 'typinput='array_in'::regproc
> The ‘=‘ (as a shorthand for alias) in 'typinput='array_in'::regproc’
> Use of a table function without the ’TABLE’ keyword, in 'from
> generate_series(1, array_upper(current_schemas(false), 1))’
>
> Babel does not handle any of those right now, but it could. Contributions
> welcome.
>
> Julian
>
>
> > On Feb 11, 2019, at 6:14 AM, Stamatis Zampetakis 
> wrote:
> >
> > Hi Gelbana,
> >
> > In order to use the Babel parser you need to also set an appropriate
> > factory to your parser configuration since
> > setting only the conformance is not enough.
> >
> > Try adding the following:
> > ...
> > configBuilder().setParserFactory(SqlBabelParserImpl.FACTORY);
> >
> > Having said that I am not sure if Babel can handle the syntax you
> provided.
> >
> > Best,
> > Stamatis
> >
> >
> >
> > On Sat, Feb 9, 2019 at 10:46 PM Muhammad Gelbana <
> > m.gelb...@gmail.com> wrote:
> >
> >> I'm trying to parse a PostgreSQL metadata query but a parsing exception
> is
> >> thrown.
> >>
> >> Here is my code:
> >>
> >> Config parserConfig =
> >> configBuilder().setConformance(SqlConformanceEnum.BABEL).build();
> >> FrameworkConfig frameworkConfig =
> >> Frameworks.newConfigBuilder().parserConfig(parserConfig).build();
> >> Planner planner = Frameworks.getPlanner(frameworkConfig);
> >> planner.parse("SELECT typinput='array_in'::regproc, typtype FROM
> >> pg_catalog.pg_type LEFT JOIN (select ns.oid as nspoid, ns.nspname, r.r
> from
> >> pg_namespace as ns join ( select s.r, (current_schemas(false))[s.r] as
> >> nspname from generate_series(1, array_upper(current_schemas(false), 1))
> as
> >> s(r) ) as r using ( nspname )) as sp ON sp.nspoid = typnamespace WHERE
> >> typname = $1 ORDER BY sp.r, pg_type.oid DESC LIMIT 1");
> >>
> >> *The exception title is* "Exception in thread "main"
> >> org.apache.calcite.sql.parser.SqlParseException: Encountered ":" at
> line 1,
> >> column 27."
> >>
> >> Am I doing something wrong or is the parser still not ready for such
> syntax
> >> ?
> >>
> >> Thanks,
> >> Gelbana
> >>
>
>


Failed to parse a PostgreSQL query using the Babel conformance

2019-02-09 Thread Muhammad Gelbana
I'm trying to parse a PostgreSQL metadata query but a parsing exception is
thrown.

Here is my code:

Config parserConfig =
configBuilder().setConformance(SqlConformanceEnum.BABEL).build();
FrameworkConfig frameworkConfig =
Frameworks.newConfigBuilder().parserConfig(parserConfig).build();
Planner planner = Frameworks.getPlanner(frameworkConfig);
planner.parse("SELECT typinput='array_in'::regproc, typtype FROM
pg_catalog.pg_type LEFT JOIN (select ns.oid as nspoid, ns.nspname, r.r from
pg_namespace as ns join ( select s.r, (current_schemas(false))[s.r] as
nspname from generate_series(1, array_upper(current_schemas(false), 1)) as
s(r) ) as r using ( nspname )) as sp ON sp.nspoid = typnamespace WHERE
typname = $1 ORDER BY sp.r, pg_type.oid DESC LIMIT 1");

*The exception title is* "Exception in thread "main"
org.apache.calcite.sql.parser.SqlParseException: Encountered ":" at line 1,
column 27."

Am I doing something wrong or is the parser still not ready for such syntax
?

Thanks,
Gelbana


Re: Apply a rule to the same node only once

2018-12-07 Thread Muhammad Gelbana
I'm not sure, but I believe you have to leave the matched node in a
shape that doesn't make the rule match again. For example, collect the
node's information into a new node, convert the matched node to that new
node, and set a boolean flag on the newly created node. The rule's match
logic can then look for that flag and, if it is set, skip the rule.

That said, since the conversion produces a node of a new type, the rule
matcher can simply check whether the node is of that type; if it is, the
rule was applied once before, so you don't need to run it again.

Hopefully that helps.
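A minimal sketch of the convert-to-a-marked-type idea (illustrative only; the Node types here are made up and this is not HepPlanner's API):

```java
import java.util.List;
import java.util.stream.Collectors;

public class Main {
    // Toy node hierarchy: once a node is rewritten it becomes a
    // ConvertedNode, and the rule's match test rejects that type,
    // so the rule fires at most once per node.
    interface Node { String name(); }
    record PlainNode(String name) implements Node {}
    record ConvertedNode(String name) implements Node {}

    static boolean matches(Node n) {
        return !(n instanceof ConvertedNode); // already-converted nodes don't match
    }

    static Node apply(Node n) {
        return matches(n) ? new ConvertedNode(n.name()) : n;
    }

    public static void main(String[] args) {
        List<Node> plan = List.of(new PlainNode("scan"), new PlainNode("filter"));
        // First pass converts every node; a second pass is a no-op.
        List<Node> once = plan.stream().map(Main::apply).collect(Collectors.toList());
        List<Node> twice = once.stream().map(Main::apply).collect(Collectors.toList());
        System.out.println(once.equals(twice)); // true: each node converted only once
    }
}
```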



Thanks,
Gelbana


On Tue, Nov 27, 2018 at 5:06 AM Hequn Cheng  wrote:

> Hi,
>
> Does Calcite provide a way to apply a rule for the same node only once with
> HepPlanner?
> I find that we can use `addMatchLimit()` to limit times for the whole plan,
> but it seems there is no way to limit times for a RelNode, similar to a
> ConvertRule.
> Any help would be appreciated!
>
> Best,
> Hequn
>

