Sorry, I'm just thinking aloud... I want to use ProjectableFilterableTable so much :-))

1. First of all, we could extend SqlTypeName with an additional field holding the presumable expense of such a type. But that's a bad way.

2. ProjectableFilterableTable exposes a number of methods -- getJdbcTableType() (returning VIEW), getStatistic(), getRowType() -- but none of them deal with filters and/or projections.

3. We also have the DataContext, which we could extend. But this looks like a spaghetti hack...

4. We can extend Schema or SchemaPlus to ask for the relative price of a set of columns.

5. The same way, we can extend Table/AbstractTable/ProjectableFilterableTable with a function to ask for the relative price of a set of columns. (If we do 4 or 5, we'd better pass the List<RexNode> too, for completeness.)

6. I still don't understand what RelOptCluster or RelTraitSet are, but I guess they're not a good way...

7. We could extend only BindableTableScan, but then we'd need to check for "instanceof BindableTableScan".

8. Not sure about pushing this up to TableScan, as it works only with RelMetadataQuery...

9. We could change ProjectableFilterableTable.scan() to accept a RelOptCost: if it's null, a regular scan() runs; if it's not, the RelOptCost is filled with the price of the filters and projects... But this is an API change...
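To make option 5 (or 9) concrete: with a Java 8 default method, an interface can gain a cost-query method without breaking existing implementations, so it would not be a hard API break. The names below (CostAwareTable, relativeCost) are made up for illustration; this is not Calcite's API:

```java
import java.util.List;

class CostAwareTableDemo {
    /** Hypothetical table interface extended with a column-cost query. */
    interface CostAwareTable {
        /** Relative price of reading the given columns; default: one unit per column. */
        default double relativeCost(List<Integer> projectedColumns) {
            return projectedColumns.size();
        }
    }

    /** An existing table implementation needs no changes to keep compiling. */
    static class PlainTable implements CostAwareTable { }

    /** A table that knows some columns are cheaper to read than others. */
    static class WeightedTable implements CostAwareTable {
        private final double[] weights = {0.1, 1.0, 5.0};
        @Override public double relativeCost(List<Integer> cols) {
            return cols.stream().mapToDouble(i -> weights[i]).sum();
        }
    }

    public static void main(String[] args) {
        System.out.println(new PlainTable().relativeCost(List.of(0, 2)));    // 2.0
        System.out.println(new WeightedTable().relativeCost(List.of(0, 2))); // ~5.1
    }
}
```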

- Alexey.

On 10/26/2017 10:30 PM, Luis Fernando Kauer wrote:
BindableTableScan.computeSelfCost does not take into account whether there
are filters or how many projects are used; it just multiplies the cost by 0.01.
Just changing the cost to take into account the number of projects relative to the
identity (all projects) makes the planner choose the plan that pushes projects
and filter to BindableTableScan, because the overall cost becomes lower than
removing the Project via AggregateProjectMergeRule.
Any suggestion or example on a good way to compute the cost taking into account 
the projects and filters?
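The arithmetic of the change described above can be sketched in plain numbers (not Calcite code; the helper names are made up, and the assumption is that the scan cost is scaled by the fraction of columns actually projected):

```java
// Today BindableTableScan.computeSelfCost multiplies the scan cost by 0.01,
// regardless of filters or projects, so a narrow scan and a full scan tie.
// Scaling additionally by (projects used / total columns) makes the narrower
// scan visibly cheaper. Illustrative helpers only, not Calcite's API.
class ScanCostSketch {
    static double currentCost(double rowCount) {
        return rowCount * 0.01;  // ignores filters and projects entirely
    }

    static double projectAwareCost(double rowCount, int projectsUsed, int totalColumns) {
        return rowCount * 0.01 * ((double) projectsUsed / totalColumns);
    }

    public static void main(String[] args) {
        double rows = 100.0;
        System.out.println(currentCost(rows));            // 1.0 with or without pushed projects
        System.out.println(projectAwareCost(rows, 1, 3)); // ~0.33: scanning 1 of 3 columns wins
        System.out.println(projectAwareCost(rows, 3, 3)); // 1.0: identity projection, no gain
    }
}
```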
     On Thursday, October 26, 2017 at 16:38:14 BRST, Luis Fernando Kauer
<[email protected]> wrote:
I'm sorry, but I have no idea what you are talking about.
Cassandra Adapter has code to translate the plan to run the query in Cassandra 
Server.
If you are only interested in querying CSV files I don't see how copying that 
code without understanding it will help you.
First of all, you need to decide whether you will use 
ProjectableFilterableTable or TranslatableTable.
You must try to understand how the rules work and how to check which rules are 
being fired and which ones are being chosen.
Did you follow the tutorial for creating CSV Adapter? It creates a rule to push 
the used projects to the table scan. That is a great start.
It's a good idea to take a look at the built-in rules available in Calcite too.
You should take a look at FilterTableScanRule and ProjectTableScanRule, which
are the rules that push the projects and filters used with
ProjectableFilterableTable into a BindableTableScan, and at the other rules in
Bindables.java.
The rules work fine when there is no aggregate function, pushing both filter 
and projects into BindableTableScan.  The problem seems to be with 
AggregateProjectMergeRule which removes the Project from the plan.
If you remove the filter from your test cases you'll see that the projects are 
pushed to the BindableTableScan.
I was able to simulate your problem using
ScannableTableTest.testProjectableFilterable2WithProject, changing the query to "select \"k\",
count(*) from (select \"k\",\"j\" from \"s\".\"beatles\" where \"i\" = 4) x group by \"k\"".
The plan:
LogicalAggregate(group=[{0}], EXPR$1=[COUNT()])
   LogicalProject(k=[$1])
     LogicalFilter(condition=[=($0, 4)])
       LogicalProject(i=[$0], k=[$2])
         LogicalTableScan(table=[[s, beatles]])

PhysicalPlan:
EnumerableAggregate(group=[{2}], EXPR$1=[COUNT()]): rowcount = 10.0, cumulative 
cost = {61.25 rows, 50.0 cpu, 0.0 io}, id = 112
   EnumerableInterpreter: rowcount = 100.0, cumulative cost = {50.0 rows, 50.0 
cpu, 0.0 io}, id = 110
     BindableTableScan(table=[[s, beatles]], filters=[[=($0, 4)]]): rowcount = 
100.0, cumulative cost = {1.0 rows, 1.01 cpu, 0.0 io}, id = 62


If I disable AggregateProjectMergeRule, the physical plan is:
EnumerableAggregate(group=[{0}], EXPR$1=[COUNT()]): rowcount = 10.0, cumulative 
cost = {61.25 rows, 50.0 cpu, 0.0 io}, id = 102
   EnumerableInterpreter: rowcount = 100.0, cumulative cost = {50.0 rows, 50.0 
cpu, 0.0 io}, id = 100
     BindableTableScan(table=[[s, beatles]], filters=[[=($0, 4)]], 
projects=[[2]]): rowcount = 100.0, cumulative cost = {1.0 rows, 1.01 cpu, 0.0 
io}, id = 78


Regards,

Luis Fernando



     On Thursday, October 26, 2017 at 13:19:46 BRST, Alexey Roytman
<[email protected]> wrote:
Thanks for the hints.

I've tried to use [i.e. copy-pasted a lot of] Cassandra*.java for my
CSV-files example. It's really too wordy! So much code I need to
understand later!..

But what bothers me most for now is the fact that I just cannot pass
List<RexNode> to [my modification of] CassandraTable.query(); I need to
convert it to some string form within List<String> using
CassandraFilter.Translator, and then, when passed to [my modification
of] CassandraTable.query(), I need to parse these List<String> back...
Is there a way to eliminate this back-and-forth serialization/deserialization?

- Alexey.

(P.S. Sorry for not keeping the email thread for now...)

Julian Hyde wrote:
By "write a rule" I mean write a class that extends RelOptRule. An
example is CassandraRules.CassandraFilterRule.
ProjectableFilterableTable was "only" designed for the case that
occurs 80% of the time but requires 20% of the functionality. Rules
run in a richer environment so have more power and flexibility.
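For anyone following along, the match-then-transform shape of such a rule can be caricatured in a few lines of plain Java. These are toy stand-in types, not Calcite's; a real rule extends RelOptRule, declares operands, and overrides onMatch() against RelNodes:

```java
import java.util.Optional;

// Toy caricature of the planner-rule pattern: a rule tests whether a node
// shape matches and, if so, produces a rewritten node.
class RuleSketch {
    /** Stand-in for a Filter RelNode; the child is a textual stand-in too. */
    static final class Filter {
        final String condition;
        final String input;
        Filter(String condition, String input) {
            this.condition = condition;
            this.input = input;
        }
    }

    /** Like FilterTableScanRule: match a Filter over a scan, push the condition down. */
    static Optional<String> filterIntoScan(Filter f) {
        if (!f.input.startsWith("Scan")) {
            return Optional.empty();  // rule does not fire
        }
        return Optional.of(f.input + "[filter=" + f.condition + "]");
    }

    public static void main(String[] args) {
        System.out.println(filterIntoScan(new Filter("i = 4", "Scan(beatles)")));
        System.out.println(filterIntoScan(new Filter("i = 4", "Project(k)")));
    }
}
```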
