BindableTableScan.computeSelfCost does not take into account whether there are
filters or how many projects are used when computing the cost; it just
multiplies the base cost by 0.01.
Just changing the cost to account for the number of projects relative to the
identity (all projects) makes the planner choose the plan that pushes the
projects and filter into the BindableTableScan, because the overall cost
becomes lower than that of the plan where AggregateProjectMergeRule removes
the Project.
Any suggestions or examples of a good way to compute the cost, taking the
projects and filters into account?
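For example, would something along these lines be reasonable? The weighting
constants here are just guesses on my part, not Calcite constants; "projects"
and "filters" are the fields BindableTableScan already carries:

  @Override public RelOptCost computeSelfCost(RelOptPlanner planner,
      RelMetadataQuery mq) {
    double rowCount = table.getRowCount();
    // Scanning fewer columns than the full row should cost less
    // (assuming "projects" is null or identity when nothing was pushed).
    double projectFactor = projects == null
        ? 1d
        : (double) projects.size() / table.getRowType().getFieldCount();
    // Each pushed filter shrinks the expected output; 0.5 per filter is
    // an arbitrary guess.
    double filterFactor =
        filters == null ? 1d : Math.pow(0.5, filters.size());
    return planner.getCostFactory()
        .makeCost(rowCount * projectFactor * filterFactor, rowCount, 0);
  }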
On Thursday, October 26, 2017, 16:38:14 BRST, Luis Fernando Kauer
<[email protected]> wrote:
I'm sorry, but I have no idea what you are talking about.
The Cassandra adapter has code to translate the plan so that the query runs on
the Cassandra server.
If you are only interested in querying CSV files, I don't see how copying that
code without understanding it will help you.
First of all, you need to decide whether you will use
ProjectableFilterableTable or TranslatableTable.
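To make the difference concrete, here is a minimal sketch of the
ProjectableFilterableTable side; the row type matches the "beatles" example
below, and the CSV reading itself is a hypothetical helper:

  import java.util.List;

  import org.apache.calcite.DataContext;
  import org.apache.calcite.linq4j.Enumerable;
  import org.apache.calcite.rel.type.RelDataType;
  import org.apache.calcite.rel.type.RelDataTypeFactory;
  import org.apache.calcite.rex.RexNode;
  import org.apache.calcite.schema.ProjectableFilterableTable;
  import org.apache.calcite.schema.impl.AbstractTable;
  import org.apache.calcite.sql.type.SqlTypeName;

  public class CsvProjectableFilterableTable extends AbstractTable
      implements ProjectableFilterableTable {

    @Override public RelDataType getRowType(RelDataTypeFactory typeFactory) {
      return typeFactory.builder()
          .add("i", SqlTypeName.INTEGER)
          .add("j", SqlTypeName.VARCHAR)
          .add("k", SqlTypeName.VARCHAR)
          .build();
    }

    // Calcite hands you the filters it would like pushed down and the
    // projected column ordinals (null means all columns). Removing an
    // element from "filters" tells Calcite this table handles it; whatever
    // stays in the list is re-evaluated by Calcite on top of the scan.
    @Override public Enumerable<Object[]> scan(DataContext root,
        List<RexNode> filters, int[] projects) {
      return readCsvRows(filters, projects); // hypothetical helper
    }

    private Enumerable<Object[]> readCsvRows(List<RexNode> filters,
        int[] projects) {
      throw new UnsupportedOperationException("left out of this sketch");
    }
  }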
You must try to understand how the rules work and how to check which rules are
being fired and which ones are being chosen.
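One way to watch the rules fire is to raise the planner's log level; the
planner logs under org.apache.calcite.plan.RelOptPlanner (log4j syntax shown,
adjust to your logging setup):

  # log4j.properties (sketch): trace rule firings and chosen plans
  log4j.logger.org.apache.calcite.plan.RelOptPlanner=TRACE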
Did you follow the tutorial for creating the CSV adapter? It creates a rule to
push the used projects into the table scan. That is a great start.
It's a good idea to take a look at the built-in rules available in Calcite too.
You should take a look at FilterTableScanRule and ProjectTableScanRule, which
are the rules that push the projects and filters used with a
ProjectableFilterableTable into a BindableTableScan, and at the other rules in
Bindables.java.
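If you want to experiment, you can register them with the planner yourself (a
sketch; how you get hold of the RelOptPlanner depends on how you drive
Calcite):

  planner.addRule(ProjectTableScanRule.INSTANCE);
  planner.addRule(FilterTableScanRule.INSTANCE);
  for (RelOptRule rule : Bindables.RULES) {
    planner.addRule(rule);
  }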
The rules work fine when there is no aggregate function, pushing both the
filter and the projects into the BindableTableScan. The problem seems to be
AggregateProjectMergeRule, which removes the Project from the plan.
If you remove the filter from your test cases, you'll see that the projects
are pushed to the BindableTableScan.
I was able to reproduce your problem using
ScannableTableTest.testProjectableFilterable2WithProject, changing the query
to "select \"k\", count(*) from (select \"k\",\"j\" from \"s\".\"beatles\"
where \"i\" = 4) x group by \"k\"".
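Unescaped, that query is:

  select "k", count(*)
  from (select "k", "j" from "s"."beatles" where "i" = 4) x
  group by "k"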
The plan:

  LogicalAggregate(group=[{0}], EXPR$1=[COUNT()])
    LogicalProject(k=[$1])
      LogicalFilter(condition=[=($0, 4)])
        LogicalProject(i=[$0], k=[$2])
          LogicalTableScan(table=[[s, beatles]])
Physical plan:

  EnumerableAggregate(group=[{2}], EXPR$1=[COUNT()]): rowcount = 10.0, cumulative cost = {61.25 rows, 50.0 cpu, 0.0 io}, id = 112
    EnumerableInterpreter: rowcount = 100.0, cumulative cost = {50.0 rows, 50.0 cpu, 0.0 io}, id = 110
      BindableTableScan(table=[[s, beatles]], filters=[[=($0, 4)]]): rowcount = 100.0, cumulative cost = {1.0 rows, 1.01 cpu, 0.0 io}, id = 62
If I disable AggregateProjectMergeRule, the physical plan is:

  EnumerableAggregate(group=[{0}], EXPR$1=[COUNT()]): rowcount = 10.0, cumulative cost = {61.25 rows, 50.0 cpu, 0.0 io}, id = 102
    EnumerableInterpreter: rowcount = 100.0, cumulative cost = {50.0 rows, 50.0 cpu, 0.0 io}, id = 100
      BindableTableScan(table=[[s, beatles]], filters=[[=($0, 4)]], projects=[[2]]): rowcount = 100.0, cumulative cost = {1.0 rows, 1.01 cpu, 0.0 io}, id = 78
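For reference, I disabled it like this (a sketch; how you get hold of the
RelOptPlanner instance depends on how the test is set up):

  planner.removeRule(AggregateProjectMergeRule.INSTANCE);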
Regards,
Luis Fernando
On Thursday, October 26, 2017, 13:19:46 BRST, Alexey Roytman
<[email protected]> wrote:
Thanks for the hints.
I've tried to use [i.e. copy-pasted a lot of] Cassandra*.java for my
CSV-files example. It's really too wordy! So much code I need to
understand later!..
But what bothers me most for now is that I just cannot pass a
List<RexNode> to [my modification of] CassandraTable.query(); I need to
convert it to string form as a List<String> using
CassandraFilter.Translator, and then, inside [my modification of]
CassandraTable.query(), parse that List<String> back...
Is there a way to eliminate this back-and-forth serialization and
deserialization?
- Alexey.
(P.S. Sorry for not keeping the email thread for now...)
Julian Hyde wrote:
By "write a rule" I mean write a class that extends RelOptRule. An
example is CassandraRules.CassandraFilterRule.
ProjectableFilterableTable was "only" designed for the case that
occurs 80% of the time but requires 20% of the functionality. Rules
run in a richer environment, so they have more power and flexibility.
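A bare-bones rule in that spirit looks something like this; the class name
and the convert helper are illustrative, not Calcite source:

  import org.apache.calcite.plan.RelOptRule;
  import org.apache.calcite.plan.RelOptRuleCall;
  import org.apache.calcite.rel.RelNode;
  import org.apache.calcite.rel.core.TableScan;
  import org.apache.calcite.rel.logical.LogicalFilter;
  import org.apache.calcite.rex.RexNode;

  public class MyCsvFilterRule extends RelOptRule {
    public static final MyCsvFilterRule INSTANCE = new MyCsvFilterRule();

    private MyCsvFilterRule() {
      // Fire on a LogicalFilter sitting directly on a TableScan.
      super(operand(LogicalFilter.class, operand(TableScan.class, none())),
          "MyCsvFilterRule");
    }

    @Override public void onMatch(RelOptRuleCall call) {
      final LogicalFilter filter = call.rel(0);
      final TableScan scan = call.rel(1);
      // Replace the Filter+Scan pair with a scan that evaluates the
      // condition itself.
      call.transformTo(convertToFilteredScan(scan, filter.getCondition()));
    }

    // Hypothetical: build your adapter's scan node carrying the condition.
    private RelNode convertToFilteredScan(TableScan scan, RexNode condition) {
      throw new UnsupportedOperationException("left out of this sketch");
    }
  }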