Hi,
I am playing with the planner but I can't get it to work for a very simple query.
The table is MYTABLE(id integer, name varchar); its definition is given in the code snippet below.
The query is "SELECT * FROM MYTABLE".

The error is:
org.apache.calcite.plan.RelOptPlanner$CannotPlanException: Node [rel#7:Subset#0.NONE.[0, 1].any] could not be implemented; planner state:

Root: rel#7:Subset#0.NONE.[0, 1].any
Original rel:
LogicalProject(subset=[rel#6:Subset#1.NONE.[0, 1].any], id=[$0], name=[$1]): rowcount = 15.0, cumulative cost = {15.0 rows, 30.0 cpu, 0.0 io}, id = 5
  EnumerableTableScan(subset=[rel#4:Subset#0.ENUMERABLE.[0, 1].any], table=[[default, MYTABLE]]): rowcount = 15.0, cumulative cost = {15.0 rows, 16.0 cpu, 0.0 io}, id = 2

Sets:
Set#0, type: RecordType(INTEGER id, VARCHAR name)
    rel#4:Subset#0.ENUMERABLE.[0, 1].any, best=rel#2, importance=0.9
        rel#2:EnumerableTableScan.ENUMERABLE.[[0, 1]].any(table=[default, MYTABLE]), rowcount=15.0, cumulative cost={15.0 rows, 16.0 cpu, 0.0 io}
        rel#9:EnumerableProject.ENUMERABLE.[[0, 1]].any(input=rel#4:Subset#0.ENUMERABLE.[0, 1].any,id=$0,name=$1), rowcount=15.0, cumulative cost={30.0 rows, 46.0 cpu, 0.0 io}
    rel#7:Subset#0.NONE.[0, 1].any, best=null, importance=1.0
        rel#5:LogicalProject.NONE.[[0, 1]].any(input=rel#4:Subset#0.ENUMERABLE.[0, 1].any,id=$0,name=$1), rowcount=15.0, cumulative cost={inf}
        rel#8:AbstractConverter.NONE.[0, 1].any(input=rel#4:Subset#0.ENUMERABLE.[0, 1].any,convention=NONE,sort=[0, 1],dist=any), rowcount=15.0, cumulative cost={inf}

Does anybody have a hint for me?
I am using the current master of Calcite (1.15-SNAPSHOT).

Thank you

Enrico


My code is:
    @Test
    public void test() throws Exception {
        Table table = new TableImpl();
        CalciteSchema schema = CalciteSchema.createRootSchema(true, true, "default");
        schema.add("MYTABLE", table);
        SchemaPlus rootSchema = schema.plus();
        SqlRexConvertletTable convertletTable = StandardConvertletTable.INSTANCE;
        SqlToRelConverter.Config config = SqlToRelConverter.Config.DEFAULT;
        FrameworkConfig frameworkConfig = new FrameworkConfigImpl(config, rootSchema, convertletTable);
        Planner imp = Frameworks.getPlanner(frameworkConfig);
        SqlNode sqlNode = imp.parse("SELECT * FROM MYTABLE");
        sqlNode = imp.validate(sqlNode);
        RelRoot relRoot = imp.rel(sqlNode);
        RelNode project = relRoot.project();
        RelOptPlanner planner = project.getCluster().getPlanner();
        planner.setRoot(project);
        RelNode findBestExp = planner.findBestExp();
        System.out.println("best:" + findBestExp);
    }
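
For reference, Calcite's standard program converts the root node to the desired convention before running the VolcanoPlanner; when calling planner.setRoot/findBestExp by hand, as in the test above, that step has to be done explicitly. A minimal, untested sketch of that pattern (assuming the Enumerable convention is the target, and reusing the `project` and `planner` variables from the test) would replace the last lines with:

```java
// Sketch (untested): ask the planner to convert the root to the
// Enumerable convention before searching for the best plan; otherwise
// the root subset keeps the NONE convention and cannot be implemented.
RelTraitSet desired = project.getTraitSet()
        .replace(EnumerableConvention.INSTANCE)
        .simplify();
planner.setRoot(planner.changeTraits(project, desired));
RelNode best = planner.findBestExp();
System.out.println("best:" + best);
```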

    private class FrameworkConfigImpl implements FrameworkConfig {

        private final SqlToRelConverter.Config config;
        private final SchemaPlus rootSchema;
        private final SqlRexConvertletTable convertletTable;

        public FrameworkConfigImpl(SqlToRelConverter.Config config, SchemaPlus rootSchema, SqlRexConvertletTable convertletTable) {
            this.config = config;
            this.rootSchema = rootSchema;
            this.convertletTable = convertletTable;
        }

        @Override
        public SqlParser.Config getParserConfig() {
            return SqlParser.Config.DEFAULT;
        }

        @Override
        public SqlToRelConverter.Config getSqlToRelConverterConfig() {
            return config;
        }

        @Override
        public SchemaPlus getDefaultSchema() {
            return rootSchema;
        }

        @Override
        public RexExecutor getExecutor() {
            return new RexExecutorImpl(new DataContextImpl());
        }

        @Override
        public ImmutableList<Program> getPrograms() {
            return ImmutableList.of(Programs.standard());
        }

        @Override
        public SqlOperatorTable getOperatorTable() {
            // Use the singleton accessor; it initializes the operator table
            return SqlStdOperatorTable.instance();
        }

        @Override
        public RelOptCostFactory getCostFactory() {
            return null;
        }

        @Override
        public ImmutableList<RelTraitDef> getTraitDefs() {
            return ImmutableList.of(ConventionTraitDef.INSTANCE,
                    RelCollationTraitDef.INSTANCE,
                    RelDistributionTraitDef.INSTANCE
            );
        }

        @Override
        public SqlRexConvertletTable getConvertletTable() {
            return convertletTable;
        }

        @Override
        public Context getContext() {
            return new ContextImpl();
        }

        @Override
        public RelDataTypeSystem getTypeSystem() {
            return RelDataTypeSystem.DEFAULT;
        }

        class DataContextImpl implements DataContext {

            public DataContextImpl() {
            }

            @Override
            public SchemaPlus getRootSchema() {
                return rootSchema;
            }

            @Override
            public JavaTypeFactory getTypeFactory() {
                throw new UnsupportedOperationException("Not supported yet.");
            }

            @Override
            public QueryProvider getQueryProvider() {
                throw new UnsupportedOperationException("Not supported yet.");
            }

            @Override
            public Object get(String name) {
                throw new UnsupportedOperationException("Not supported yet.");
            }

        }

        private class ContextImpl implements Context {

            public ContextImpl() {
            }

            @Override
            public <C> C unwrap(Class<C> aClass) {
                return null;
            }
        }
    }

    private static class TableImpl implements Table {

        public TableImpl() {
        }

        @Override
        public RelDataType getRowType(RelDataTypeFactory typeFactory) {
            return typeFactory
                    .builder()
                    .add("id", typeFactory.createSqlType(SqlTypeName.INTEGER))
                    .add("name", typeFactory.createSqlType(SqlTypeName.VARCHAR))
                    .build();
        }

        @Override
        public Statistic getStatistic() {
            return new StatisticImpl();
        }

        @Override
        public Schema.TableType getJdbcTableType() {
            throw new UnsupportedOperationException("Not supported yet.");
        }

        @Override
        public boolean isRolledUp(String column) {
            // No column of this plain table is rolled up
            return false;
        }

        @Override
        public boolean rolledUpColumnValidInsideAgg(String column, SqlCall call, SqlNode parent, CalciteConnectionConfig config) {
            return false;
        }

        class StatisticImpl implements Statistic {

            public StatisticImpl() {
            }

            @Override
            public Double getRowCount() {
                return 15d;
            }

            @Override
            public boolean isKey(ImmutableBitSet columns) {
                return false;
            }

            @Override
            public List<RelReferentialConstraint> getReferentialConstraints() {
                return Collections.emptyList();
            }

            @Override
            public List<RelCollation> getCollations() {
                // RelCollations.of is the idiomatic way to build a collation
                RelCollation c = RelCollations.of(
                        new RelFieldCollation(0, RelFieldCollation.Direction.ASCENDING),
                        new RelFieldCollation(1, RelFieldCollation.Direction.ASCENDING));
                return Arrays.asList(c);
            }

            @Override
            public RelDistribution getDistribution() {
                return RelDistributions.ANY;
            }
        }
    }
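
As an aside, hand-writing a FrameworkConfig may not be necessary; an untested sketch using Frameworks.newConfigBuilder (reusing the `rootSchema` variable from the test above) could look like:

```java
// Sketch (untested): build the config with Calcite's builder instead of
// implementing FrameworkConfig by hand; unspecified settings get defaults.
FrameworkConfig frameworkConfig = Frameworks.newConfigBuilder()
        .defaultSchema(rootSchema)
        .parserConfig(SqlParser.Config.DEFAULT)
        .programs(Programs.standard())
        .build();
Planner planner = Frameworks.getPlanner(frameworkConfig);
```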




2017-11-06 19:48 GMT+01:00 Julian Hyde <jh...@apache.org>:

> Yes that is definitely possible. I am too busy to write a code snippet but
> you should take a look at PlannerTest.
>
> > On Nov 6, 2017, at 3:05 AM, Stéphane Campinas <
> stephane.campi...@gmail.com> wrote:
> >
> > Hi,
> >
> > I am trying to use the Volcano planner in order to optimise queries based
> > on statistics but I am having some issues understanding how to achieve
> > this, even after looking at the Github repository for tests.
> > A first goal I would like to achieve would be to choose a join
> > implementation based on its cost.
> >
> > For example, a query tree can have several joins, and depending on the
> > position of the join in the tree, a certain implementation would be more
> > efficient than another.
> > Would that be possible? If so, could you share a code snippet?
> >
> > Thanks
> >
> > --
> > Campinas Stéphane
>
>
