is a good way to do it from the beginning (e.g. I
lose information I want about the original operators), but I think it
serves my purpose, as this rule is enforced before execution.
2017-05-22 12:13 GMT+03:00 Γιώργος Θεοδωράκης :
> Hello,
> I tried to write something by myself, and your e
owed by a Filter
> followed by a Project returns "sfp", and the rule to push an Aggregate
> into Druid knows that it can succeed because "sfpa" is in the list of
> valid signatures.
>
> Julian
>
>
>
> On Thu, May 11, 2017 at 4:16 AM, Γιώργος Θεοδ
I am trying to "separate" certain subsets of Operators in a query tree and
transform them to a more "general" RelNode implementation, that holds the
information required to rebuild them. I want to implement something more
general than CALC (for more types of operators), that works like this:
Opera
Here (http://www.redbook.io/index.html) are also some interesting readings
about database systems, with references to relevant papers.
Chapter 7 is about query optimization (it mentions Calcite), and the
earlier chapters also discuss System R, Volcano, and some basic
optimization concepts.
201
Hello,
I am trying to implement a cost model in which some parameters of an
operator are computed from the estimations made by its children. For example,
if I have Filter2(Filter1(Scan)), I want to use the CPU estimation I have from
Filter1 to compute the cost parameters of the Filter2 operator.
Code from Filter
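A minimal sketch of that idea, assuming a hypothetical MyFilter operator that
extends Calcite's Filter (not the original code): the cumulative cost of the
child (the inner Filter or the Scan) is read through RelMetadataQuery and
folded into this operator's own CPU estimate.

import org.apache.calcite.plan.RelOptCluster;
import org.apache.calcite.plan.RelOptCost;
import org.apache.calcite.plan.RelOptPlanner;
import org.apache.calcite.plan.RelTraitSet;
import org.apache.calcite.rel.RelNode;
import org.apache.calcite.rel.core.Filter;
import org.apache.calcite.rel.metadata.RelMetadataQuery;
import org.apache.calcite.rex.RexNode;

/** Hypothetical filter operator whose cost builds on its child's estimates. */
public class MyFilter extends Filter {
  public MyFilter(RelOptCluster cluster, RelTraitSet traits,
      RelNode child, RexNode condition) {
    super(cluster, traits, child, condition);
  }

  @Override public Filter copy(RelTraitSet traitSet, RelNode input,
      RexNode condition) {
    return new MyFilter(getCluster(), traitSet, input, condition);
  }

  @Override public RelOptCost computeSelfCost(RelOptPlanner planner,
      RelMetadataQuery mq) {
    // Cost already accumulated below this operator (inner Filter, Scan, ...).
    RelOptCost childCost = mq.getCumulativeCost(getInput());
    double childCpu = childCost == null ? 0d : childCost.getCpu();
    double inputRows = mq.getRowCount(getInput());
    // Assumed model: this filter's CPU grows with the child's CPU estimate
    // plus one comparison per input row.
    double cpu = childCpu + inputRows;
    double rows = mq.getRowCount(this);
    return planner.getCostFactory().makeCost(rows, cpu, 0);
  }
}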
o they can’t
> be implemented.
>
> > On Apr 30, 2017, at 2:52 PM, Γιώργος Θεοδωράκης
> wrote:
> >
> > Hello,
> >
> > I have written a very simple rule for pushing a filter through another filter,
> > which worked perfectly when I applied it with Volcano on the r
Hello,
I have written a very simple rule for pushing a filter through another
filter, which worked perfectly when I applied it with Volcano on the regular
implementation of the operators. Here is the code of my rule:
...
public void onMatch(RelOptRuleCall call) {
...
final LogicalFilter newFilter
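A minimal sketch of such a rule, under the assumption that it simply swaps two
adjacent LogicalFilters (the name FilterSwapRule and the condition handling
are illustrative, not the original code); in Volcano a rule like this usually
needs an extra guard, such as a selectivity check, so it does not keep firing
on its own output:

import org.apache.calcite.plan.RelOptRule;
import org.apache.calcite.plan.RelOptRuleCall;
import org.apache.calcite.rel.logical.LogicalFilter;

/** Hypothetical rule that swaps two adjacent LogicalFilters. */
public class FilterSwapRule extends RelOptRule {
  public static final FilterSwapRule INSTANCE = new FilterSwapRule();

  private FilterSwapRule() {
    super(operand(LogicalFilter.class, operand(LogicalFilter.class, any())),
        "FilterSwapRule");
  }

  @Override public void onMatch(RelOptRuleCall call) {
    final LogicalFilter topFilter = call.rel(0);
    final LogicalFilter bottomFilter = call.rel(1);
    // Rebuild the pair with the conditions exchanged: the old top condition
    // is now evaluated first, the old bottom condition on top of it.
    final LogicalFilter newBottom = LogicalFilter.create(
        bottomFilter.getInput(), topFilter.getCondition());
    final LogicalFilter newTop = LogicalFilter.create(
        newBottom, bottomFilter.getCondition());
    call.transformTo(newTop);
  }
}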
I try something like this:
Iterate until you find your window rel, and then:
LogicalWindow windowAgg = (LogicalWindow) rel;
int windowRange = createWindowFrame(windowAgg.getConstants());
...
private int createWindowFrame(List<RexLiteral> constants) {
  int windowFrame = 0;
  for (RexLiteral con : constants)
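A sketch of how such a helper might be completed, assuming the frame size is
obtained by summing the numeric constants that the LogicalWindow carries for
its bounds (this is a guess at the intent, not the original code):

import java.math.BigDecimal;
import java.util.List;
import org.apache.calcite.rex.RexLiteral;

final class WindowFrames {
  private WindowFrames() {}

  /** Hypothetical completion: derive a frame size from the window constants. */
  static int createWindowFrame(List<RexLiteral> constants) {
    int windowFrame = 0;
    for (RexLiteral con : constants) {
      // Bounds such as "$2 PRECEDING" reference these literals; numeric
      // literals carry their value as a BigDecimal.
      Object value = con.getValue();
      if (value instanceof BigDecimal) {
        windowFrame += ((BigDecimal) value).intValue();
      }
    }
    return windowFrame;
  }
}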
I created a JIRA issue (I hope I did it right as it is my first). Thank you
Julian.
2017-01-19 22:12 GMT+02:00 Julian Hyde :
> Can you log a JIRA case for this? I will answer there.
>
> On Thu, Jan 19, 2017 at 11:25 AM, Γιώργος Θεοδωράκης
> wrote:
> > Hello,
> >
&
Hello,
I have created my own operators and Convention to apply my custom cost
logic. I have tried many rules with both Volcano and HepPlanner and
everything works fine. When I apply LoptOptimizeRule I get the correct
output. However, when I try to use:
JoinPushThroughJoinRule.LEFT,
JoinPushThrough
I had to set the convention right like this before using the transform
method:
RelTraitSet traitSet =
planner.getEmptyTraitSet().replace(SaberRel.SABER_LOGICAL);
RelNode volcanoPlan = planner.transform(0, traitSet, convertedNode);
2016-12-22 16:48 GMT+02:00 Γιώργος Θεοδωράκης :
> I h
numerable convention? When I add the
Enumerable rules, I get the Enumerable logical plan and not my custom one. What
should I do?
2016-12-17 22:11 GMT+02:00 Γιώργος Θεοδωράκης :
> Hello,
>
> I think I have understood the basics about RelOptCost, transformer and
> converter rules and how to
Hello,
I think I have understood the basics about RelOptCost, transformer and
converter rules, and how to use them in Volcano in order to set my own
cost model, from examples I have seen mainly in Drill and Hive.
As I am thinking about it right now, I should:
1) define my cost model and optimizati
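For the converter-rule part, a minimal sketch of what such a rule can look
like, assuming a hypothetical custom convention (standing in for what the
thread elsewhere calls SaberRel.SABER_LOGICAL) and the hypothetical MyFilter
operator; the stock ConverterRule base class does the trait bookkeeping:

import org.apache.calcite.plan.Convention;
import org.apache.calcite.rel.RelNode;
import org.apache.calcite.rel.convert.ConverterRule;
import org.apache.calcite.rel.logical.LogicalFilter;

/** Hypothetical converter rule that moves a LogicalFilter out of
 *  Convention.NONE into a custom convention so a custom cost model applies. */
public class MyFilterConverterRule extends ConverterRule {
  /** Hypothetical custom convention. */
  public static final Convention MY_CONVENTION =
      new Convention.Impl("MY_LOGICAL", RelNode.class);

  public static final MyFilterConverterRule INSTANCE = new MyFilterConverterRule();

  private MyFilterConverterRule() {
    super(LogicalFilter.class, Convention.NONE, MY_CONVENTION,
        "MyFilterConverterRule");
  }

  @Override public RelNode convert(RelNode rel) {
    final LogicalFilter filter = (LogicalFilter) rel;
    // Ask the planner to convert the input into the custom convention, too.
    final RelNode input = convert(filter.getInput(),
        filter.getInput().getTraitSet().replace(MY_CONVENTION));
    // MyFilter is the hypothetical custom operator with its own computeSelfCost.
    return new MyFilter(filter.getCluster(),
        filter.getTraitSet().replace(MY_CONVENTION), input, filter.getCondition());
  }
}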
Hello,
I am trying to improve my query planner based on Hive's implementation of
CalcitePlanner (
https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java).
I have split my optimizing procedure in a similar way to Hive's planner.
At first, I use
Hello,
I am trying to get an optimized join reordering for a given RelNode. Until
now, I have used VolcanoPlanner with these rules, and it worked well for a small
number of joins:
JoinPushThroughJoinRule.LEFT,
JoinPushThroughJoinRule.RIGHT,
JoinAssociateRule.INSTANCE
W
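For larger join counts, one approach (a sketch, assuming the plan still uses
the stock logical operators) is a heuristic pass that collapses the join tree
into a MultiJoin and lets LoptOptimizeJoinRule pick an order, instead of
exhaustive exploration with the associativity/commutativity rules in Volcano:

import org.apache.calcite.plan.hep.HepPlanner;
import org.apache.calcite.plan.hep.HepProgram;
import org.apache.calcite.plan.hep.HepProgramBuilder;
import org.apache.calcite.rel.RelNode;
import org.apache.calcite.rel.rules.JoinToMultiJoinRule;
import org.apache.calcite.rel.rules.LoptOptimizeJoinRule;

final class JoinReorder {
  private JoinReorder() {}

  /** Heuristic join reordering: Join tree -> MultiJoin -> reordered Joins. */
  static RelNode reorderJoins(RelNode root) {
    HepProgram program = new HepProgramBuilder()
        .addRuleInstance(JoinToMultiJoinRule.INSTANCE)
        .addRuleInstance(LoptOptimizeJoinRule.INSTANCE)
        .build();
    HepPlanner hepPlanner = new HepPlanner(program);
    hepPlanner.setRoot(root);
    return hepPlanner.findBestExp();
  }
}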
uleCall.java:236 is just a “re-throw”).
>
> > On Nov 15, 2016, at 3:59 PM, Γιώργος Θεοδωράκης
> wrote:
> >
> > Hello Julian,
> >
> > I get no matter what I do this exception:
> > Exception in thread "main" java.lang.AssertionError: Internal
> flagged non-deterministic (or something) to prevent the merge from
> happening.
>
> Julian
>
> > On Nov 14, 2016, at 1:06 AM, Γιώργος Θεοδωράκης
> wrote:
> >
> > Hello,
> >
> > I want to create a rule that pushes a filter through another filter ( I
&
Hello,
I want to create a rule that pushes a filter through another filter (I
don't merge them) according to their selectivities, to optimize the final
plan. I am using other rules as templates to create it, but I keep getting
errors, as I haven't correctly understood the basics. I want to have
som
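A small sketch of the selectivity comparison such a rule can make before
swapping the filters; the helper name is illustrative, and the estimates come
from Calcite's RelMetadataQuery:

import org.apache.calcite.rel.RelNode;
import org.apache.calcite.rel.core.Filter;
import org.apache.calcite.rel.metadata.RelMetadataQuery;

/** Hypothetical helper used from a rule's onMatch to decide the filter order. */
final class FilterOrdering {
  private FilterOrdering() {}

  /** Returns true if the upper filter is more selective and should sit below. */
  static boolean shouldSwap(Filter upper, Filter lower) {
    RelMetadataQuery mq = RelMetadataQuery.instance();
    RelNode base = lower.getInput();
    Double upperSel = mq.getSelectivity(base, upper.getCondition());
    Double lowerSel = mq.getSelectivity(base, lower.getCondition());
    // Evaluate the more selective predicate first so fewer rows flow upwards.
    return upperSel != null && lowerSel != null && upperSel < lowerSel;
  }
}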
do not claim that it is
> all implemented.
>
> Did you do a search of the existing tests? JdbcTest.testWinAgg2 features
> windows that have a variety of bounds, and produces the correct results.
> There are also tests in winagg.iq.
>
> I suspect that the “constants” field of Windo
icalProject(productid=[$0], EXPR$1=[CASE(>($2, 0), CAST($3):INTEGER,
null)])
LogicalWindow(window#0=[window(partition {} order by [0] range between $2
PRECEDING and CURRENT ROW aggs [COUNT($1), $SUM0($1)])])
LogicalProject(productid=[$1], units=[$2])
LogicalTableScan(table=[[s, ord
ter the volcano pass should work.
>
> On Tue, Nov 1, 2016 at 4:05 AM, Γιώργος Θεοδωράκης <
> giwrgosrth...@gmail.com>
> wrote:
>
> > I am wondering if it is possible to push down projections in Volcano
> > generally. With the cost model of Volcano, a projection adds r
pushed-down projections. I also tried to use a combination of both planners
but I got errors. What should I do?
2016-10-27 16:00 GMT+03:00 Γιώργος Θεοδωράκης :
> I fixed the second error by changing my Statistics to:
>
> public Statistic getStatistic() {
> int rowCount = rows.size();
>
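A minimal sketch of a ScannableTable that reports its real row count through
getStatistic(), so the planner does not fall back to the default estimate of
100 rows (the class name, columns, and in-memory row list are assumptions, not
the original code):

import java.util.List;
import org.apache.calcite.DataContext;
import org.apache.calcite.linq4j.Enumerable;
import org.apache.calcite.linq4j.Linq4j;
import org.apache.calcite.rel.type.RelDataType;
import org.apache.calcite.rel.type.RelDataTypeFactory;
import org.apache.calcite.schema.ScannableTable;
import org.apache.calcite.schema.Statistic;
import org.apache.calcite.schema.Statistics;
import org.apache.calcite.schema.impl.AbstractTable;
import org.apache.calcite.sql.type.SqlTypeName;
import org.apache.calcite.util.ImmutableBitSet;
import com.google.common.collect.ImmutableList;

/** Hypothetical in-memory table that reports its actual row count. */
public class InMemoryTable extends AbstractTable implements ScannableTable {
  private final List<Object[]> rows;

  public InMemoryTable(List<Object[]> rows) {
    this.rows = rows;
  }

  @Override public RelDataType getRowType(RelDataTypeFactory typeFactory) {
    return typeFactory.builder()
        .add("ID", SqlTypeName.INTEGER)
        .add("UNITS", SqlTypeName.INTEGER)
        .build();
  }

  @Override public Enumerable<Object[]> scan(DataContext root) {
    return Linq4j.asEnumerable(rows);
  }

  @Override public Statistic getStatistic() {
    // Real row count (and no known unique keys) instead of the 100-row default.
    return Statistics.of(rows.size(), ImmutableList.<ImmutableBitSet>of());
  }
}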
Γιώργος Θεοδωράκης :
> Hi,
> I was missing the implementations of the operators, and I added the built-in
> EnumerableRules until I create my own, in order to fix it. However, the
> plan I get from Volcano Optimizer is different from the one I get from
> HepPlanner, although I use the
'll re-address once I can find its usage and
> benefits.
>
> Hope this helps.
>
> Thanks,
> Jungtaek Lim (HeartSaVioR)
>
> On Tue, Oct 4, 2016 at 7:08 PM, Γιώργος Θεοδωράκης
> wrote:
>
> > I think I did as you said:
> >
> > https://github.com/giwrgosthe
Hello Jordan,
I am trying to create my custom cost functions, and as you suggested, I should
build my own relational algebra and override computeSelfCost. After I have
created my CustomRelNodes, I want to add some of the rules Calcite has to
optimize my initial plan. Until now, I am using the heuristic planne
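A minimal sketch of such a heuristic pass with a few of Calcite's stock rules
(the rule selection is only an example; the stock rules match the Logical*
operators, so a plan built from custom nodes may need converter rules or
custom rule variants first):

import org.apache.calcite.plan.hep.HepPlanner;
import org.apache.calcite.plan.hep.HepProgramBuilder;
import org.apache.calcite.rel.RelNode;
import org.apache.calcite.rel.rules.FilterJoinRule;
import org.apache.calcite.rel.rules.FilterMergeRule;
import org.apache.calcite.rel.rules.FilterProjectTransposeRule;
import org.apache.calcite.rel.rules.ProjectMergeRule;

final class HeuristicPass {
  private HeuristicPass() {}

  /** Run a handful of built-in rewrite rules over a logical plan. */
  static RelNode run(RelNode root) {
    HepPlanner planner = new HepPlanner(new HepProgramBuilder()
        .addRuleInstance(FilterProjectTransposeRule.INSTANCE)
        .addRuleInstance(FilterMergeRule.INSTANCE)
        .addRuleInstance(ProjectMergeRule.INSTANCE)
        .addRuleInstance(FilterJoinRule.FILTER_ON_JOIN)
        .build());
    planner.setRoot(root);
    return planner.findBestExp();
  }
}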
e out what physical plan you want the
> planner to create. Then work backwards and figure out a cost model
> whereby that plan is better than the other alternatives, and write
> transformation rules that can validly create that physical plan from
> your logical plan.
>
> Julian
&g
ld ones? Any hints on where to start?
2016-10-10 15:33 GMT+03:00 Γιώργος Θεοδωράκης :
> Hello,
>
> I am trying to optimize the logical/physical plan of a given streaming
> query with Calcite and execute it in a separate engine. So far, I am using
> heuristic planner and some cost-b
ows between
UNBOUNDED PRECEDING and UNBOUNDED FOLLOWING aggs [SUM($1), COUNT($1)])],
window#1=[window(partition {0} order by [] range between UNBOUNDED
PRECEDING and UNBOUNDED FOLLOWING aggs [SUM($1), COUNT($1)])])
LogicalProject(productid=[$1], units=[$2])
LogicalTableScan(table=[[
Hi,
I was wondering if there is any way to define windows with SQL in
Calcite for queries that don't have an aggregate function? For example, I
want to define the queries from Linear Road Benchmark of the STREAM project
(http://infolab.stanford.edu/stream/cql-benchmark.html):
1)
SELECT D
Hello,
I am trying to optimize the logical/physical plan of a given streaming
query with Calcite and execute it in a separate engine. So far, I am using
the heuristic planner and some cost-based push-down rules, and I get a
"relational" optimization on the plan. By relational, I mean that this is
basic o
physical convention instead of Convention.NONE. I can respond
> with a full example if you need it in a bit. I just don't have the capacity
> to write it ATM.
>
> On Mon, Oct 3, 2016 at 8:51 AM, Γιώργος Θεοδωράκης <
> giwrgosrth...@gmail.com>
> wrote:
>
> > Hi
Hi,
I want to parse an SQL query and transform it into an optimized relational
plan (not convert it to a physical one!) using Calcite rules based on my
database schema and metadata. Right now, the only helpful example I have
found for my purpose is taken from
https://github.com/milinda/samza-sql/blob/ma
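A minimal sketch of that flow with the Frameworks/Planner API (the schema name
and setup are placeholders): parse, validate, convert to a logical RelNode
tree, and hand it to a rule-based planner without introducing any physical
convention:

import org.apache.calcite.plan.RelOptUtil;
import org.apache.calcite.rel.RelNode;
import org.apache.calcite.schema.SchemaPlus;
import org.apache.calcite.sql.SqlNode;
import org.apache.calcite.tools.FrameworkConfig;
import org.apache.calcite.tools.Frameworks;
import org.apache.calcite.tools.Planner;

final class SqlToRel {
  private SqlToRel() {}

  /** SQL text to a logical plan; "MY_SCHEMA" is a placeholder schema name. */
  static RelNode toLogicalPlan(SchemaPlus rootSchema, String sql) throws Exception {
    FrameworkConfig config = Frameworks.newConfigBuilder()
        .defaultSchema(rootSchema.getSubSchema("MY_SCHEMA"))
        .build();
    Planner planner = Frameworks.getPlanner(config);
    SqlNode parsed = planner.parse(sql);
    SqlNode validated = planner.validate(parsed);
    RelNode rel = planner.rel(validated).project();
    System.out.println(RelOptUtil.toString(rel)); // inspect the logical plan
    return rel;
  }
}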
Hello,
I have a logical plan as a RelNode, and I want to break it into single
operators and get a list of RelNodes. With RelNode.getInputs() I get every
node except the parent. How do I get the parent node? For example, I have
LogicalProject(orderid=[$0], productid=[$1], units=[$2])
LogicalFilter(
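One way to do this (a sketch, not the only option) is Calcite's RelVisitor:
the tree only has downward getInputs() links, but the visitor callback is
handed each node's parent during the walk, so both the flat operator list and
the parent of every node are available:

import java.util.ArrayList;
import java.util.List;
import org.apache.calcite.rel.RelNode;
import org.apache.calcite.rel.RelVisitor;

final class OperatorCollector {
  private OperatorCollector() {}

  /** Collect every operator in the tree, top-down. */
  static List<RelNode> collect(RelNode root) {
    final List<RelNode> operators = new ArrayList<>();
    new RelVisitor() {
      @Override public void visit(RelNode node, int ordinal, RelNode parent) {
        operators.add(node);
        // "parent" is the operator directly above "node"; null for the root.
        super.visit(node, ordinal, parent); // recurse into the inputs
      }
    }.go(root);
    return operators;
  }
}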
> count. The table scan should compute its cost from that, and uses 100d as a
> default IIRC.
>
> > On Sep 25, 2016, at 1:56 PM, Γιώργος Θεοδωράκης
> wrote:
> >
> > I believe it has to do with the implementation of my tables, as I get
> fixed
> > numbers:
&g
)
How can I define a table that returns correct metadata for its rows?
Right now my tables implement ScannableTable.
2016-09-24 21:15 GMT+03:00 Γιώργος Θεοδωράκης :
> Hello,
>
> I am using a HepPlanner for query optimization on logical operators. When
> I run the optimizations,
Hello,
I am using a HepPlanner for query optimization on logical operators. When I
run the optimizations, I get an optimized plan according to the rules I
have used, but wrong metadata results. My code is:
SqlNode sqlNode = planner.parse(query);
SqlNode validatedSqlNode = planner.validate(sqlNode
ess of the rules, and
> don’t want to execute queries, create a sub-class of RelOptRulesTest.
>
> Julian
>
>
> > On Sep 17, 2016, at 8:08 AM, Γιώργος Θεοδωράκης
> wrote:
> >
> > Hi,
> >
> > I am trying to create a basic planner that enforces rules
Hi,
I am trying to create a basic planner that enforces rules on simple
queries. At the moment, I have created a planner from the examples (and the
samza-sql integration I found online) and used HepPlanner for testing some
rules. My question is: what form should my test data take? I am using
something li
what it can't find.
>
> You could also try simply doing an "Import Maven Project" provided by
> m2e's integration instead of invoking `mvn eclipse:eclipse` to generate the
> configuration files. I import the project directly and have success with
> this.
>
&
-core 1.9.0, example-csv-1.9.0, calcite-linq4j 1.9.0 after creating
them with the mvn install command.
2016-09-06 21:15 GMT+03:00 Γιώργος Θεοδωράκης :
> I've tried with avatica, avatica-metrics, standalone-server and server,
> all in version 1.8.0 jars from maven repository as dependenci
Hello, I am trying to import the latest version of Calcite into Eclipse. I have
downloaded the source code from GitHub as a zip, used these commands:
$ mvn install
$ mvn eclipse:eclipse
and finally imported the project as an existing Maven project. However, I
get many errors (in core's pom.xml, classes miss
and 1.8.0
> versions of avatica- jars.
>
> Julian
>
>
> > On Sep 6, 2016, at 7:45 AM, Γιώργος Θεοδωράκης
> wrote:
> >
> > I have imported as external jars calcite-example-csv, calcite-core,
> avatica
> > , linq4j, avatica-metrics, avatica-standalone wit
should use a 1.9.0-SNAPSHOT version of other Calcite jars.
> (You can build and install in your local repo using ‘mvn install’.)
>
> 1.9 should be released in a couple of weeks.
>
> Julian
>
> > On Sep 5, 2016, at 3:04 AM, Γιώργος Θεοδωράκης
> wrote:
> >
> >
lumn names in your
> query.
>
> Julian
>
> > On Sep 4, 2016, at 09:43, Γιώργος Θεοδωράκης
> wrote:
> >
> > I have correctly used sqlline to run queries on a streaming table, but
> now
> > I face problems trying to implement it programmatically with java. I h
g-csv
).
Can somebody help me by giving a template or finding what's wrong with my
code?
Thank you in advance,
George
2016-09-03 18:14 GMT+03:00 Γιώργος Θεοδωράκης :
> When I tried a query like SELECT STREAM ss.depts.deptno FROM ss.depts
> WHERE ss.depts.deptno < 30; it gave me a co
nly
the name it wouldn't run. I still haven't fixed my sOrders.csv yet, but I
suppose it has to do with how I have created it.
2016-09-03 15:39 GMT+03:00 Γιώργος Θεοδωράκης :
> I am trying to create a simple streaming query ( like SELECT STREAM * FROM
> ORDERS WHERE units > 10). I
I am trying to create a simple streaming query (like SELECT STREAM * FROM
ORDERS WHERE units > 10). I have created a stream using a socket that saves
the orders in an sOrders.csv file and I have changed the
model-stream-table.json like this:
{
version: '1.0',
defaultSchema: 'CUSTOM_TABLE',
s
Hello,
My name is George and I am an undergraduate computer science student. I am
doing some research for my diploma thesis about query optimization on
distributed systems. After reading some basics about the Calcite project, I
thought I could use it as an SQL optimizer on top of Spark.
I have a Hadoo