[
https://issues.apache.org/jira/browse/DERBY-47?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12475414
]
A B commented on DERBY-47:
--------------------------
Thank you very much for your excellent questions, Mike. My attempted answers
are below...
> Can you say something why you chose to use x = ? predicate with special flag
> vs. just having a new
> multiple-probe inlist predicate
Good question. I guess the short answer is simply: code reuse. All of the
optimization, modification, generation, and execution-time logic for a
single-sided predicate is already written and has (presumably) been working for
years. Among other things this includes the notion of "start/stop" keys to
(re-)position an index scan, which is ultimately what we want and is something
that store already knows about. By using a flag we can slightly alter the
behavior at key points of certain methods and then, for everything else, we
just let Derby do what it already knows how to do. Minimal code changes are
required and if something breaks, odds are that it is in the "slightly altered"
behavior (or lack thereof), of which there is far less than "everything else".
If anyone knows how we could improve/simplify the logic and/or performance by
creating a new multi-probe predicate, then I am certainly open to investigating
that path further. But for now, creating an "x = ?" predicate with a special
flag seemed like the simplest and quickest way to go, and it appears to provide
the desired results. So that's where I ended up...
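To illustrate the general idea (a conceptual sketch only; the method names
below are made up for illustration and are not Derby's actual store API): at
execution time the probe predicate behaves like a plain "x = ?" start/stop
predicate, except that the index scan is re-positioned once for each value in
the sorted IN-list, using that value as both the start key and the stop key.
    // Conceptual sketch only; names are illustrative, not Derby's real API.
    for (DataValueDescriptor probeValue : sortedInListValues)
    {
        // The probe value is plugged in as both the start key and the
        // stop key, exactly as it would be for a normal "x = ?" predicate.
        reopenIndexScan(probeValue /* start key */, probeValue /* stop key */);
        while (fetchNextQualifyingRow())
        {
            // return qualifying rows to the caller as usual
        }
    }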
> Also what happens to a query that is effectively an IN list that is hand
> written using OR's instead
> (ie, where i = 1 or i = 2 or ...). Is that already changed to an IN list
> before we get to your new
> code here?
Yes, transformation of ORs into IN-lists occurs during preprocessing of the OR
list. In OrNode.preprocess() there is logic to recognize whether an OR list is
transformable into an IN-list and, if so, the IN-list is created and then the
"preprocess()" method of the IN-list is called. Since the creation of "probe
predicates" occurs as part of IN-list preprocessing, this means that, yes, ORs
are already converted to an IN-list before my new code takes effect.
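For instance (using the same example table as below), a hand-written OR chain
such as:
select i, c from t1 where i = 1 or i = 3 or i = 5
is recognized during that preprocessing and treated as if it had been written:
select i, c from t1 where i in (1, 3, 5)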
As a side note, if there is an OR clause which itself has an IN-list as one of
its operands then OrNode preprocessing will, with my proposed changes, combine
the existing IN-list with the newly-created IN-list. For example:
select i, c from t1 where i in (1, 3, 5, 6) or i = 2 or i = 4
will be changed to:
select i, c from t1 where i in (1, 3, 5, 6, 2, 4)
This conversion will happen as part of OrNode.preprocess(), as well.
> What is the costing % number of rows for a where IN (?) (ie. a parameter at
> compile time vs a
> constant, in a non-unique index)? Is this just the cardinality statistic if
> it exists?
Generally speaking the way costing for a base table conglomerate works is that
we figure out how many rows there are in the table before any predicates are
applied. Then, if we have a start/stop predicate *and* we have statistics, we
will calculate a percentage of the rows expected (called "start/stop
selectivity") based on the statistics. This ultimately brings us to the
"selectivity(Object[]) method of StatisticsImpl, where there is the following
code:
if (numRows == 0.0)
return 0.1;
return (double)(1/(double)numUnique);
I.e. the selectivity is 1 over the number of unique values in the conglomerate.
Is this what you mean by "just the cardinality statistic if it exists?"
In any event, we then multiply that percentage by the estimated row count to
get a final estimated row count. (I'm leaving out lots of "magic" costing
operations here to keep things simple, and because I don't really understand
all of that magic myself...)
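To put some purely illustrative numbers on that (these are made up for the
sake of example, not measurements from the patch): suppose the conglomerate
has 10,000 rows and the indexed column has 1,000 distinct values. Then:
    start/stop selectivity   = 1/1,000 = 0.001
    estimated rows per probe = 10,000 * 0.001 = 10
i.e. a single "x = ?" probe would be estimated to return about 10 rows.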
> What is the default without the statistic?
If we do not have statistics for a specific conglomerate then we will simply
default the start/stop selectivity to 1.0, i.e. the row count will not be
adjusted (at least not as it relates to this discussion).
> Where I am going is that it probably does not make sense to have the estimate
> of the sum of terms
> be larger than the number of rows in the db.
Yes, you're absolutely right. This actually occurred to me yesterday, which is
why I was poking around the stats code and thus was able to answer your
previous question ;) I agree that the estimated row count should not exceed
the total number of rows. I think we could just account for this by adding an
explicit check to see if rowCount * sizeOfInList yields a number larger than
the number of rows in the conglomerate. If so then we set it to the number of
rows in the conglomerate and that's that.
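In code terms the check could look something like this (a minimal sketch with
made-up variable names, not the actual patch):
    // Sketch only.  "rowCount" is the estimated row count for a single
    // probe (after the start/stop selectivity has been applied) and
    // "rowsInConglomerate" is the total number of rows in the conglomerate.
    double probeEstimate = rowCount * sizeOfInList;
    if (probeEstimate > rowsInConglomerate)
        probeEstimate = rowsInConglomerate;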
> And just want to understand how many terms will it take before we give up on
> the multiple probe.
Another great question. The answer is that we do not ever give up on
multi-probing as part of "costing" per se. Rather, we calculate a cost and
then we compare that cost with all of the other costs found so far; if it's
cheaper we use it, otherwise we discard it. Note that "cheaper" here
encapsulates a lot of other logic and optimizer info that is far beyond the
scope of this discussion.
So in the context of row counts, if the number of values in the IN-list
multiplied by the estimated row count (after the statistics-based selectivity
is applied) yields a high percentage of the rows in the table (e.g. all of
them), then the odds of the optimizer choosing to use that particular index
are lower. It *may* still choose the index, in which case multi-probing will
take effect, but it probably will not (it all depends). Thus the point at
which we give up on multi-probing is a function of how unique the column
values are and how many values are in the IN-list. If you're just looking at
the size of the IN-list, then smaller lists are more likely to result in
IN-list probing than larger ones, which I think is what we would expect.
That's a bit of a vague answer but so much of it depends on the query and the
data in question that I wouldn't want to say anything more specific than that...
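To make that slightly more concrete, using the same made-up numbers as above
(10,000 rows, 1,000 distinct values, so roughly 10 estimated rows per probe):
    10 values in the IN-list    -> ~100 estimated rows (1% of the table);
                                   probing the index is very likely to look
                                   cheaper than a full scan
    1,000 values in the IN-list -> ~10,000 estimated rows, i.e. as many rows
                                   as the whole table (or more, without the
                                   cap discussed above); a table scan will
                                   probably win
Where exactly the crossover falls depends on all of the other costing factors
mentioned above, so these numbers only show the general shape of the tradeoff.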
> Some possible improvements to IN optimization
> ---------------------------------------------
>
> Key: DERBY-47
> URL: https://issues.apache.org/jira/browse/DERBY-47
> Project: Derby
> Issue Type: Improvement
> Components: SQL
> Affects Versions: 10.0.2.0
> Environment: all
> Reporter: Sunitha Kambhampati
> Assigned To: A B
> Attachments: d47_engine_doNotCommit_v1.patch,
> d47_engine_doNotCommit_v1.stat, d47_mp_relOpPredCheck_v1.patch,
> d47_mp_relOpPredCheck_v1.stat, derby-47-performance-data.txt,
> derby-47-performance-data.txt, Derby47PerformanceTest.java,
> Derby47PerformanceTest.java, InListOperatorNode.java,
> QueryPlanUniqueIndexAndWordIndexOneTerm.txt,
> QueryPlanUniqueIndexAndWordIndexTwoTerms.txt,
> QueryPlanUniqueIndexOnlyOneTerm.txt, QueryPlanUniqueIndexOnlyTwoTerms.txt,
> readlocks.diff, readlocks_withContext.diff
>
>
> Consider a simple case of -
> A table tbl has 10000 rows, there is a primary key index on i1
> and the query in question is
> select * from tbl where i1 in (-1,100000)
> derby does a table scan of the entire table even though the "IN" list has
> only two values and the comparison is on a field that has an index.
> Briefly looking at the code, it seems like we insert a between and use the IN
> list to get the start and stop values for the scan. Thus the range of the
> values in the "IN" list here plays an important role.
> Thus if the query was changed to select * from tbl where i1 in (-1, 1), an
> index scan would be chosen.
> It would be nice if we could do something clever in this case where there is
> clearly an index on the field and the number of values in the IN list is
> known. Maybe use the rowcount estimate and the IN list size to do some
> optimizations.
> - consider the length of the "IN" list to do searches on the table. ie use
> the IN list values to do index key searches on the table,
> -or try to convert it to a join. Use the "IN" list values to create a
> temporary table and do a join. It is most likely that the optimizer will
> choose the table with "IN" list here as the outer table in the join and thus
> will do key searches on the larger table.
> -------------------------------------------------------------------
> some query plans that I logged using derby.language.logQueryPlan=true for
> some similar queries:
> Table has ascending values from 0 - 9999 for i1. primary key index on i1.
> GMT Thread[UT0,5,main] (XID = 19941), (SESSIONID = 0), select * from
> scanfixed where i1 in (-1,9999,9998,9997,9996,9995,9994,9993,9992,9991,9990)
> ******* Project-Restrict ResultSet (2):
> Number of opens = 1
> Rows seen = 10000
> Rows filtered = 9990
> restriction = true
> projection = false
> constructor time (milliseconds) = 0
> open time (milliseconds) = 0
> next time (milliseconds) = 0
> close time (milliseconds) = 0
> restriction time (milliseconds) = 0
> projection time (milliseconds) = 0
> optimizer estimated row count: 750.38
> optimizer estimated cost: 8579.46
> Source result set:
> Table Scan ResultSet for SCANFIXED at read committed isolation level
> using instantaneous share row locking chosen by the optimizer
> Number of opens = 1
> Rows seen = 10000
> Rows filtered = 0
> Fetch Size = 16
> constructor time (milliseconds) = 0
> open time (milliseconds) = 0
> next time (milliseconds) = 0
> close time (milliseconds) = 0
> next time in milliseconds/row = 0
> scan information:
> Bit set of columns fetched=All
> Number of columns fetched=9
> Number of pages visited=417
> Number of rows qualified=10000
> Number of rows visited=10000
> Scan type=heap
> start position:
> null
> stop position:
> null
> qualifiers:
> Column[0][0] Id: 0
> Operator: <=
> Ordered nulls: false
> Unknown return value: false
> Negate comparison result: false
> Column[0][1] Id: 0
> Operator: <
> Ordered nulls: false
> Unknown return value: true
> Negate comparison result: true
> optimizer estimated row count: 750.38
> optimizer estimated cost: 8579.46
> ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------
> 2004-10-14 18:59:47.577 GMT Thread[UT0,5,main] (XID = 19216), (SESSIONID =
> 0), select * from scanfixed where i1 in
> (9999,9998,9997,9996,9995,9994,9993,9992,9991,9990)
> ******* Project-Restrict ResultSet (3):
> Number of opens = 1
> Rows seen = 10
> Rows filtered = 0
> restriction = true
> projection = true
> constructor time (milliseconds) = 0
> open time (milliseconds) = 0
> next time (milliseconds) = 0
> close time (milliseconds) = 0
> restriction time (milliseconds) = 0
> projection time (milliseconds) = 0
> optimizer estimated row count: 4.80
> optimizer estimated cost: 39.53
> Source result set:
> Index Row to Base Row ResultSet for SCANFIXED:
> Number of opens = 1
> Rows seen = 10
> Columns accessed from heap = {0, 1, 2, 3, 4, 5, 6, 7, 8}
> constructor time (milliseconds) = 0
> open time (milliseconds) = 0
> next time (milliseconds) = 0
> close time (milliseconds) = 0
> optimizer estimated row count: 4.80
> optimizer estimated cost: 39.53
> Index Scan ResultSet for SCANFIXED using index SCANFIXEDX at
> read committed isolation level using instantaneous share row locking chosen
> by the optimizer
> Number of opens = 1
> Rows seen = 10
> Rows filtered = 0
> Fetch Size = 16
> constructor time (milliseconds) = 0
> open time (milliseconds) = 0
> next time (milliseconds) = 0
> close time (milliseconds) = 0
> next time in milliseconds/row = 0
> scan information:
> Bit set of columns fetched=All
> Number of columns fetched=2
> Number of deleted rows visited=0
> Number of pages visited=2
> Number of rows qualified=10
> Number of rows visited=10
> Scan type=btree
> Tree height=2
> start position:
> >= on first 1 column(s).
> Ordered null semantics on the following columns:
> stop position:
> > on first 1 column(s).
> Ordered null semantics on the following columns:
> qualifiers:
> None
> optimizer estimated row count: 4.80
> optimizer estimated cost: 39.53
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.