GitHub user glapark opened a pull request:
https://github.com/apache/hive/pull/454
findBestMatch() tests the inclusion of default partition name
This pull request implements the change discussed on the Hive user mailing
list regarding the non-deterministic behavior of Hive in generating DAGs. From
the discussion thread:
I have been looking further into this issue, and have found that the
non-deterministic behavior of Hive in generating DAGs is actually due to the
logic in AggregateStatsCache.findBestMatch() called from
AggregateStatsCache.get(), as well as the disproportionate distribution of
nulls in __HIVE_DEFAULT_PARTITION__ (in the case of the TPC-DS dataset).
Here is what is happening. Let me use web_sales table and ws_web_site_sk
column in the 10TB TPC-DS dataset as a running example.
1. In the course of running TPC-DS queries, Hive asks MetaStore about the
column statistics of 1823 partNames in the web_sales/ws_web_site_sk
combination, either without __HIVE_DEFAULT_PARTITION__ or with
__HIVE_DEFAULT_PARTITION__.
--- Without __HIVE_DEFAULT_PARTITION__, it reports a total of 901180
nulls.
--- With __HIVE_DEFAULT_PARTITION__, however, it reports a total of
1800087 nulls, almost twice as many.
2. The first call to MetaStore returns the correct result, but all
subsequent requests are likely to return the same result from the cache,
irrespective of the inclusion of __HIVE_DEFAULT_PARTITION__. This is because
AggregateStatsCache.findBestMatch() treats __HIVE_DEFAULT_PARTITION__ in the
same way as other partNames, and the difference in the size of partNames[] is
just 1. The outcome depends on the duration of intervening queries, so
everything is now non-deterministic.
3. If a wrong value of numNulls is returned, Hive generates a different
DAG, which usually takes much longer than the correct one (e.g., 150s to 1000s
for the first part of Query 24, and 40s to 120s for Query 5). I guess the
problem is particularly pronounced here because of the huge number of nulls in
__HIVE_DEFAULT_PARTITION__. It is ironic to see that the query optimizer is so
efficient that a single wrong guess of numNulls creates a very inefficient DAG.
Note that this behavior cannot be avoided by setting
hive.metastore.aggregate.stats.cache.max.variance to zero because the
difference in the number of partNames[] between the argument and the entry in
the cache is just 1.
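The cache-hit behavior described in steps 1 and 2 can be sketched roughly as follows. This is not the actual Hive implementation of findBestMatch(); the class and method names are hypothetical, and the matching rule is a simplified stand-in for the real variance logic. The point it illustrates is that a one-partName difference out of 1823 is far below any proportional tolerance, so requests with and without __HIVE_DEFAULT_PARTITION__ resolve to the same cached aggregate:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class VarianceMatchSketch {
    static final String DEFAULT_PARTITION = "__HIVE_DEFAULT_PARTITION__";

    // Accept a cached entry when the fraction of partNames that differ
    // between the request and the cached entry is within maxVariance.
    // (Hypothetical simplification of a variance-based match.)
    static boolean isCloseEnough(List<String> requested, List<String> cached,
                                 double maxVariance) {
        Set<String> cachedSet = new HashSet<>(cached);
        long matching = requested.stream().filter(cachedSet::contains).count();
        long differing = (requested.size() - matching)
                + (cached.size() - matching);
        return (double) differing / requested.size() <= maxVariance;
    }

    public static void main(String[] args) {
        // 1823 partNames without the default partition, 1824 with it,
        // mirroring the web_sales example above.
        List<String> without = new ArrayList<>();
        for (int i = 0; i < 1823; i++) {
            without.add("ws_sold_date_sk=" + i);
        }
        List<String> with = new ArrayList<>(without);
        with.add("ws_sold_date_sk=" + DEFAULT_PARTITION);

        // A one-partName difference out of 1823 is about 0.05%, so under a
        // typical non-zero tolerance both request shapes hit the same cached
        // aggregate, and whichever numNulls was cached first wins.
        System.out.println(isCloseEnough(without, with, 0.01)); // true
    }
}
```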
I think that AggregateStatsCache.findBestMatch() should treat
__HIVE_DEFAULT_PARTITION__ in a special way, by not returning the result in the
cache if there is a difference in the inclusion of partName
__HIVE_DEFAULT_PARTITION__ (or should provide the user with an option to
activate this feature).
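The proposed guard could look roughly like the following. Again this is only a sketch, not the patch itself; the class and method names are hypothetical. Before a cached entry is allowed to satisfy a request, both partName lists must agree on whether __HIVE_DEFAULT_PARTITION__ is included:

```java
import java.util.List;

public class DefaultPartitionCheck {
    static final String DEFAULT_PARTITION = "__HIVE_DEFAULT_PARTITION__";

    // A cached entry is eligible only when the request and the cached
    // entry agree on the inclusion of __HIVE_DEFAULT_PARTITION__.
    static boolean defaultPartitionAgrees(List<String> requested,
                                          List<String> cached) {
        boolean inRequested =
                requested.stream().anyMatch(p -> p.contains(DEFAULT_PARTITION));
        boolean inCached =
                cached.stream().anyMatch(p -> p.contains(DEFAULT_PARTITION));
        return inRequested == inCached;
    }

    public static void main(String[] args) {
        List<String> without = List.of("p=1", "p=2");
        List<String> with = List.of("p=1", "p=2", "p=" + DEFAULT_PARTITION);

        // Mixed inclusion: reject the cache hit and fall through to the
        // MetaStore, so each request shape gets its own correct numNulls.
        System.out.println(defaultPartitionAgrees(without, with)); // false

        // Same inclusion on both sides: the entry remains eligible.
        System.out.println(defaultPartitionAgrees(with, with)); // true
    }
}
```

With such a check in place, the two request shapes populate and hit separate aggregates, so the ~900K-null and ~1.8M-null results can no longer be confused with each other.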
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/mr3-project/hive
compare.default.partition.findBestMatch
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/hive/pull/454.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #454
----
commit 00034ddb4fd8b7e0615c991a5d15233a798a1968
Author: gla <gla@...>
Date: 2018-10-25T10:46:32Z
findBestMatch() tests the inclusion of default partition name
----