> But is it? Is it impossible for the BRIN bitmap index scan to return 0 rows
> (say, if the value being matched is outside the min/max boundary for every
> block range?) Granted, if we document that it always returns 0 and should be
> ignored, then confusing the actual 0 with the 0 as a
> On 30 Dec 2015, at 18:38, Emre Hasegeli wrote:
>
>> which is much closer to the actual number of rows removed by the index
>> recheck + the one left.
>
> Is it better to be closer? We are saying those are the "actual"
> values not the estimates. If we cannot provide the
Emre Hasegeli writes:
>> which is much closer to the actual number of rows removed by the index
>> recheck + the one left.
> Is it better to be closer? We are saying those are the "actual"
> values not the estimates. If we cannot provide the actual rows, I
> think it is
> On 30 Dec 2015, at 17:02, Tom Lane wrote:
>
> Oleksii Kliukin writes:
>> Bitmap Heap Scan on example (cost=744.44..757.64 rows=6 width=0) (actual
>> time=73.895..73.895 rows=0 loops=1)
>> Output: 1
>> Recheck Cond: (example.event_time = (now() -
Oleksii Kliukin writes:
>> On 30 Dec 2015, at 17:02, Tom Lane wrote:
>> Another idea would be to use the heap's row density as calculated
>> by the last ANALYZE (ie, reltuples/relpages), with a fallback to 100
>> if relpages=0. This'd only be convenient
> which is much closer to the actual number of rows removed by the index
> recheck + the one left.
Is it better to be closer? We are saying those are the "actual"
values not the estimates. If we cannot provide the actual rows, I
think it is better to provide nothing. Something closer to the
Oleksii Kliukin writes:
> Bitmap Heap Scan on example (cost=744.44..757.64 rows=6 width=0) (actual
> time=73.895..73.895 rows=0 loops=1)
> Output: 1
> Recheck Cond: (example.event_time = (now() - '5 mons'::interval))
> Rows Removed by Index Recheck: 4030
> Heap
> I don’t see how to solve this problem without changing explain analyze output
> to accommodate for “unknown” value. I don’t think “0” is a non-confusing
> representation of “unknown” for most people, and from the practical
> standpoint, a “best effort” estimate is better than 0 (i.e. I will
> On 30 Dec 2015, at 21:12, Tom Lane wrote:
>
> Emre Hasegeli writes:
>>> I don’t see how to solve this problem without changing explain analyze
>>> output to accommodate for “unknown” value. I don’t think “0” is a
>>> non-confusing representation of
Emre Hasegeli writes:
>> I don’t see how to solve this problem without changing explain analyze
>> output to accommodate for “unknown” value. I don’t think “0” is a
>> non-confusing representation of “unknown” for most people, and from the
>> practical
Tom Lane wrote:
> Emre Hasegeli writes:
> >> I don’t see how to solve this problem without changing explain analyze
> >> output to accommodate for “unknown” value. I don’t think “0” is a
> >> non-confusing representation of “unknown” for most people, and from the
> >>
Alvaro Herrera writes:
> Tom Lane wrote:
>> We do already have a nearby precedent for returning zero when we don't
>> have an accurate answer: that's what BitmapAnd and BitmapOr plan nodes
>> do. (This is documented btw, at the bottom of section 14.1.)
> Hmm, but
> On 30 Dec 2015, at 17:44, Tom Lane wrote:
>
> Oleksii Kliukin writes:
>>> On 30 Dec 2015, at 17:02, Tom Lane wrote:
>>> Another idea would be to use the heap's row density as calculated
>>> by the last ANALYZE (ie,
Hi,
While experimenting with BRIN on PostgreSQL 9.5RC1 I came across the following
plan (which is, btw, a very good example of how BRIN rocks for clustered
data: the size of this table is around 90GB, the size of the index is around
3MB):
explain (analyze, buffers, verbose) select 1 from