higepon <[email protected]> wrote:
> I found the current planner doesn't care about "lossy mode" on Bitmap Scan.
Good point. I saw this bad behavior on the DBT-3 (TPC-H) benchmark before.
A lossless bitmap scan was faster than a seq scan,
but a lossy bitmap scan was slower than a seq scan:
EXPLAIN ANALYZE SELECT * FROM test WHERE v < 0.2;
-- default
Bitmap Heap Scan on test (cost=3948.42..11005.77 rows=210588 width=8)
(actual time=47.550..202.925 rows=200142)
-- SET work_mem=64 (NOTICE: the cost is the same as above!)
Bitmap Heap Scan on test (cost=3948.42..11005.77 rows=210588 width=8)
(actual time=52.057..358.145 rows=200142)
-- SET enable_bitmapscan = off
Seq Scan on test (cost=0.00..16924.70 rows=210588 width=8)
(actual time=0.182..280.450 rows=200142)
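
For reference, the comparison above can be reproduced with something like
the following; the table definition is only an assumption, not the exact
one used in the benchmark:

CREATE TABLE test (id integer, v real);
INSERT INTO test SELECT i, random() FROM generate_series(1, 1000000) AS i;
CREATE INDEX test_v_idx ON test (v);
ANALYZE test;

EXPLAIN ANALYZE SELECT * FROM test WHERE v < 0.2;  -- default work_mem
SET work_mem = 64;             -- smallest allowed value; forces a lossy bitmap
EXPLAIN ANALYZE SELECT * FROM test WHERE v < 0.2;
SET enable_bitmapscan = off;   -- compare against a plain seq scan
EXPLAIN ANALYZE SELECT * FROM test WHERE v < 0.2;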
> My understanding is that we can know whether the plan is lossy or not
> as follows.
Sure, we need it! I'd also like some way to determine whether a bitmap scan
was actually lossy, and how much work_mem would be required to keep it
lossless. For example, a new GUC variable trace_bitmapscan could print
information about bitmap scans, like trace_sort does for sorting.
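
To illustrate the idea (trace_bitmapscan and the LOG line below are only a
sketch of the proposal; nothing here is implemented):

SET client_min_messages = log;   -- make LOG output visible in the client
SET trace_bitmapscan = on;       -- proposed GUC, analogous to trace_sort
SELECT count(*) FROM test WHERE v < 0.2;
-- LOG:  bitmap heap scan on "test": lossy; 123 exact pages, 456 lossy pages;
--       work_mem of 2048kB would have kept the bitmap exact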
Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center