On 2/22/15 8:32 PM, Tomas Vondra wrote:
On 23.2.2015 03:20, Jim Nasby wrote:
On 2/22/15 5:41 PM, Tomas Vondra wrote:
Otherwise, the code looks OK to me. Now, there are a few features I'd
like to have for production use (to minimize the impact):

1) no index support :-(

     I'd like to see support for more relation types (at least btree
     indexes). Are there any plans for that? Do we have an idea of how to
     compute that?

It'd be cleaner if we had an actual AM function for this, but see
below.

2) sampling just a portion of the table

     For example, being able to sample just 5% of blocks, making it less
     obtrusive, especially on huge tables. Interestingly, there's a
     TABLESAMPLE patch in this CF, so maybe it's possible to reuse some
     of the methods (e.g. the functions behind SYSTEM sampling)? (A
     sketch of this, combined with the prefetch from (4), follows the
     list.)

3) throttling

     Another feature minimizing the impact of running this in production
     might be some sort of throttling, e.g. saying 'limit the scan to
     4 MB/s' or something along those lines.

4) prefetch

     fbstat_heap uses the visibility map to skip fully-visible pages,
     which is nice, but if we skip too many pages it breaks readahead,
     similar to a bitmap heap scan. I believe this is another place where
     effective_io_concurrency (i.e. prefetch) would be appropriate.
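
For concreteness, here is a rough sketch of what (2) and (4) might look
like in the block loop. sample_pct, sample_this_block() and
stat_one_page() are invented names for the sake of the sketch;
visibilitymap_test(), PrefetchBuffer(), RelationGetNumberOfBlocks() and
the effective_io_concurrency GUC are the real backend pieces:

    /*
     * Deterministic per-block "coin flip" so the prefetch pass and the
     * read pass make the same decision; the multiplicative hash is just
     * a stand-in for a proper sampler.
     */
    static bool
    sample_this_block(BlockNumber blkno, int sample_pct)
    {
        return ((blkno * 2654435761u) >> 16) % 100 < (uint32) sample_pct;
    }

    BlockNumber nblocks = RelationGetNumberOfBlocks(rel);
    BlockNumber prefetch_blkno = 0;
    BlockNumber blkno;
    Buffer      vmbuffer = InvalidBuffer;

    for (blkno = 0; blkno < nblocks; blkno++)
    {
        /*
         * (4) Keep prefetch requests running effective_io_concurrency
         * blocks ahead of the scan, but only for pages we will actually
         * read.
         */
        while (prefetch_blkno < nblocks &&
               prefetch_blkno < blkno + effective_io_concurrency)
        {
            if (sample_this_block(prefetch_blkno, sample_pct) &&
                !visibilitymap_test(rel, prefetch_blkno, &vmbuffer))
                PrefetchBuffer(rel, MAIN_FORKNUM, prefetch_blkno);
            prefetch_blkno++;
        }

        /* (2) block-level sampling, in the spirit of TABLESAMPLE SYSTEM */
        if (!sample_this_block(blkno, sample_pct))
            continue;

        /* skip all-visible pages, as fbstat_heap already does */
        if (visibilitymap_test(rel, blkno, &vmbuffer))
            continue;

        stat_one_page(rel, blkno);      /* existing per-page bloat logic */
    }

    if (BufferIsValid(vmbuffer))
        ReleaseBuffer(vmbuffer);

Prefetching only the pages the scan will actually read is the same trick
bitmap heap scans use to keep readahead alive.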

All of those wishes are solved in one way or another by vacuum and/or
analyze. If we had a hook in the tuple scanning loop and at the end of
vacuum, you could just piggy-back on that. But really, all we'd need for
vacuum to be able to report this info is one more field in LVRelStats, a
call to GetRecordedFreeSpace for all-visible pages, and some logic to
deal with pages skipped because we couldn't get the vacuum lock.
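
Roughly like this (the free_space field is hypothetical, not in today's
LVRelStats; GetRecordedFreeSpace() and PageGetHeapFreeSpace() are the
real calls, and the variable names match 9.4-era vacuumlazy.c):

    /* hypothetical extra field in LVRelStats */
    double      free_space;     /* free bytes seen by this scan */

    /* in lazy_scan_heap(), where all-visible pages are skipped: */
    if (skipping_all_visible_blocks && !scan_all)
    {
        /* we never read the page, so trust the free space map instead */
        vacrelstats->free_space += GetRecordedFreeSpace(onerel, blkno);
        continue;
    }

    /* ... and for each page we do read: */
    vacrelstats->free_space += PageGetHeapFreeSpace(page);

Pages skipped because ConditionalLockBufferForCleanup() failed would
need similar free-space-map treatment.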

Should we just add this to vacuum instead?

Possibly. I think the ultimate goal is to be able to get this info
easily and without disrupting the system performance too much (which is
difficult without sampling/throttling). If we can stuff that into
autovacuum reasonably, and then get the info from catalogs, I'm OK with
that.

Doing the counting in vacuum/analyze (auto or not) is quite easy, and it would happen at the same time we're doing useful work. We would automatically get the benefit of the throttling and sampling work that those routines already do.
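
For reference, the throttling in question is the cost-based delay
machinery: each buffer access adds vacuum_cost_page_hit/miss/dirty to a
running balance, and the scan calls vacuum_delay_point() once per page.
Slightly simplified from vacuum.c (the real function also updates the
autovacuum worker balance):

    void
    vacuum_delay_point(void)
    {
        CHECK_FOR_INTERRUPTS();

        /* nap once the accumulated I/O cost crosses the limit */
        if (VacuumCostActive && !InterruptPending &&
            VacuumCostBalance >= VacuumCostLimit)
        {
            int     msec;

            msec = VacuumCostDelay * VacuumCostBalance / VacuumCostLimit;
            if (msec > VacuumCostDelay * 4)
                msec = VacuumCostDelay * 4;

            pg_usleep(msec * 1000L);
            VacuumCostBalance = 0;
        }
    }

Anything that walks the heap inside vacuum/analyze gets this for free,
so the bloat counting would inherit the 'limit the scan to X MB/s'
behavior via the existing cost GUCs.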

However, I'm not sure putting this into autovacuum is actually possible,
because how do you merge data from multiple partial runs (when each of
them skipped different pages)?

ISTM that's just a form of sampling, no?
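
i.e., if each partial run records how many pages it actually inspected
and how much free space it found on them, merging is just pooling the
samples and extrapolating to the current relation size. A
back-of-the-envelope version (all names hypothetical, and it assumes the
runs inspected independently chosen pages):

    typedef struct BloatSample
    {
        uint64      pages_seen;     /* pages actually inspected */
        uint64      free_bytes;     /* free space found on them */
    } BloatSample;

    static double
    merged_free_space_estimate(BloatSample a, BloatSample b,
                               BlockNumber rel_pages)
    {
        uint64      pages = a.pages_seen + b.pages_seen;
        uint64      freeb = a.free_bytes + b.free_bytes;

        /* scale mean free space per sampled page up to the whole table */
        return pages == 0 ? 0.0 : (double) freeb / pages * rel_pages;
    }

The same weighting extends to any number of runs; the shaky part is
staleness, since older runs describe pages that may have changed since.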

Besides, we don't need the same lock for figuring out bloat. We could still measure bloat even if we can't vacuum the page, but I think that's overkill. If we're skipping enough pages to mess with the bloat measurement, then we most likely need to teach vacuum how to revisit pages.

Also, autovacuum is not the only place
where we free space - we'd have to handle HOT for example, I guess.

I wasn't thinking about trying to keep live bloat statistics, so HOT wouldn't affect this.
--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com

