>I'd rather see us implement a buffer replacement policy that considers
>both frequency + recency (unlike LRU, which considers only recency).
>Ideally, that would work "automagically". I'm hoping to get a chance to
>implement ARC[1] during the 7.5 cycle.
Actually I've already done some work back i
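To make the "frequency + recency" idea concrete, here is a much-simplified, standalone C sketch of the two-queue bookkeeping that an ARC-style policy builds on. It is not the actual patch and omits what makes real ARC adaptive (the two ghost lists of recently evicted pages and the self-tuning split between the queues); every name in it (MiniArc, arc_reference, and so on) is invented for illustration. It only shows the core distinction LRU lacks: pages seen once sit in a recency list, re-referenced pages are promoted to a frequency list, and eviction prefers the recency list, so a one-pass scan cannot flush the hot working set.

/*
 * Simplified illustration only -- NOT the real ARC algorithm or patch.
 * Real ARC also keeps ghost lists of evicted page numbers and adapts
 * the split between T1 and T2; this sketch keeps just the core idea:
 * first-time pages go to a recency list (T1), re-referenced pages are
 * promoted to a frequency list (T2), and eviction prefers T1.
 */
#include <stdio.h>

#define NFRAMES 4

typedef struct {
    int t1[NFRAMES]; int n1;   /* seen once: recency list, MRU at index 0 */
    int t2[NFRAMES]; int n2;   /* seen more than once: frequency list     */
} MiniArc;

static int list_remove(int *list, int *n, int page)
{
    for (int i = 0; i < *n; i++)
        if (list[i] == page) {
            for (int j = i; j < *n - 1; j++) list[j] = list[j + 1];
            (*n)--;
            return 1;
        }
    return 0;
}

static void list_push_mru(int *list, int *n, int page)
{
    for (int j = *n; j > 0; j--) list[j] = list[j - 1];
    list[0] = page;
    (*n)++;
}

/* Reference a page; returns the evicted page number, or -1 if none. */
static int arc_reference(MiniArc *c, int page)
{
    int victim = -1;

    if (list_remove(c->t2, &c->n2, page) || list_remove(c->t1, &c->n1, page)) {
        /* Hit: any re-referenced page counts as "frequent" -> T2. */
        list_push_mru(c->t2, &c->n2, page);
        return -1;
    }
    /* Miss: evict from the recency list first, then the frequency list. */
    if (c->n1 + c->n2 >= NFRAMES) {
        if (c->n1 > 0) victim = c->t1[--c->n1];
        else           victim = c->t2[--c->n2];
    }
    list_push_mru(c->t1, &c->n1, page);
    return victim;
}

int main(void)
{
    MiniArc c = {{0}, 0, {0}, 0};
    int trace[] = {1, 2, 1, 3, 4, 5, 1};   /* page 1 is the "hot" page */

    for (unsigned i = 0; i < sizeof trace / sizeof trace[0]; i++)
        printf("ref %d -> evict %d\n", trace[i], arc_reference(&c, trace[i]));
    return 0;
}

Running the trace shows the re-referenced page 1 surviving in the frequency list while single-use pages get evicted, which is exactly the property that would keep a vacuum or seqscan from wiping out the cache.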
Neil Conway wrote:
> On Mon, 2003-10-27 at 15:31, Jan Wieck wrote:
> > Well, "partial solution" isn't quite what I would call it, and it surely
> > needs integration with sequential scans. I really do expect the whole
> > hack to fall apart if some concurrent seqscans are going on
>
> I'd rather
On Mon, 2003-10-27 at 15:31, Jan Wieck wrote:
> Well, "partial solution" isn't quite what I would call it, and it surely
> needs integration with sequential scans. I really do expect the whole
> hack to fall apart if some concurrent seqscans are going on
I'd rather see us implement a buffer repl
Tom Lane wrote:
> Jan Wieck <[EMAIL PROTECTED]> writes:
> > What happens instead is that vacuum not only evicts the whole buffer
> > cache by forcing all blocks of said table and its indexes in, it also
> > dirties a substantial amount of that and leaves the dirt to be cleaned
> > up by all the other backends.
> [
Jan Wieck <[EMAIL PROTECTED]> writes:
> What happens instead is that vacuum not only evicts the whole buffer
> cache by forcing all blocks of said table and its indexes in, it also
> dirties a substantial amount of that and leaves the dirt to be cleaned
> up by all the other backends.
[ thinks
To add some medium-hard data to the discussion, I hacked a PG 7.3.4 a
little. The system I am talking about below runs an artificial
application that closely resembles the behaviour of a TPC-C benchmark
implementation. Without vacuuming the database, it can just barely sustain a
factor 5 scaled
Christopher Browne <[EMAIL PROTECTED]> writes:
> How about a "flip side" for this...
> VACUUM CACHE;
> This new operation vacuums only those pages of relations that are in
> cache.
This might conceivably be a useful behavior (modulo the problem of
fixing index entries) ... but I think we'd not wa
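For readers trying to picture the "VACUUM CACHE" proposal, here is a toy, standalone C model of sweeping only pages that are already resident, without pulling anything in from disk. All of the types and names (ToyBuffer, vacuum_cache, shared_buffers) are invented for the example and are not PostgreSQL's buffer-manager API, and the TODO comment marks exactly the index-entry problem raised above.

/*
 * Toy model of the proposed "VACUUM CACHE": sweep only pages already
 * resident in the buffer cache, never reading anything from disk.
 * Everything here is invented for illustration.
 */
#include <stdio.h>
#include <stdbool.h>

#define NBUFFERS 8

typedef struct {
    bool valid;        /* is a page loaded in this buffer slot?     */
    int  rel_id;       /* which relation the page belongs to        */
    int  block_no;     /* which block of that relation              */
    int  dead_tuples;  /* dead tuples a vacuum could reclaim        */
} ToyBuffer;

static ToyBuffer shared_buffers[NBUFFERS] = {
    {true, 1, 0, 3}, {true, 1, 7, 0}, {true, 2, 4, 5}, {true, 1, 9, 2},
};

static int vacuum_cache(int rel_id)
{
    int reclaimed = 0;

    for (int i = 0; i < NBUFFERS; i++) {
        ToyBuffer *buf = &shared_buffers[i];

        if (!buf->valid || buf->rel_id != rel_id)
            continue;               /* not resident, or some other table */

        /* TODO: the matching index entries would have to be removed
         * first; they may not be in cache at all (unsolved here). */
        reclaimed += buf->dead_tuples;
        buf->dead_tuples = 0;
    }
    return reclaimed;
}

int main(void)
{
    printf("reclaimed %d dead tuples from cached pages of rel 1\n",
           vacuum_cache(1));
    return 0;
}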
The world rejoiced as [EMAIL PROTECTED] (Tom Lane) wrote:
> The latter point is really the crux of the problem. The point of having
> the VACUUM process is to keep maintenance work out of the critical path
> of foreground queries. Anything that moves even part of that
> maintenance work into the
Greg Stark <[EMAIL PROTECTED]> writes:
> Tom Lane <[EMAIL PROTECTED]> writes:
>
> > You keep ignoring the problem of removing index entries. To vacuum an
> > individual page, you need to be willing to read in (and update) all
> > index pages that reference the tuples-to-be-deleted.
>
> Hm. I
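To see why the index-entry objection keeps coming up, here is a back-of-the-envelope comparison, with made-up but plausible numbers, of the two ways to get rid of index entries. "Retail" vacuum of a single heap page must descend each index once per dead tuple, while the bulk approach scans each index once per VACUUM of the whole table and so amortizes that pass over every heap page.

/*
 * Rough cost comparison; all numbers are illustrative assumptions,
 * not measurements.
 */
#include <stdio.h>

int main(void)
{
    const double dead_per_page = 20;      /* dead tuples on one heap page  */
    const double n_indexes     = 3;       /* indexes on the table          */
    const double btree_depth   = 3;       /* pages touched per descent     */
    const double index_pages   = 10000;   /* pages per index               */
    const double heap_pages    = 50000;   /* pages in the table            */

    /* Retail: every dead tuple costs a descent in every index. */
    double retail_per_heap_page = dead_per_page * n_indexes * btree_depth;

    /* Bulk: one full pass over each index, amortized over the heap. */
    double bulk_per_heap_page = n_indexes * index_pages / heap_pages;

    printf("index pages touched per heap page: retail %.0f vs bulk %.1f\n",
           retail_per_heap_page, bulk_per_heap_page);
    return 0;
}

Under these assumptions the retail path touches a couple of hundred index pages to clean one heap page, versus well under one page amortized for the bulk pass, which is the crux of Tom's point.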
Shridhar Daithankar <[EMAIL PROTECTED]> writes:
> If an index tuple had transaction information duplicated along with the
> heap tuple, could the two types of tuples be removed independently of
> each other?
Possibly ... but I think we have already considered and rejected that
proposal, more than once.
Tom Lane <[EMAIL PROTECTED]> writes:
> You keep ignoring the problem of removing index entries. To vacuum an
> individual page, you need to be willing to read in (and update) all
> index pages that reference the tuples-to-be-deleted.
Hm. If the visibility information were stored in the index
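As a rough illustration of what storing visibility information in the index would cost in space, here are two invented struct layouts side by side: an index entry that trusts the heap for visibility, and one that carries its own xmin/xmax. Neither is PostgreSQL's actual IndexTuple format; they exist only to show the size difference.

/*
 * Invented layouts for illustration only -- not PostgreSQL on-disk
 * formats.
 */
#include <stdio.h>
#include <stdint.h>

typedef struct {                 /* index entry that trusts the heap       */
    uint32_t heap_block;         /* heap page the tuple lives on           */
    uint16_t heap_offset;        /* line pointer on that page              */
    uint16_t key_len;            /* followed by the key bytes themselves   */
} PlainIndexEntry;

typedef struct {                 /* index entry carrying visibility itself */
    uint32_t heap_block;
    uint16_t heap_offset;
    uint16_t key_len;
    uint32_t xmin;               /* inserting transaction                  */
    uint32_t xmax;               /* deleting transaction, if any           */
} VisibleIndexEntry;

int main(void)
{
    printf("plain entry header:    %zu bytes\n", sizeof(PlainIndexEntry));
    printf("with visibility info:  %zu bytes\n", sizeof(VisibleIndexEntry));
    /* The space is only half the price: every commit, abort and update
     * would also have to find and rewrite these fields in every index,
     * which is presumably why the idea keeps getting rejected. */
    return 0;
}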
Tom Lane wrote:
> Shridhar Daithankar <[EMAIL PROTECTED]> writes:
> > I was thinking about it. How about vacuuming a page when it has been
> > pushed out of the PostgreSQL buffer cache? It is in memory, so not much
> > IO is involved.
> You keep ignoring the problem of removing index entries. To vacuum an
> individua
Shridhar Daithankar <[EMAIL PROTECTED]> writes:
> I was thinking about it. How about vacuuming a page when it has been
> pushed out of the PostgreSQL buffer cache? It is in memory, so not much
> IO is involved.
You keep ignoring the problem of removing index entries. To vacuum an
individual page, you n
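For the curious, a toy sketch of what "vacuum on eviction" might look like: just before a dirty page is written out and its buffer reused, reclaim whatever dead tuples it holds, since the page is already in memory. Everything here (ToyPage, on_evict) is invented for illustration, and the comment flags the part Tom objects to: the index entries pointing at those tuples are not dealt with at all, so the space could not really be recycled yet.

/*
 * Toy model only; not PostgreSQL's buffer manager or vacuum code.
 */
#include <stdio.h>
#include <stdbool.h>

typedef struct {
    int  block_no;
    int  live_tuples;
    int  dead_tuples;
    bool dirty;
} ToyPage;

/* Hypothetical hook the buffer manager would call on its victim page. */
static void on_evict(ToyPage *page)
{
    if (page->dead_tuples > 0) {
        /* Compact the page in place: cheap, it is already in memory.
         * Index entries for these tuples are NOT removed here, so the
         * line pointers could not actually be reused yet. */
        printf("block %d: pruning %d dead tuples before write-out\n",
               page->block_no, page->dead_tuples);
        page->dead_tuples = 0;
        page->dirty = true;
    }
    if (page->dirty)
        printf("block %d: writing back to disk\n", page->block_no);
}

int main(void)
{
    ToyPage victim = {42, 80, 20, false};
    on_evict(&victim);
    return 0;
}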
Gaetano Mendola wrote:
> Is the vacuum cost the same as that of a full table scan ( select count(*) )?
> Why not do a sort of "vacuum" when a table scan happens ( during a simple
> select that involves a full table scan, for example )?
I was thinking about it. How about vacuuming a page when it has been pushed out
of
Greg Stark wrote:
> The more I think about this vacuum i/o problem, the more I think we have it
> wrong. The added i/o from vacuum really ought not be any worse than a single
> full table scan. And there is probably the occasional query doing full table
> scans already in those systems.
> For the folks havi
Neil Conway <[EMAIL PROTECTED]> writes:
> Uh, no -- if it is the cache, we're better off fixing the buffer
> replacement policy, not trying to hack around it.
If we can. As long as we are largely depending on the kernel's buffer
cache, we may not be able to "just fix it" ...
On Fri, 2003-10-17 at 16:22, Greg Stark wrote:
> If it's just a matter of all the read i/o from vacuum then we're best off
> sleeping for a few milliseconds every few kilobytes. If it's the cache then
> we're probably better off reading a few megabytes and then sleeping for
> several seconds to all
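A minimal sketch of the first option Greg describes, sleeping briefly every few kilobytes: issue the vacuum reads in small batches and pause between batches so the scan never monopolizes the disk. The batch size and delay below are arbitrary illustration values, not settings from any patch, and vacuum_one_page is a stand-in for the real per-page work.

/*
 * Illustration only; parameters are made up.
 */
#include <stdio.h>
#include <unistd.h>          /* usleep */

#define PAGES_PER_BATCH 8            /* roughly 64 kB of 8 kB pages */
#define DELAY_USEC      20000        /* then pause for 20 ms        */

/* Stand-in for reading and processing one heap page. */
static void vacuum_one_page(int blkno)
{
    (void) blkno;            /* real work would happen here */
}

static void throttled_vacuum_scan(int nblocks)
{
    for (int blkno = 0; blkno < nblocks; blkno++) {
        vacuum_one_page(blkno);

        if ((blkno + 1) % PAGES_PER_BATCH == 0)
            usleep(DELAY_USEC);  /* yield the disk to foreground queries */
    }
}

int main(void)
{
    throttled_vacuum_scan(64);
    printf("done: 64 pages scanned in throttled batches\n");
    return 0;
}

The second option in the quote (read a few megabytes, then sleep for several seconds) would just use a much larger batch and a much longer delay in the same loop; which one helps depends on whether the pain is raw read bandwidth or cache eviction.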
The more I think about this vacuum i/o problem, the more I think we have it
wrong. The added i/o from vacuum really ought not be any worse than a single
full table scan. And there is probably the occasional query doing full table
scans already in those systems.
For the folks having this issue, i