Hannu Krosing <[EMAIL PROTECTED]> writes:
> If you mean the full cycle, then it is probably not worth it, as even a
> single 'clean index' pass can take hours on larger tables.
The patch Heikki is working on will probably alleviate that problem,
because it will allow vacuum to scan the indexes in
On Fri, 2006-03-03 at 11:37, Tom Lane wrote:
> Alvaro Herrera <[EMAIL PROTECTED]> writes:
> > So for you it would certainly help a lot to be able to vacuum the first
> > X pages of the big table, stop, release locks, create new transaction,
> > continue with the next X pages, lat
On Mon, May 01, 2006 at 10:24:50PM +0200, Dawid Kuroczko wrote:
> VACUUM table WHERE some_col > now()-'1 hour'::interval;
>
> I.e. Let vacuum run "piggyback" on some index. This would allow
> for a quick vacuum of a fraction of a large table. Especially when
> the table is large, and only some d
On Fri, 2006-04-28 at 15:58 -0400, Bruce Momjian wrote:
> Tom Lane wrote:
> > Alvaro Herrera <[EMAIL PROTECTED]> writes:
> > > So for you it would certainly help a lot to be able to vacuum the first
> > > X pages of the big table, stop, release locks, create new transaction,
> > > continue with the
On 5/1/06, Martijn van Oosterhout wrote:
On Mon, May 01, 2006 at 01:19:30PM -0500, Jim C. Nasby wrote:
> ISTM that tying this directly to maintenance_work_mem is a bit
> confusing, since the idea is to keep vacuum transaction duration down so
> that it isn't causing dead tuples to
On Mon, May 01, 2006 at 01:19:30PM -0500, Jim C. Nasby wrote:
> ISTM that tying this directly to maintenance_work_mem is a bit
> confusing, since the idea is to keep vacuum transaction duration down so
> that it isn't causing dead tuples to build up itself. It seems like it
> would be better to hav
"Jim C. Nasby" <[EMAIL PROTECTED]> writes:
>>> Alvaro Herrera <[EMAIL PROTECTED]> writes:
So for you it would certainly help a lot to be able to vacuum the first
X pages of the big table, stop, release locks, create new transaction,
continue with the next X pages, lather, rinse, repe
On Fri, Apr 28, 2006 at 03:58:16PM -0400, Bruce Momjian wrote:
> Tom Lane wrote:
> > Alvaro Herrera <[EMAIL PROTECTED]> writes:
> > > So for you it would certainly help a lot to be able to vacuum the first
> > > X pages of the big table, stop, release locks, create new transaction,
> > > continue w
Tom Lane wrote:
> Alvaro Herrera <[EMAIL PROTECTED]> writes:
> > So for you it would certainly help a lot to be able to vacuum the first
> > X pages of the big table, stop, release locks, create new transaction,
> > continue with the next X pages, lather, rinse, repeat.
>
> > This is perfectly doa
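The batching idea quoted above — vacuum X pages, release locks, start a fresh transaction, repeat — can be sketched in miniature. This is a hypothetical Python simulation of the control flow only; `BATCH_PAGES`, the function name, and the commit points are invented for illustration, and none of this is PostgreSQL source:

```python
BATCH_PAGES = 1000  # the "X pages" of the discussion; value is illustrative

def batched_vacuum(table_pages, is_dead, batch_pages=BATCH_PAGES):
    """Simulate vacuuming a heap of `table_pages` pages in independent batches.

    Each batch would run in its own transaction: scan X heap pages, remember
    dead tuples, clean the indexes, clean the heap, then commit -- releasing
    locks and letting the global xmin horizon advance between batches.
    Returns (batches_run, dead_tuples_removed).
    """
    batches = removed = 0
    for start in range(0, table_pages, batch_pages):
        end = min(start + batch_pages, table_pages)
        # BEGIN: scan pages [start, end) and collect dead tuples
        dead = [p for p in range(start, end) if is_dead(p)]
        # ... clean index entries for `dead`, then mark heap slots free ...
        removed += len(dead)
        batches += 1
        # COMMIT: locks released; the next batch starts a fresh transaction
    return batches, removed

# Example: a 3500-page table where every 7th page carries a dead tuple
print(batched_vacuum(3500, lambda p: p % 7 == 0))  # (4, 500)
```

The point of committing between batches is exactly the one made in the thread: a long-running single-transaction vacuum both holds locks and prevents other tables' dead tuples from being reclaimed.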
On Mon, 2006-03-13 at 17:38 +0900, ITAGAKI Takahiro wrote:
> Simon Riggs <[EMAIL PROTECTED]> wrote:
>
> > > "Zeugswetter Andreas DCP SD" <[EMAIL PROTECTED]> wrote:
> > > > Ok, we cannot reuse a dead tuple. Maybe we can reuse the space of a dead
> > > > tuple by reducing the tuple to its header in
Simon Riggs <[EMAIL PROTECTED]> wrote:
> > "Zeugswetter Andreas DCP SD" <[EMAIL PROTECTED]> wrote:
> > > Ok, we cannot reuse a dead tuple. Maybe we can reuse the space of a dead
> > > tuple by reducing the tuple to its header info.
> >
> > Attached patch realizes the concept of his idea. The dea
"Zeugswetter Andreas DCP SD" <[EMAIL PROTECTED]> wrote:
> Ok, we cannot reuse a dead tuple. Maybe we can reuse the space of a dead
> tuple by reducing the tuple to its header info.
I was just working on your idea. In my work, bgwriter truncates
dead tuples and leaves only their headers. I'll
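The payoff of the truncate-to-header idea is easy to ballpark. A back-of-the-envelope Python sketch; the 24-byte header is an assumed round figure (the real HeapTupleHeader size depends on server version and alignment), and `reclaimable` is an invented helper:

```python
HEADER_BYTES = 24  # assumed; actual HeapTupleHeader size varies by version/alignment

def reclaimable(tuple_len):
    """Bytes freed on a page if a dead tuple is cut down to just its header.

    The header must survive because the tuple may sit in an update chain:
    t_ctid and the infomask bits are still meaningful even in a dead tuple.
    """
    return max(tuple_len - HEADER_BYTES, 0)

print(reclaimable(200))  # a 200-byte dead tuple frees 176 bytes
print(reclaimable(16))   # nothing to reclaim from a tuple smaller than a header
```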
Zeugswetter Andreas DCP SD wrote:
>
> > > But you could do the indexes first and remember how far you
> > can vacuum
> > > the heap later.
> >
> > But the indexes _can't_ be done first; you _first_ need to
> > know which tuples are dead, which requires looking at the
> > table itself.
>
> If
> > But you could do the indexes first and remember how far you
> can vacuum
> > the heap later.
>
> But the indexes _can't_ be done first; you _first_ need to
> know which tuples are dead, which requires looking at the
> table itself.
If we already had the "all tuples visible" bitmap I thin
Jim C. Nasby wrote:
... how many pages per bit ...
Are we trying to set up a complex solution to a problem
that'll be mostly moot once partitioning is easier and
partitioned tables are common?
In many cases I can think of, the bulk of the data would be in
old partitions that are practically ne
Centuries ago, Nostradamus foresaw when [EMAIL PROTECTED] ("Zeugswetter Andreas
DCP SD") would write:
>> > But what about index clearing? When do you scan each index?
>>
>> At the end of each iteration (or earlier, depending on
>> maintenance_work_mem). So for each iteration you would need
>>
On Fri, Mar 03, 2006 at 11:37:00AM -0500, Tom Lane wrote:
> Bruce and I were discussing this the other day; it'd be pretty easy to
> make plain VACUUM start a fresh transaction immediately after it
> finishes a scan heap/clean indexes/clean heap cycle. The infrastructure
> for this (in particular,
On Fri, Mar 03, 2006 at 04:14:41PM +0100, Csaba Nagy wrote:
> > Eww. How expensive is scanning an index compared to the heap? Does
> > anyone have figures on that in terms of I/O and time?
>
> See this post for an example:
> http://archives.postgresql.org/pgsql-performance/2006-02/msg00416.php
>
On Thu, Mar 02, 2006 at 03:19:46PM +0100, Martijn van Oosterhout wrote:
> Note, for this purpose you don't need to keep a bit per page. The
> OS I/O system will load 64k+ (8+ pages) in one go so one bit per 8
> pages would be sufficient.
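At one bit per 8 heap pages the map stays tiny. A hypothetical sketch of such a coarse bitmap (class and method names invented; not PostgreSQL code):

```python
PAGES_PER_BIT = 8  # per the suggestion above: OS readahead fetches ~8 pages anyway

class MaybeDirtyMap:
    """One bit per PAGES_PER_BIT heap pages; a set bit means some page in
    that group may hold dead tuples and is worth visiting during vacuum."""

    def __init__(self, n_pages):
        n_bits = (n_pages + PAGES_PER_BIT - 1) // PAGES_PER_BIT
        self.bits = bytearray((n_bits + 7) // 8)

    def mark_dirty(self, page):
        bit = page // PAGES_PER_BIT
        self.bits[bit // 8] |= 1 << (bit % 8)

    def group_is_dirty(self, page):
        bit = page // PAGES_PER_BIT
        return bool(self.bits[bit // 8] & (1 << (bit % 8)))

m = MaybeDirtyMap(1_000_000)      # 1M 8kB pages (an 8 GB table) -> ~15 kB map
m.mark_dirty(123_456)
print(m.group_is_dirty(123_458))  # True: same 8-page group
print(m.group_is_dirty(123_464))  # False: next group over
```

The coarse granularity trades a little extra page reading for an 8x smaller map, which matters if the map is to be kept pinned in shared memory.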
AFAIK that's entirely dependent on the filesystem and how it
On Thu, Mar 02, 2006 at 10:05:28AM -0500, Tom Lane wrote:
> Hannu Krosing <[EMAIL PROTECTED]> writes:
> > On Thu, 2006-03-02 at 09:53, Zeugswetter Andreas DCP SD wrote:
> >> Ok, we cannot reuse a dead tuple. Maybe we can reuse the space of a dead
> >> tuple by reducing the
Tom Lane wrote:
"Matthew T. O'Connor" writes:
That patch is a step forward if it's deemed OK by the powers that be.
However, autovacuum would still need to be taught to handle simultaneous
vacuums. I suppose that in the interim, you could disable autovacuum
for the problematic queue tabl
"Matthew T. O'Connor" writes:
> That patch is a step forward if it's deemed OK by the powers that be.
> However, autovacuum would still need to be taught to handle simultaneous
> vacuums. I suppose that in the interim, you could disable autovacuum
> for the problematic queue table and have cr
Csaba Nagy wrote:
So he rather needs Hannu Krosing's patch for simultaneous vacuum ...
Well, I guess that would be a good solution to the "queue table"
problem. The problem is that I can't deploy that patch on our production
systems without being fairly sure it won't corrupt any data... an
> > I got the impression that Csaba is looking more for "multiple
> > simultaneous vacuum" more than the partial vacuum.
>
> So he rather needs Hannu Krosing's patch for simultaneous vacuum ...
Well, I guess that would be a good solution to the "queue table"
problem. The problem is that I can't
Matthew T. O'Connor wrote:
> Alvaro Herrera wrote:
> >Csaba Nagy wrote:
> >
> >>Now when the queue tables get 1000 times dead space compared to their
> >>normal size, I get performance problems. So tweaking vacuum cost delay
> >>doesn't buy me anything, as not vacuum per se is the performance
> >
Alvaro Herrera wrote:
Csaba Nagy wrote:
Now when the queue tables get 1000 times dead space compared to their
normal size, I get performance problems. So tweaking vacuum cost delay
doesn't buy me anything: it's not vacuum per se that's the performance
problem, it's the long run time on big tables.
Tom Lane wrote:
> Alvaro Herrera <[EMAIL PROTECTED]> writes:
> > So for you it would certainly help a lot to be able to vacuum the first
> > X pages of the big table, stop, release locks, create new transaction,
> > continue with the next X pages, lather, rinse, repeat.
>
> > This is perfectly doa
Alvaro Herrera <[EMAIL PROTECTED]> writes:
> So for you it would certainly help a lot to be able to vacuum the first
> X pages of the big table, stop, release locks, create new transaction,
> continue with the next X pages, lather, rinse, repeat.
> This is perfectly doable, it only needs enough mo
Zeugswetter Andreas DCP SD wrote:
>
> > > But what about index clearing? When do you scan each index?
> >
> > At the end of each iteration (or earlier, depending on
> > maintenance_work_mem). So for each iteration you would need
> > to scan the indexes.
> >
> > Maybe we could make maintenanc
> > But what about index clearing? When do you scan each index?
>
> At the end of each iteration (or earlier, depending on
> maintenance_work_mem). So for each iteration you would need
> to scan the indexes.
>
> Maybe we could make maintenance_work_mem be the deciding
> factor; after scanni
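The trade-off hinted at above can be quantified: vacuum remembers each dead tuple as a 6-byte TID, so the memory budget fixes how many full index-scan passes are needed. An illustrative Python sketch (the 16 MB setting is just an example value, not a default):

```python
MAINTENANCE_WORK_MEM = 16 * 1024 * 1024  # bytes; example setting, not a default
TID_BYTES = 6                            # a heap tuple id (block, offset) is 6 bytes

def index_clean_passes(n_dead_tuples, mem=MAINTENANCE_WORK_MEM):
    """How many times vacuum must scan every index if it flushes its
    dead-TID list whenever the list fills the memory budget."""
    tids_per_pass = mem // TID_BYTES
    return -(-n_dead_tuples // tids_per_pass)  # ceiling division

# 10 million dead tuples against a 16 MB budget -> 4 passes over each index
print(index_clean_passes(10_000_000))  # 4
```

This is why the thread keeps coming back to index-scan cost: every extra pass is a full sequential read of every index on the table.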
> Eww. How expensive is scanning an index compared to the heap? Does
> anyone have figures on that in terms of I/O and time?
See this post for an example:
http://archives.postgresql.org/pgsql-performance/2006-02/msg00416.php
For my 200 million row table, scanning the pk index took ~ 4 hours. And the
Alvaro Herrera wrote:
> Bruce Momjian wrote:
> > Alvaro Herrera wrote:
> > > Csaba Nagy wrote:
> > >
> > > > Now when the queue tables get 1000 times dead space compared to their
> > > > normal size, I get performance problems. So tweaking vacuum cost delay
> > > > doesn't buy me anything, as not
On Fri, Mar 03, 2006 at 11:40:40AM -0300, Alvaro Herrera wrote:
> Csaba Nagy wrote:
>
> > Now when the queue tables get 1000 times dead space compared to their
> > normal size, I get performance problems. So tweaking vacuum cost delay
> > doesn't buy me anything, as not vacuum per se is the perfor
Bruce Momjian wrote:
> Alvaro Herrera wrote:
> > Csaba Nagy wrote:
> >
> > > Now when the queue tables get 1000 times dead space compared to their
> > > normal size, I get performance problems. So tweaking vacuum cost delay
> > > doesn't buy me anything, as not vacuum per se is the performance
> >
Alvaro Herrera wrote:
> Csaba Nagy wrote:
>
> > Now when the queue tables get 1000 times dead space compared to their
> > normal size, I get performance problems. So tweaking vacuum cost delay
> > doesn't buy me anything, as not vacuum per se is the performance
> > problem, it's long run time for
Csaba Nagy wrote:
> Now when the queue tables get 1000 times dead space compared to their
> normal size, I get performance problems. So tweaking vacuum cost delay
> doesn't buy me anything: it's not vacuum per se that's the performance
> problem, it's the long run time on big tables.
So for you it woul
> Are you running 8.1? If so, you can use autovacuum and set per table
> thresholds (read vacuum aggressivly) and per table cost delay settings
> so that the performance impact is minimal. If you have tried 8.1
> autovacuum and found it unhelpful, I would be curious to find out why.
Yes, I'm
Csaba Nagy wrote
From my POV, there must be a way to speed up vacuums on huge tables and
small percentage of to-be-vacuumed tuples... a 200 million row table
with frequent updates of the _same_ record is causing me some pain right
now. I would like to have that table vacuumed as often as possibl
"Zeugswetter Andreas DCP SD" <[EMAIL PROTECTED]> writes:
> Why do we not truncate the line pointer array?
> Is it that vacuum (not the "full" version) does not move
> rows to other pages or slots? Of course vacuum full could do it,
> but I see your point.
We can't reassign tuple TIDs safely ex
> I think you must keep the header because the tuple might be
> part of an update chain (cf vacuuming bugs we repaired just a
> few months ago).
> t_ctid is potentially interesting data even in a certainly-dead tuple.
yes, I'd still want to keep the full header.
> Andreas' idea is possibly doa
Csaba Nagy wrote:
> > What bothers me about the TODO item is that if we have to sequentially
> > scan indexes, are we really gaining much by not having to sequentially
> > scan the heap? If the heap is large enough to gain from a bitmap, the
> > index is going to be large too. Is disabling per-in
> What bothers me about the TODO item is that if we have to sequentially
> scan indexes, are we really gaining much by not having to sequentially
> scan the heap? If the heap is large enough to gain from a bitmap, the
> index is going to be large too. Is disabling per-index cleanout for
> express
Christopher Browne wrote:
> What is unclear to me in the discussion is whether or not this is
> invalidating the item on the TODO list...
>
> ---
> Create a bitmap of pages that need vacuuming
>
> Instead of sequentially scanning the entire table, have the background
> writer or s
Tom Lane wrote:
> Christopher Browne <[EMAIL PROTECTED]> writes:
> > What is unclear to me in the discussion is whether or not this is
> > invalidating the item on the TODO list...
>
> No, I don't think any of this is an argument against the
> dirty-page-bitmap idea. The amount of foreground effo
Christopher Browne <[EMAIL PROTECTED]> writes:
> What is unclear to me in the discussion is whether or not this is
> invalidating the item on the TODO list...
No, I don't think any of this is an argument against the
dirty-page-bitmap idea. The amount of foreground effort needed to set a
dirty-pag
Bernd Helmle <[EMAIL PROTECTED]> writes:
> But couldn't such an opportunistic approach be used for
> another lightweight VACUUM mode in such a way, that VACUUM could
> look at a special "Hot Spot" queue, which represents potential
> candidates for freeing?
The proposed dirty-page bit map seems
Hannu Krosing <[EMAIL PROTECTED]> writes:
> On Thu, 2006-03-02 at 09:53, Zeugswetter Andreas DCP SD wrote:
>> Ok, we cannot reuse a dead tuple. Maybe we can reuse the space of a dead
>> tuple by reducing the tuple to its header info.
> I don't even think you need the header
On Thu, Mar 02, 2006 at 08:33:46AM -0500, Christopher Browne wrote:
> What is unclear to me in the discussion is whether or not this is
> invalidating the item on the TODO list...
>
> ---
> Create a bitmap of pages that need vacuuming
I think this is doable, and not invalidated
Centuries ago, Nostradamus foresaw when [EMAIL PROTECTED] (Tom Lane) would
write:
> I thought we had sufficiently destroyed that "reuse a tuple" meme
> yesterday. You can't do that: there are too many aspects of the system
> design that are predicated on the assumption that dead tuples do not
> c
On Wed, Mar 01, 2006 at 12:41:01PM -0500, Tom Lane wrote:
> Peter Eisentraut <[EMAIL PROTECTED]> writes:
> > Tom Lane wrote:
> >> How does an optimistic FSM entry avoid the need to run vacuum?
>
> > It ensures that all freed tuples are already in the FSM.
>
> That has nothing to do with it, becau
[sorry to everyone if that mail arrives multiple times, but I had
some odd problems with my mail gateway yesterday...]
On Wed, Mar 01, 2006 at 12:41:01PM -0500, Tom Lane wrote:
> Peter Eisentraut <[EMAIL PROTECTED]> writes:
> > Tom Lane wrote:
> >> How does an optimistic FSM entry avoid the need t
On Thu, 2006-03-02 at 09:53, Zeugswetter Andreas DCP SD wrote:
> > I thought we had sufficiently destroyed that "reuse a tuple"
> > meme yesterday. You can't do that: there are too many
> > aspects of the system design that are predicated on the
> > assumption that dead tuple
> I thought we had sufficiently destroyed that "reuse a tuple"
> meme yesterday. You can't do that: there are too many
> aspects of the system design that are predicated on the
> assumption that dead tuples do not come back to life. You
> have to do the full vacuuming bit (index entry remova
On Thu, Mar 02, 2006 at 01:01:21AM -0500, Tom Lane wrote:
> > Essentially, we would be folding the "find
> > dead tuples and compress page" logic, which is currently in vacuum, back
> > to insert. IMHO this is unacceptable from a performance PoV.
>
> That's the other problem: it's not apparent wh
Alvaro Herrera <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> That has nothing to do with it, because the space isn't actually free
>> for re-use until vacuum deletes the tuple.
> I think the idea is a different "free space map" of sorts, whereby a
> transaction that obsoletes a tuple puts its b
On Tue, 2006-02-28 at 19:47, Alvaro Herrera wrote:
> Hannu Krosing wrote:
>
> > Due to the current implementation of vacuum,
> > you have to abandon continuous vacuuming during vacuum of bigtable, but
> > I have written and submitted to the "patches" list a patch which allows
> > vacuum
Tom Lane wrote:
> Peter Eisentraut <[EMAIL PROTECTED]> writes:
> > Tom Lane wrote:
> >> How does an optimistic FSM entry avoid the need to run vacuum?
>
> > It ensures that all freed tuples are already in the FSM.
>
> That has nothing to do with it, because the space isn't actually free
> for re-
Peter Eisentraut <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> How does an optimistic FSM entry avoid the need to run vacuum?
> It ensures that all freed tuples are already in the FSM.
That has nothing to do with it, because the space isn't actually free
for re-use until vacuum deletes the tup
Tom Lane wrote:
> How does an optimistic FSM entry avoid the need to run vacuum?
It ensures that all freed tuples are already in the FSM.
--
Peter Eisentraut
http://developer.postgresql.org/~petere/
Tom Lane wrote:
> Peter Eisentraut <[EMAIL PROTECTED]> writes:
> > I'm not sure if I made myself clear. The idea is that you fill the
> > free-space
> > map early with opportunistic entries in the hope that most updates and
> > deletes go through "soon". That is, these entries will be invali
Peter Eisentraut <[EMAIL PROTECTED]> writes:
> I'm not sure if I made myself clear. The idea is that you fill the
> free-space
> map early with opportunistic entries in the hope that most updates and
> deletes go through "soon". That is, these entries will be invalid for a
> short time but
On Monday, 27 February 2006 at 19:42, Tom Lane wrote:
> The free-space map is not the hard part of the problem. You still have
> to VACUUM --- that is, wait until the dead tuple is not only committed
> dead but is certainly dead to all onlooker transactions, and then remove
> its index entries as we
Hannu Krosing wrote:
> Due to the current implementation of vacuum,
> you have to abandon continuous vacuuming during vacuum of bigtable, but
> I have written and submitted to the "patches" list a patch which allows
> vacuums not to block each other out; this is stalled due to Tom's
> "uneasiness" about it
On Mon, 2006-02-27 at 19:20, Peter Eisentraut wrote:
> Something came to my mind today, I'm not sure if it's feasible but I
> would like to know opinions on it.
>
> We've seen database applications that PostgreSQL simply could not manage
> because one would have to vacuum cont
Peter Eisentraut <[EMAIL PROTECTED]> writes:
> We've seen database applications that PostgreSQL simply could not manage
> because one would have to vacuum continuously. Perhaps in those
> situations one could arrange it that an update (or delete) of a row
> registers the space in the free space
Something came to my mind today, I'm not sure if it's feasible but I
would like to know opinions on it.
We've seen database applications that PostgreSQL simply could not manage
because one would have to vacuum continuously. Perhaps in those
situations one could arrange it that an update (or de