Rafael Martinez wrote:
> Heikki Linnakangas wrote:
>
>> In 8.1, CLUSTER will remove those tuples anyway, but it's actually not
>> correct.
>
> In other words, we have to be very careful not to run CLUSTER on
> a table being modified inside a transaction if we do not want to lose
> data?
Heikki Linnakangas wrote:
> Rafael Martinez wrote:
>> DETAIL: 83623 dead row versions cannot be removed yet.
>
> Looks like you have a long-running transaction in the background, so
> VACUUM can't remove all dead tuples. I didn't see that in the vacuum
> verbose outputs you sent earlier. Are there any backends in "Idle in
> transaction" state?
Scott Marlowe wrote:
On Nov 12, 2007 11:01 AM, Heikki Linnakangas <[EMAIL PROTECTED]> wrote:
> Scott Marlowe wrote:
> > So, between the first and second vacuum you had a long running
> > transaction that finally ended and let you clean up the dead rows.
>
> No, before 8.3, CLUSTER throws away non-removable dead tuples.
On Nov 12, 2007 11:01 AM, Heikki Linnakangas <[EMAIL PROTECTED]> wrote:
>
> Scott Marlowe wrote:
> > So, between the first and second vacuum you had a long running
> > transaction that finally ended and let you clean up the dead rows.
>
> No, before 8.3, CLUSTER throws away non-removable dead tuples.
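A minimal sketch of that hazard, assuming a hypothetical table t with an
index t_pkey on a pre-8.3 server (pre-8.3 CLUSTER syntax):

    -- session 1: take a snapshot, but do not touch t yet
    BEGIN;
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    SELECT 1;

    -- session 2: create dead row versions, then repack the table
    UPDATE t SET val = val + 1;   -- old versions are dead but still
                                  -- visible to session 1's snapshot
    CLUSTER t_pkey ON t;          -- pre-8.3: discards those versions

    -- session 1: the row versions its snapshot needed are gone,
    -- so it can find no visible rows at all
    SELECT * FROM t;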
Scott Marlowe wrote:
On Nov 12, 2007 10:11 AM, Rafael Martinez <[EMAIL PROTECTED]> wrote:
> Sending this just in case it can help
>
> Checking all the log files from these vacuum jobs we have been running,
> we found one that looks different from the rest, especially in the
> amount of removed pages.
In response to Heikki Linnakangas <[EMAIL PROTECTED]>:
> Rafael Martinez wrote:
> > DETAIL: 83623 dead row versions cannot be removed yet.
>
> Looks like you have a long-running transaction in the background, so
> VACUUM can't remove all dead tuples. I didn't see that in the vacuum
> verbose outputs you sent earlier.
On Nov 12, 2007 10:11 AM, Rafael Martinez <[EMAIL PROTECTED]> wrote:
> Sending this just in case it can help
>
> Checking all the log files from these vacuum jobs we have been running,
> we found one that looks different from the rest, especially in the
> amount of removed pages.
>
> We are s
Rafael Martinez wrote:
> DETAIL: 83623 dead row versions cannot be removed yet.
Looks like you have a long-running transaction in the background, so
VACUUM can't remove all dead tuples. I didn't see that in the vacuum
verbose outputs you sent earlier. Are there any backends in "Idle in
transaction" state?
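One way to check for those, assuming a pre-9.2 server like the ones
discussed here (pg_stat_activity still has procpid and current_query
there; on 8.1 stats_command_string must be enabled for the query text
to show up):

    -- list backends sitting idle inside an open transaction
    SELECT procpid, usename, current_query
    FROM pg_stat_activity
    WHERE current_query = '<IDLE> in transaction';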
Rafael Martinez wrote:
>
> We have more information about this 'problem'.
>
Sending this just in case it can help
Checking all the log files from these vacuum jobs we have been running,
we found one that looks different from the rest, especially in the
amount of removed pages.
We are send
Tom Lane wrote:
> Rafael Martinez <[EMAIL PROTECTED]> writes:
>> Heikki Linnakangas wrote:
>>> On a small table like that you could run VACUUM every few minutes
>>> without much impact on performance. That should keep the table size in
>>> check.
>
>> Ok, we run VACUUM ANALYZE only once a day, every night.
[EMAIL PROTECTED] (Rafael Martinez) writes:
> Heikki Linnakangas wrote:
>> On a small table like that you could run VACUUM every few minutes
>> without much impact on performance. That should keep the table size in
>> check.
>>
>
> Ok, we run VACUUM ANALYZE only once a day, every night. But we
Rafael Martinez <[EMAIL PROTECTED]> writes:
> Heikki Linnakangas wrote:
>> On a small table like that you could run VACUUM every few minutes
>> without much impact on performance. That should keep the table size in
>> check.
> Ok, we run VACUUM ANALYZE only once a day, every night.
There's your problem.
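Acting on the "VACUUM every few minutes" suggestion could be as simple
as a cron entry; a sketch, assuming a hypothetical database mydb and
table mytable (on recent releases autovacuum can do this instead):

    # vacuum and analyze the hot table every five minutes
    */5 * * * *  vacuumdb --quiet -d mydb -t mytable -z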
Heikki Linnakangas wrote:
>
> If the table is already bloated, a VACUUM won't usually shrink it. It
> only makes the space available for reuse, but a sequential scan still
> needs to go through a lot of pages.
>
> CLUSTER on the other hand repacks the tuples and gets rid of all the
> unused space.
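A quick way to see that effect, assuming a hypothetical table mytable
with an index mytable_pkey (pre-8.3 CLUSTER syntax; note that CLUSTER
locks the table exclusively while it runs):

    -- on-disk size before repacking
    SELECT pg_size_pretty(pg_relation_size('mytable'));

    -- rewrite the table in index order, dropping the unused space
    CLUSTER mytable_pkey ON mytable;

    -- size after repacking
    SELECT pg_size_pretty(pg_relation_size('mytable'));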
In response to Rafael Martinez <[EMAIL PROTECTED]>:
> Heikki Linnakangas wrote:
> > Rafael Martinez wrote:
>
> >> The tables with this 'problem' are not big, so CLUSTER finishes very fast
> >> and it does not have an impact on access because of locking. But we
> >> wonder why this happens.
Rafael Martinez wrote:
Heikki Linnakangas wrote:
> On a small table like that you could run VACUUM every few minutes
> without much impact on performance. That should keep the table size in
> check.
Ok, we run VACUUM ANALYZE only once a day, every night. But we would
expect the performance to get
Alvaro Herrera wrote:
> Rafael Martinez wrote:
>> The 'problem' is that performance decreases during the day and the only
>> thing that helps is to run CLUSTER on the table with problems. VACUUM
>> ANALYZE does not help.
>
> Probably because all the live tuples are clustered at the end of the
> table
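A rough way to quantify that, assuming reasonably fresh statistics
(relpages and reltuples are updated by VACUUM and ANALYZE; the table
name is hypothetical):

    -- many pages holding few live rows suggests exactly this bloat
    SELECT relpages, reltuples
    FROM pg_class
    WHERE relname = 'mytable';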
Rafael Martinez wrote:
> Hello
>
> This is a question about something we have seen occasionally over the
> last few months. It happens with tables that get a large number of
> updates/selects compared with inserts/deletes. These tables are small,
> both in size and in number of rows.
>
> The 'problem' is that performance decreases during the day and the only
> thing that helps is to run CLUSTER on the table with problems.
Heikki Linnakangas wrote:
> Rafael Martinez wrote:
>> The tables with this 'problem' are not big, so CLUSTER finishes very fast
>> and it does not have an impact on access because of locking. But we
>> wonder why this happens.
>
> 2 seconds for seq scanning 12 MB worth of data sounds like a lot.
Performance problems with heavily modified tables (UPDATE or DELETE) are
usually caused by not vacuuming. There are two main modes VACUUM can
run in (plain or full), and the latter works in a much more aggressive
way (exclusive locking, etc). Try to run VACUUM FULL VERBOSE on the
table and s
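For reference, the two modes on a hypothetical table (plain VACUUM only
marks dead space as reusable; FULL compacts the table but holds an
exclusive lock for the duration):

    -- plain mode: no exclusive lock, space is made reusable
    VACUUM VERBOSE mytable;

    -- full mode: the table is compacted, exclusive lock while it runs
    VACUUM FULL VERBOSE mytable;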
Rafael Martinez wrote:
This is a question about something we have seen occasionally over the
last few months. It happens with tables that get a large number of
updates/selects compared with inserts/deletes. These tables are small,
both in size and in number of rows.
The 'problem' is that performance decreases during the day and the only
thing that helps is to run CLUSTER on the table with problems.
Hello
This is a question about something we have seen occasionally over the
last few months. It happens with tables that get a large number of
updates/selects compared with inserts/deletes. These tables are small,
both in size and in number of rows.
The 'problem' is that performance decreases during the day and the only
thing that helps is to run CLUSTER on the table with problems. VACUUM
ANALYZE does not help.
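A simple way to watch the symptom described here, assuming a
hypothetical table name (run it a few times over the day; a size that
keeps growing while the row count stays flat means dead space is
accumulating):

    SELECT count(*) AS live_rows,
           pg_size_pretty(pg_relation_size('mytable')) AS size
    FROM mytable;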