On Fri, Feb 13, 2009 at 7:24 PM, Scott Marlowe wrote:
> On Fri, Feb 13, 2009 at 7:02 PM, Tom Lane wrote:
>> Scott Marlowe writes:
>>> On Fri, Feb 13, 2009 at 5:02 PM, Michael Monnerie wrote:
>>>> vacuum_cost_delay = 0
>>>> That was the trick for me. It was set to 250(ms), where it took 5 hours
>>>> for a vacuum to run. Now it takes 5-15 minutes.
[...]
On Fri, Feb 13, 2009 at 7:02 PM, Tom Lane wrote:
> Scott Marlowe writes:
>> On Fri, Feb 13, 2009 at 5:02 PM, Michael Monnerie wrote:
>>> vacuum_cost_delay = 0
>>> That was the trick for me. It was set to 250(ms), where it took 5 hours
>>> for a vacuum to run. Now it takes 5-15 minutes.
>
>> Wow!!! 250 ms is HUGE in the scheme of vacuum cost delay.
[...]
Scott Marlowe writes:
> On Fri, Feb 13, 2009 at 5:02 PM, Michael Monnerie wrote:
>> vacuum_cost_delay = 0
>> That was the trick for me. It was set to 250(ms), where it took 5 hours
>> for a vacuum to run. Now it takes 5-15 minutes.
> Wow!!! 250 ms is HUGE in the scheme of vacuum cost delay.
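To see why 250 ms is so large: under PostgreSQL's cost-based vacuum delay, vacuum sleeps for vacuum_cost_delay each time it accrues vacuum_cost_limit cost points. A back-of-envelope sketch, assuming the stock defaults (vacuum_cost_limit = 200, vacuum_cost_page_dirty = 20, 8 kB pages) and counting only dirtied pages:

```python
# Order-of-magnitude estimate of sleep time added by vacuum_cost_delay.
# Assumes PostgreSQL defaults: vacuum_cost_limit = 200,
# vacuum_cost_page_dirty = 20, 8 kB heap pages. Real vacuums also accrue
# page-hit and page-miss costs, so this is only a rough sketch.

cost_limit = 200          # points accrued before vacuum sleeps
cost_page_dirty = 20      # cost charged per page dirtied
page_size = 8192          # bytes per heap page
delay_s = 0.250           # the 250 ms setting from the thread

pages_per_sleep = cost_limit / cost_page_dirty        # 10 pages per sleep
pages_per_gb = (1024 ** 3) // page_size               # 131072 pages in 1 GB
sleeps_per_gb = pages_per_gb / pages_per_sleep        # ~13107 sleeps per GB
sleep_hours_per_gb = sleeps_per_gb * delay_s / 3600   # ~0.91 h of pure sleep

print(round(sleep_hours_per_gb, 2))  # 0.91
```

At roughly an hour of sleeping per gigabyte dirtied, a vacuum that touches a few gigabytes spends hours doing nothing, which is consistent with the 5-hour vs. 5-15-minute difference reported above.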
On Fri, Feb 13, 2009 at 5:02 PM, Michael Monnerie wrote:
> On Friday, 13 February 2009, Roger Ging wrote:
>> I'm running vacuum full analyze verbose on a table with 20 million
>> rows and 11 indexes. In top, I'm seeing [pdflush] and postgres:
>> writer process each using different cpu cores, with wait time well
>> above 90% on each of them. [...]
On Friday, 13 February 2009, Roger Ging wrote:
> I'm running vacuum full analyze verbose on a table with 20 million
> rows and 11 indexes. In top, I'm seeing [pdflush] and postgres:
> writer process each using different cpu cores, with wait time well
> above 90% on each of them. The vacuum has been running for several
> hours [...]
On Fri, Feb 13, 2009 at 3:34 PM, Tino Schwarze wrote:
> On Fri, Feb 13, 2009 at 03:13:28PM -0700, Scott Marlowe wrote:
>
> [...]
>
>> Yeah, that's pretty bad. ~2 Million live rows and ~18 Million dead
>> ones is a pretty badly bloated table.
>>
>> Vacuum full is one way to reclaim that lost space. [...]
On Fri, Feb 13, 2009 at 03:13:28PM -0700, Scott Marlowe wrote:
[...]
> Yeah, that's pretty bad. ~2 Million live rows and ~18 Million dead
> ones is a pretty badly bloated table.
>
> Vacuum full is one way to reclaim that lost space. You can also dump
> and restore that one table, inside a drop [...]
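The live/dead counts Scott quotes translate directly into a bloat figure (simple arithmetic on the thread's approximate numbers):

```python
# Bloat fraction from the approximate row counts mentioned in the thread.
live_rows = 2_000_000
dead_rows = 18_000_000

total = live_rows + dead_rows
bloat_fraction = dead_rows / total   # share of row versions that are dead

print(f"{bloat_fraction:.0%} of row versions are dead")  # 90%
```

At ~90% dead row versions, a plain VACUUM only marks the space as reusable inside the table's files; shrinking the files and returning space to the operating system is what VACUUM FULL, or a dump-and-reload of the table, accomplishes.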
Hi Roger,
On Fri, Feb 13, 2009 at 01:56:32PM -0800, Roger Ging wrote:
Please don't post HTML mails to mailing lists. Thanks.
> I can only answer a couple of the questions at the moment. I had to
> kill the vacuum full and do a regular vacuum, so I can't get the iostat
> and vmstat outputs right now. [...]
On Fri, Feb 13, 2009 at 2:56 PM, Roger Ging wrote:
Oh yeah, also, any chance of testing your RAID array in RAID-10 some
day? RAID5 anything tends to be pretty slow at writes, especially
random ones, and RAID-10 may give you a lot more bandwidth where you
need it, on the write side of the equation. [...]
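The RAID 5 write penalty behind this advice can be quantified with the classic small-write model (an illustration with assumed disk numbers, not a benchmark): each small random write on RAID 5 costs four disk I/Os (read old data, read old parity, write new data, write new parity), while RAID 10 needs two (one write per mirror side).

```python
# Classic small-random-write cost model for RAID levels. Illustrative only:
# controller write caches and full-stripe writes change real-world numbers.

def random_write_iops(disks: int, disk_iops: int, ios_per_write: int) -> float:
    """Achievable small random-write IOPS for an array under the model."""
    return disks * disk_iops / ios_per_write

DISKS = 4
DISK_IOPS = 150  # assumed figure for a single 7200 rpm drive

raid5 = random_write_iops(DISKS, DISK_IOPS, ios_per_write=4)   # 150.0
raid10 = random_write_iops(DISKS, DISK_IOPS, ios_per_write=2)  # 300.0
print(raid5, raid10)
```

Under these assumptions the same four disks deliver twice the random-write throughput in RAID 10, which is exactly the side of the workload a heavy vacuum stresses.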
On Fri, Feb 13, 2009 at 2:56 PM, Roger Ging wrote:
> Scott,
>
> I can only answer a couple of the questions at the moment. I had to kill
> the vacuum full and do a regular vacuum, so I can't get the iostat and
> vmstat outputs right now. This message is the reason I was trying to run
> vacuum full: [...]
Scott,
I can only answer a couple of the questions at the moment. I had to
kill the vacuum full and do a regular vacuum, so I can't get the iostat
and vmstat outputs right now. This message is the reason I was trying
to run vacuum full:
INFO: "license": found 257 removable, 20265895 nonremovable row versions [...]
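Reading that VACUUM VERBOSE line as arithmetic: only 257 of the roughly 20.27 million row versions scanned were removable, so this pass reclaimed almost nothing. (Interpretation hedged: "nonremovable" also counts dead rows still visible to open transactions, so a long-running transaction can produce the same symptom.)

```python
# Fraction of row versions the vacuum could actually remove,
# taken from the INFO line above.
removable = 257
nonremovable = 20_265_895

fraction = removable / (removable + nonremovable)
print(f"{fraction:.6%}")  # about 0.0013% of row versions were removable
```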
On Fri, Feb 13, 2009 at 10:20 AM, Roger Ging wrote:
> Hi,
>
> I'm running vacuum full analyze verbose on a table with 20 million rows and
> 11 indexes. In top, I'm seeing [pdflush] and postgres: writer process each
> using different cpu cores, with wait time well above 90% on each of them.
> [...]
Hi,
I'm running vacuum full analyze verbose on a table with 20 million rows
and 11 indexes. In top, I'm seeing [pdflush] and postgres: writer
process each using different cpu cores, with wait time well above 90% on
each of them. The vacuum has been running for several hours, and the
last thing [...]