Thanks a lot Albe, Gezeala.
I missed the "701995 dead rows cannot be removed yet" message in the logs (it
was late) and will now know to check the TOAST tables as well if this happens
again.
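For anyone hitting the same thing, a sketch of the kind of check meant here: a table's relfrozenxid cannot advance until its TOAST table has also been vacuumed, so it helps to compare the transaction-ID ages of both. The table name below is a hypothetical placeholder.

```sql
-- Compare the XID age of a relation and its TOAST table; if the TOAST
-- table's age stays high, VACUUM on the main table alone won't reset
-- relfrozenxid. 'my_table' is a placeholder, not a name from this thread.
SELECT c.relname,
       age(c.relfrozenxid) AS main_age,
       t.relname           AS toast_relname,
       age(t.relfrozenxid) AS toast_age
FROM pg_class c
LEFT JOIN pg_class t ON t.oid = c.reltoastrelid
WHERE c.relname = 'my_table';
```

Dead-row counts per table are also visible in pg_stat_user_tables.n_dead_tup.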
Thank you!
Armand
On Tue, Aug 6, 2013 at 9:04 AM, Albe Laurenz wrote:
> Armand du Plessis wrote:
> > We
After a second manual run it properly reset the relfrozenxid on the affected
relation.
The only difference was that this run was a VACUUM ANALYZE, but I suspect that
was not the cause.
On Monday, August 5, 2013, Armand du Plessis wrote:
> Hi there,
>
> We're running into a scenario where d
Hi there,
We're running into a scenario where, despite running a manual VACUUM as a
superuser, the relfrozenxid for one relation, now dangerously close to
wraparound, is not getting reset.
It's a Postgres 9.2.3 cluster. We shut down other access to the machine
while running the VACUUM to ensure it could
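A quick way to see which relations are closest to the wraparound horizon (a standard diagnostic query, not one taken from this thread; run it as a superuser in the affected database):

```sql
-- List the relations with the oldest relfrozenxid, i.e. the ones
-- driving the anti-wraparound pressure. Includes TOAST tables, since
-- they are vacuumed separately from their parent tables.
SELECT oid::regclass AS relation,
       age(relfrozenxid) AS xid_age
FROM pg_class
WHERE relkind IN ('r', 't')   -- ordinary tables and TOAST tables
ORDER BY age(relfrozenxid) DESC
LIMIT 10;
```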
On Thu, May 30, 2013 at 12:19 PM, Andrew W. Gibbs wrote:
> Going with your first option, a master->slave replication, has the
> added benefit that you build the expertise for doing Continuous Point
> In Time Recovery, and after you do this storage system migration you
> can use that knowledge to p
We're looking into options for the least intrusive way of moving our
pg_data onto faster storage. The basic setup is as follows :
6 disk RAID-0 array of EBS volumes used for primary data storage
2 disk RAID-0 array of EBS volumes used for transaction logs
RAID arrays are xfs
It's the primary data
2013 at 10:37 AM, Armand du Plessis wrote:
>
> On Mon, May 20, 2013 at 3:21 PM, Armand du Plessis wrote:
>
>> On Mon, May 20, 2013 at 3:11 PM, Tom Lane wrote:
>>
>>> Armand du Plessis writes:
>>> > The autovacuum completed (after many hours) however it d
On Mon, May 20, 2013 at 3:21 PM, Armand du Plessis wrote:
> On Mon, May 20, 2013 at 3:11 PM, Tom Lane wrote:
>
>> Armand du Plessis writes:
>> > The autovacuum completed (after many hours) however it didn't seem to
>> have
>> > frozen any old pages a
Hi again,
We've got a production cluster running Postgres 9.2.3 (specifically the Amazon
package version 9.2.3-1.30.amzn1).
I'd like to inspect the buffer cache usage on this machine but can't get
the matching -contrib package version from the Amazon repos.
I've got a different cluster running 9.2.4 with the cont
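Once the matching -contrib package is installed, inspecting the buffer cache looks roughly like this (a sketch using the standard pg_buffercache extension; the LIMIT and grouping are arbitrary choices):

```sql
-- pg_buffercache ships in the -contrib package and must match the
-- server version.
CREATE EXTENSION IF NOT EXISTS pg_buffercache;

-- Top relations by number of shared buffers currently cached,
-- restricted to the current database (reldatabase = 0 covers
-- shared catalogs).
SELECT c.relname, count(*) AS buffers
FROM pg_buffercache b
JOIN pg_class c
  ON b.relfilenode = pg_relation_filenode(c.oid)
WHERE b.reldatabase IN (0, (SELECT oid FROM pg_database
                            WHERE datname = current_database()))
GROUP BY c.relname
ORDER BY buffers DESC
LIMIT 10;
```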
On Mon, May 20, 2013 at 3:11 PM, Tom Lane wrote:
> Armand du Plessis writes:
> > The autovacuum completed (after many hours) however it didn't seem to
> have
> > frozen any old pages as it just kicks off again right away with the same
> > reason (VACUUM ANALYZ
On Sun, May 19, 2013 at 6:12 PM, Armand du Plessis wrote:
> On Sun, May 19, 2013 at 6:08 PM, Simon Riggs wrote:
>
>> On 19 May 2013 13:54, Armand du Plessis wrote:
>>
>> > We started seeing 1000s of messages like the ones below in our logs
>> starting
>>
On Sun, May 19, 2013 at 6:08 PM, Simon Riggs wrote:
> On 19 May 2013 13:54, Armand du Plessis wrote:
>
> > We started seeing 1000s of messages like the ones below in our logs
> starting
> > last night. There's been no changes but performance has dropped
> > sig
Hi there,
We started seeing 1000s of messages like the ones below in our logs
starting last night. There have been no changes, but performance has dropped
significantly.
It's Postgres 9.2.3
2013-05-19 12:50:30.423 UTC,,,1976,,5198ca96.7b8,1,,2013-05-19 12:50:30
UTC,8/0,0,DEBUG,0,"autovacuum: pro
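When autovacuum starts looping with "to prevent wraparound" messages like the one above, a first step is to compare each database's datfrozenxid age against autovacuum_freeze_max_age (a generic diagnostic, not a query from this thread):

```sql
-- Databases sorted by XID age; once age(datfrozenxid) exceeds
-- autovacuum_freeze_max_age (default 200 million), autovacuum launches
-- anti-wraparound workers regardless of other settings.
SELECT datname, age(datfrozenxid) AS xid_age
FROM pg_database
ORDER BY age(datfrozenxid) DESC;

SHOW autovacuum_freeze_max_age;
```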
Never mind :) There was a large backup.sql file on my master from a past
operation.
On Sun, Apr 14, 2013 at 11:11 AM, Armand du Plessis wrote:
> I'm busy preparing a server to act as a streaming replication slave. When
> using the pg_basebackup to get the initial setup done I r
I'm busy preparing a server to act as a streaming replication slave. When
using pg_basebackup to get the initial setup done I run into the
following error after it's almost complete:
bash-4.1$ pg_basebackup -v -P -c fast -h master -U replica -D
/raiddrive/pgdata
185107020/227631618 kB (100%),
Hi there,
I'm hoping someone on the list can shed some light on an issue I'm having
with our Postgresql cluster. I'm literally tearing out my hair and don't
have a deep enough understanding of Postgres to find the problem.
What's happening is I had severe disk/io issues on our original Postgres
c