I have a Postgres application that must run 24x7. If Postgres needs to be
vacuumed periodically, must I take the application offline, or is it enough
to disallow write (INSERT/UPDATE) access while allowing read access?
I hope it is the latter, as I have a large data set and there are transaction
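For what it's worth, a plain (non-FULL) VACUUM takes no lock that blocks ordinary reads or writes, so it can run while the application stays online; only VACUUM FULL takes an exclusive lock on the table. A minimal sketch, assuming a hypothetical table named `orders`:

```sql
-- Plain VACUUM: marks dead row versions as reusable space.
-- SELECTs, INSERTs and UPDATEs on the table keep running concurrently.
VACUUM ANALYZE orders;

-- By contrast, VACUUM FULL rewrites and compacts the table and takes an
-- ACCESS EXCLUSIVE lock, blocking all access to it while it runs:
-- VACUUM FULL orders;
```

So for a 24x7 application the usual approach is frequent plain vacuums, reserving VACUUM FULL for a maintenance window if the table has become badly bloated.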
Hi.
I am running a weekly cron job to vacuum our
production database. Everything seems to look OK,
except for the report around the pg_largeobjects
table. I was wondering if there is some tuning that I
need to do to my database, or if it is normal to have
so many tuples and deletes for pg_largeobjects
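A point worth noting here: large objects are stored as rows in the pg_largeobject system catalog, and deleting an application row that references a large object does not by itself remove the object, so dead and orphaned large-object tuples can pile up. The contrib utility `vacuumlo` removes orphaned large objects. A sketch for inspecting the catalog (the OID column is `loid`):

```sql
-- How many distinct large objects are stored, and how many data
-- chunk rows they occupy in the pg_largeobject system catalog.
SELECT count(DISTINCT loid) AS objects,
       count(*)             AS chunks
FROM pg_largeobject;
```

If the object count is far larger than what the application actually references, running `vacuumlo yourdb` followed by a vacuum of pg_largeobject is the usual cleanup.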
Hi,
I use postgresql 7.2.x on Linux 2.4.18-6mdksmp #1 SMP i686
I have some big postgreSQL databases (4/5 GB at start) on this server.
Every night I erase data and I import a lot of new data.
To optimize my database I run a vacuum over "all" of it every night.
The problem is that the size of database
Hi All,
I would like to ask your help in understanding vacuum activities.
I have a heavily-updated table with this structure:
colA bigint not null
colB character varying(128) not null
colC character varying(200) not null
colD character varying(200) not null
colE character varying(20)
Indexes:
Hi all;
we have a large table that gets a lot of churn throughout the day.
Performance has dropped off a cliff. A VACUUM VERBOSE on the table showed us
this:
INFO: "action_rollup_notifier": found 0 removable, 34391214 nonremovable row
versions in 152175 pages
DETAIL: 22424476 dead row versions cannot be removed yet.
"Pascal PEYRE" <[EMAIL PROTECTED]> writes:
> I have some big postgreSQL databases (4/5 GB at start) on this server.
> Every night I erase data and I import a lot of new data.
Exactly how do you erase the old data? If you're zapping the entire
contents of tables, TRUNCATE might be a good answer.
Do a VACUUM FULL on your database. This should be the solution.
Daniel
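To make the distinction behind these two suggestions concrete: DELETE leaves dead row versions behind that only vacuum can reclaim, TRUNCATE drops the table contents and returns the disk space immediately, and VACUUM FULL compacts an already-bloated table in place at the cost of an exclusive lock. A sketch, using a hypothetical staging table:

```sql
-- DELETE leaves dead tuples; the space is reusable after a plain
-- VACUUM but is not returned to the operating system.
DELETE FROM staging_data;

-- TRUNCATE instantly empties the table and frees its disk space,
-- leaving no dead tuples to vacuum at all.
TRUNCATE TABLE staging_data;

-- VACUUM FULL compacts an existing bloated table, shrinking the
-- file on disk; it takes an ACCESS EXCLUSIVE lock while it runs.
VACUUM FULL staging_data;
```

For a nightly erase-and-reload cycle, TRUNCATE before the import avoids the bloat in the first place, so the nightly VACUUM FULL becomes unnecessary.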
"Pascal PEYRE" <[EMAIL PROTECTED]> wrote in the news message
news:[EMAIL PROTECTED]
> Hi,
>
> I use postgresql 7.2.x on Linux 2.4.18-6mdksmp #1 SMP i686
>
> I have some big postgreSQL databases (4/5 GB at start) on this server
Kevin Kempter writes:
> INFO: "action_rollup_notifier": found 0 removable, 34391214 nonremovable row
> versions in 152175 pages
> DETAIL: 22424476 dead row versions cannot be removed yet.
> Anyone have any suggestions as to why these rows cannot be removed yet?
You've got an open transaction that
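A long-open transaction keeps a snapshot that can still "see" those dead row versions, so vacuum must leave them in place. On recent releases (column names here match 9.2 and later; older versions use `procpid`/`current_query`) the usual way to hunt for the culprit is pg_stat_activity:

```sql
-- Sessions with a transaction currently open, oldest first.
-- A very old xact_start here is what prevents vacuum from
-- removing dead row versions anywhere in the cluster.
SELECT pid, usename, xact_start, state, query
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
ORDER BY xact_start;
```

Once the stale session is committed, rolled back, or terminated, the next vacuum can reclaim those row versions.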
Kevin Kempter wrote:
> INFO: "action_rollup_notifier": found 0 removable, 34391214
> nonremovable row versions in 152175 pages
> DETAIL: 22424476 dead row versions cannot be removed yet.
> There were 0 unused item pointers.
> 2 pages contain useful free space.
> 0 pages are entirely empty.
> An
On Tuesday 18 August 2009 13:37:12 Tom Lane wrote:
> Kevin Kempter writes:
> > INFO: "action_rollup_notifier": found 0 removable, 34391214 nonremovable
> > row versions in 152175 pages
> > DETAIL: 22424476 dead row versions cannot be removed yet.
> >
> > Anyone have any suggestions as to why these rows cannot be removed yet?
On Tue, Aug 18, 2009 at 2:41 PM, Kevin
Kempter wrote:
> On Tuesday 18 August 2009 13:37:12 Tom Lane wrote:
>> Kevin Kempter writes:
>> > INFO: "action_rollup_notifier": found 0 removable, 34391214 nonremovable
>> > row versions in 152175 pages
>> > DETAIL: 22424476 dead row versions cannot be removed yet.
I have a db which has many tables that are mirrors of data from an
outside source. I download text export files and do weekly updates
of the entire tables using TRUNCATE and COPY. Is there any benefit
to a VACUUM ANALYZE immediately afterwards? There are a number of
indices on these tables. D
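Since TRUNCATE leaves no dead tuples behind, a VACUUM right after the reload has nothing to reclaim; what freshly loaded tables do need is ANALYZE, so the planner has up-to-date statistics for the new contents and the indexes get used sensibly. A sketch of such a weekly refresh, with hypothetical table and file names:

```sql
TRUNCATE TABLE mirror_table;               -- empties table; no dead rows remain
COPY mirror_table FROM '/tmp/export.txt';  -- bulk-load the new weekly export
ANALYZE mirror_table;                      -- refresh planner statistics only
```

In short: ANALYZE yes, VACUUM no, for a TRUNCATE-and-COPY workflow.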