Hello,
I wrote a script to make sure all tables are vacuumed and analyzed every
evening. This works very well, but I have a question:
I save in a table the start and end time of a vacuum/analyze. This way I
can measure what tables take a long time to vacuum/analyze, and what tables
are slow. (and
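A minimal sketch of that kind of timing log (the table and column names here are my own assumptions, not the poster's actual schema):

CREATE TABLE vacuum_log (
    table_name  text,
    started_at  timestamp,
    finished_at timestamp
);

-- for each table, the nightly script records the start time,
-- runs the maintenance, then records the end time:
INSERT INTO vacuum_log (table_name, started_at) VALUES ('some_table', now());
VACUUM ANALYZE some_table;
UPDATE vacuum_log SET finished_at = now()
 WHERE table_name = 'some_table' AND finished_at IS NULL;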
Sabin Coanda wrote:
> Hi there,
>
> I upgraded "PostgreSQL 8.2.4 on i686-pc-mingw32, compiled by GCC gcc.exe
> (GCC) 3.4.2 (mingw-special)" to "PostgreSQL 8.3.5, compiled by Visual C++
> build 1400".
>
> I found the new Postgres version doesn't allow running a script file which
> contains multiline statements, which was working at
Hi there,
I upgraded "PostgreSQL 8.2.4 on i686-pc-mingw32, compiled by GCC gcc.exe
(GCC) 3.4.2 (mingw-special)" to "PostgreSQL 8.3.5, compiled by Visual C++
build 1400".
I found the new Postgres version doesn't allow running a script file which
contains multiline statements, which was working at
On Aug 8, 2008, at 12:02 PM, Mark Fenbers wrote:
I haven't the foggiest idea what this means but googled the error
and found a site that seems to suggest that I need to run the
vacuum in "single-user" mode before running VACUUM FULL, but I
cannot find out how to do that, either.
It's in t
A seldom-used database of mine was not recently vacuumed and I've run
into the error: FATAL: database is not accepting commands to avoid
wraparound data loss in database "stop" HINT: Stop the postmaster and
use a standalone backend to vacuum database "stop". In fact, I get this
error while
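For reference, the standalone backend the HINT refers to is started roughly like this on 8.x-era servers (a sketch; the data directory path is a placeholder, and "stop" is the database named in the error above):

# stop the postmaster, then run a single-user backend on that database
pg_ctl stop -D /path/to/data
postgres --single -D /path/to/data stop
backend> VACUUM;
# exit the standalone backend with Ctrl-D, then restart normally
pg_ctl start -D /path/to/data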
On Mon, Jan 15, 2007 at 02:44:15PM +, Ezequiel Luis Pellettieri wrote:
> Hi Andrew,
>
> Why don't you try dropping all indexes except the constraint ones for
> each table?
I don't have the problem, so I don't know whether that will help.
> then you'll free approximately half of the DB size. Then re-create the indexes, one by one.
Hi Andrew,
Why don't you try dropping all indexes except the constraint ones for
each table?
Then you'll free approximately half of the DB size. Then re-create the indexes,
one by one. And finally you can perform a full vacuum.
that worked for me. :)
cheers
pelle.-
2007/1/15, Andrew Sullivan <[EMAIL PROT
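If it helps, a query along these lines lists the indexes that do not back a primary-key or unique constraint, i.e. the candidates for dropping and re-creating (just a sketch; note it also skips plain UNIQUE indexes, not only constraint-backed ones):

SELECT c.relname AS index_name, t.relname AS table_name
FROM pg_class c
JOIN pg_index i ON i.indexrelid = c.oid
JOIN pg_class t ON t.oid = i.indrelid
WHERE NOT i.indisprimary
  AND NOT i.indisunique
ORDER BY t.relname;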
On Mon, Jan 15, 2007 at 06:23:23AM -0800, Mario Behring wrote:
> Hi all,
>
> I've executed a VACUUM FULL on a database 33GB in size. The process
> was going fine until it reached an index (I guess it's an index) and
> there it stopped for more than 30 hours...the whole
> weekend..
It m
Hi all,
I've executed a VACUUM FULL on a database 33GB in size. The process was going
fine until it reached an index (I guess it's an index) and there it stopped for
more than 30 hours...the whole weekend..
I've canceled it but I desperately need to free some space at the server's
d
Another way to do it, while still having access to the data at the same time,
is to create a new table with a slightly different name and do an:
insert into [table]2
select *
from [table];
Then switch over to the second table.
Then you do a TRUNCATE and DROP on the first table.
For getti
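Spelled out a bit more (a sketch only, with a made-up table name; any indexes, constraints and permissions on the original table would still have to be re-created on the copy):

-- copy the live rows into a fresh, bloat-free table
CREATE TABLE mytable2 AS SELECT * FROM mytable;
-- switch over to the copy
BEGIN;
ALTER TABLE mytable RENAME TO mytable_old;
ALTER TABLE mytable2 RENAME TO mytable;
COMMIT;
-- then the TRUNCATE and DROP on the old table
TRUNCATE mytable_old;
DROP TABLE mytable_old;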
Hello Ilya,
you have to check for yourself which method is faster - just test it
with a stopwatch ;-)
You have to take care, because when you run VACUUM FULL, it also
vacuums the system tables, etc. of Postgres.
I'm not sure whether plain VACUUM goes through all objects the same way,
but I'd m
Greetings,
What advantages do I lose when using dump-truncate-restore (table
or whole DB) instead of performing VACUUM FULL?
In both cases I have no access to the data, but the first is much faster
(by subjective estimate).
Thank you,
Ilya A. Kovalenko (mailto:[EMAIL PROTECTED])
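For concreteness, the dump-truncate-restore cycle being compared is roughly the following (a sketch; table and database names are placeholders, and -a makes it a data-only dump so the restore can go back into the truncated table):

pg_dump -a -t mytable mydb > mytable_data.sql
psql -d mydb -c "TRUNCATE mytable;"
psql -d mydb -f mytable_data.sql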
patrick ~ <[EMAIL PROTECTED]> writes:
> I noticed that a freshly created db with freshly inserted data (from
> a previous pg_dump) would result in quite fast results. However,
> after running 'vacuum analyze' the very same query slowed down about
> 1250x (Time: 1080688.921 ms vs Time: 864.522 ms
On Wed, Nov 03, 2004 at 07:01:00AM -0500, Andrew Sullivan wrote:
> On Tue, Nov 02, 2004 at 06:50:31PM -0800, patrick ~ wrote:
> > We have a nightly "garbage collection" process that runs and
> > purges any old data. After this process a 'vacuum analyze'
> > is kicked off (regardless of whether or
On Wed, Nov 03, 2004 at 10:31:33AM -0800, patrick ~ wrote:
> Just to clarify, the sluggishness isn't only during the vacuum
> period. There are more serious issues during the vacuum,
> but I am not touching on those. The sluggishness is persistent
> throughout daily operations.
Then I suspe
Sent: Tue, 2 Nov 2004 18:50:31 -0800 (PST)
Subject: [SQL] vacuum analyze slows sql query
> Greetings pgsql-sql,
>
> I have a very strange problem. Our production database is a fair
> sized db, structure wise, and quite huge data wise. We have a web/php
> based UI for our c
On Tue, Nov 02, 2004 at 06:50:31PM -0800, patrick ~ wrote:
> We have a nightly "garbage collection" process that runs and purges
> any old data. After this process a 'vacuum analyze' is kicked off
> (regardless of whether or not any data was actually purged).
>
> At this point I should mention th
Greetings pgsql-sql,
I have a very strange problem. Our production database is a fair
sized db, structure wise, and quite huge data wise. We have a web/php
based UI for our customer to manage the data in our application db.
The customer complains that the UI is sluggish accessing certain
pages a
"Victor Yegorov" <[EMAIL PROTECTED]> writes:
> May be I'll put my question in a different manner:
> System tables are location-wide (I mean one set of tables for PostgreSQL
> location) or each database has its own set of system tables?
pg_database, pg_shadow, pg_group are shared, the rest are pe
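A quick way to check which catalogs are shared in a given installation (relisshared has been in pg_class for a long time, but treat this as a sketch):

-- shared catalogs are visible from every database in the cluster
SELECT relname FROM pg_class WHERE relisshared AND relkind = 'r';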
"Victor Yegorov" <[EMAIL PROTECTED]> writes:
> I'd like to make a script to automatically vacuum all my DBs nightly. And
> I'd like to skip system tables, starting with pg_*.
Um ... what in the world makes you think that's a good idea? System
tables need maintenance too.
> I'd like to make a script to automatically vacuum all my DBs nightly. And
> I'd like to skip system tables, starting with pg_*.
If you run the process as a non-superuser, it will skip your system tables.
On the other hand, if you run it as a superuser, it must be done table by
table from your scr
Hello.
I'd like to make a script to automatically vacuum all my DBs nightly. And
I'd like to skip system tables, starting with pg_*.
I've seen in the list archives somebody's solution using regexps,
something like:
$> vacuum analyze !~ ^pg_;
It doesn't work for me, and I cannot find the origin
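For what it's worth, the usual trick is to generate the statements from pg_class and feed them back to psql, rather than putting a regexp in the VACUUM command itself; a sketch (and note Tom's caveat above that the system tables need maintenance too):

SELECT 'VACUUM ANALYZE ' || relname || ';'
FROM pg_class
WHERE relkind = 'r'
  AND relname !~ '^pg_';
-- run the output of that query back through psql (e.g. with -t -f)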
On Saturday 01 February 2003 07:45, Carmen Marincu wrote:
> I am using Postgresql 7.1.3 and I have deleted 3000 rows from a table
> (with DELETE).
> Then I used vacuum to actually delete the rows marked as deleted
> by the DELETE command..
> The trouble is that the "counter" for the serial primary
Hello -
I am using Postgresql 7.1.3 and I have deleted 3000 rows from a table
(with DELETE).
Then I used vacuum to actually delete the rows marked as deleted
by the DELETE command..
The trouble is that the "counter" for the serial primary key (ID field)
wasn't reset. So now although I have only
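For what it's worth, VACUUM never resets a serial's sequence; if the counter really has to be restarted (only sensible once the table is empty), that is done explicitly, e.g. (the sequence name here is a guess based on the usual tablename_column_seq convention):

SELECT setval('mytable_id_seq', 1, false);
-- the next nextval() will then return 1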
"Denise Bossarte" <[EMAIL PROTECTED]> writes:
> I have looked at the documentation on locking and VACUUM and have found
> conflicting reports.
> http://www.postgresql.org/idocs/index.php?locking-tables.html states that
> VACUUM (without FULL) acquires a ShareUpdateExclusiveLock and VACUUM with
> F
I am trying to automate VACUUMing several databases and am confused about how
client connections to the database might affect the VACUUM.
I have looked at the documentation on locking and VACUUM and have found
conflicting reports.
The 7.2.1 Documentation
http://www.postgresql.org/idocs/index.php?loc
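One way to see for yourself which lock a given VACUUM takes, on later versions that have the pg_locks view (a sketch):

-- run the VACUUM in one session, then in another session:
SELECT relation::regclass, mode, granted
FROM pg_locks
WHERE relation IS NOT NULL;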
Kristian Eide wrote:
> Thanks, very helpful, although there does not seem to be much description of
> what the two free space map options in postgresql.conf actually do. Doing a
> VACUUM ANALYZE VERBOSE on my largest table gives me:
>
> NOTICE: Pages 43681: Changed 208, Empty 0; Tup 1921387: Vac
> You should also search the archives for threads on free space maps. You
> most likely need to increase yours. In particular, see:
>http://archives.postgresql.org/pgsql-general/2002-07/msg00972.php
Thanks, very helpful, although there does not seem to be much description of
what the two free
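For reference, the two settings being discussed are these postgresql.conf parameters on pre-8.4 servers (the values shown are only illustrative):

max_fsm_pages = 20000      # total heap pages with free space tracked across the cluster
max_fsm_relations = 1000   # number of tables/indexes tracked in the free space map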
Kristian Eide wrote:
> This is not good as the database is in use 24/7, and without the indexes
> everything comes to a screeching halt. This means I probably will have to
> stop the programs using the database for the time it takes to re-create the
> indexes; this is better than having to dump/re
> In my case, it was the fact that indexes don't release the space of
> indexes of deleted rows. So, if you have a table that has a lot of
> inserts/deletes, your indexes will grow incredibly fast.
>
> The way to see what your biggest items are:
> select * from pg_class order by relpages desc;
Y
I've run into this myself. Tom Lane helped me out.
In my case, it was the fact that indexes don't release the space of
indexes of deleted rows. So, if you have a table that has a lot of
inserts/deletes, your indexes will grow incredibly fast.
The way to see what your biggest items are:
select * from pg_class order by relpages desc;
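Slightly expanded, the check and the usual fix look roughly like this (the index name in the REINDEX is a placeholder):

-- find the largest relations; bloated indexes show up near the top
SELECT relname, relkind, relpages
FROM pg_class
ORDER BY relpages DESC
LIMIT 20;

-- rebuilding a bloated index reclaims its dead space
REINDEX INDEX some_bloated_index;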
Hi, I have a PostgreSQL 7.2.1 database which normally (just after a
pg_restore) takes about 700-800MB of disk space. Now, the problem is that
the database grows quite quickly when in use, although we don't put very
much data in. Granted, there are quite a few records deleted and inserted,
but the t
VACUUM VERBOSE ANALYZE locking up?? Please help!
Ok it doesn't kill the whole backend, but VACUUM... comes
to a halt partway through and it's necessary to ^C out
of the transaction in psql. Sometimes trying again will
work, but mostly it doesn't. Restarting the backend
and then do
Tim Perdue <[EMAIL PROTECTED]> writes:
> We recently upgraded sourceforge to 7.1b6 from our nice old stable november
> cvs snapshot and we hit this error last nite:
> NOTICE: FlushRelationBuffers(groups, 2140): block 2057 is referenced (private 0,
>global 1)
> Error: ERROR: VACUUM (repair_frag
We recently upgraded sourceforge to 7.1b6 from our nice old stable november
cvs snapshot and we hit this error last nite:
NOTICE: FlushRelationBuffers(groups, 2140): block 2057 is referenced (private
0, global 1)
Error: ERROR: VACUUM (repair_frag): FlushRelationBuffers returned -2
I tend to
Gerald Gutierrez wrote:
>
> >There is an undocumented little factoid here: CREATE INDEX will update
> >(some of) the planner stats, but only if it finds some data in the
> >table. CREATE INDEX on an empty table leaves the initial default
> >numbers alone. This may be contributing to your confus
>There is an undocumented little factoid here: CREATE INDEX will update
>(some of) the planner stats, but only if it finds some data in the
>table. CREATE INDEX on an empty table leaves the initial default
>numbers alone. This may be contributing to your confusion, but it was
>deemed necessary
On Thu, 15 Mar 2001, Gerald Gutierrez wrote:
> 1) When I create an empty table, and then immediately create an index on a
> column, I can get /index scans/ when searching on that column. But when I
> then run VACUUM, the same search becomes a /sequential scan/. Shouldn't it
> still be an index s
Gerald Gutierrez <[EMAIL PROTECTED]> writes:
> 1) When I create an empty table, and then immediately create an index on a
> column, I can get /index scans/ when searching on that column. But when I
> then run VACUUM, the same search becomes a /sequential scan/.
VACUUM updates the planner's statist
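A small reproduction of what's being described, using the same table and index names that appear later in the thread (treat the EXPLAIN comments as the typical outcome, not a guarantee):

CREATE TABLE t1 (a integer);
CREATE INDEX t1_a_ndx ON t1 (a);
-- with no stats, the planner assumes a non-trivial table and may pick the index
EXPLAIN SELECT * FROM t1 WHERE a = 42;
VACUUM ANALYZE t1;
-- now the planner knows the table is empty/tiny, so a sequential scan wins
EXPLAIN SELECT * FROM t1 WHERE a = 42;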
As additional information that I neglected to include in the first message,
after both tests, the indices clearly still exist and can be seen with the
following commands:
\d t1
\d t2
\di
\d t1_a_ndx
\d t2_a_ndx
The output shows what's expected, e.g.:
test1=> \di
List of relations
Name | Type |
I'm confused over two questions involving PostgreSQL index scans. I'm using
Linux Mandrake 7.2 on Intel, and PostgreSQL 7.0.3. If someone can explain
what's going on, I'd greatly appreciate it.
-
1) When I create an empty table, and then immediately cre
Does vacuuming and analyzing a database affect the users if they are
currently inserting/deleting rows from a table on the database? How does
it work exactly...
Thanks.
On Mon, 5 Jun 2000, Fabrice Scemama wrote:
> No core dump could be found within the data/base/*
> directories. The cron is executed by user root,
> but on my system root is a PostgreSQL superuser.
Well, the problem here might be in the fact that root usually has its
ulimit -c (the maximum size o
Fabrice Scemama <[EMAIL PROTECTED]> writes:
> No core dump could be found within the data/base/* directories.
Hm. When I've seen a backend crash without leaving a core file,
it's usually because the backend ran up against the system's
per-process memory limit. (Many kernels seem to choose not t
No core dump could be found within the data/base/*
directories. The cron is executed by user root,
but on my system root is a PostgreSQL superuser.
Tom Lane wrote:
>
> Fabrice Scemama <[EMAIL PROTECTED]> writes:
> > [ vacuum appears to be coredumping ]
>
> That's odd ... not so much that vacuum
Fabrice Scemama <[EMAIL PROTECTED]> writes:
> [ vacuum appears to be coredumping ]
That's odd ... not so much that vacuum could be failing, which is
probably a garden-variety bug; but it sounds like the postmaster is
failing to do the system restart that it should do after one of the
backends fai
Hi people.
For a few days, I've been experiencing some problems.
There's a cron-scheduled vacuum, performed every night.
But in the morning, my DBs can't be accessed. The error
message says "DB busy, too many connected" (or words to that effect).
And ps ax|grep post shows a lot of backends, waiting
for an unlock