In my development system the file system where $PGDATA resides filled up:

    cp: writing `/usr/local/pgsql/archlog/ybcdrdbd01/data/000100430076': No space left on device
    could not copy /usr/local/pgsql/data/pg_xlog/000100430076 to archive
    2005-10-23 08:46:29
I forgot to include the specific error
message related to the archival process not finding the file.
From: Sailer, Denis (YBUSA-CDR) [mailto:[EMAIL PROTECTED]]
Sent: Monday, October 24, 2005 8:52 AM
To: pgsql-general@postgresql.org
Subject: cannot stat `/usr/local/pgsql/data/pg_xlog/00010043009C': No such file or directory
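The failed copy itself is recoverable: the server keeps each WAL segment in pg_xlog and retries archive_command until it reports success, so freeing space on the archive volume lets archiving catch up on its own. A minimal archive script along the lines of the example in the documentation (the destination path is taken from the error above; the script name is an assumption):

    #!/bin/sh
    # archive.sh: set archive_command = '/usr/local/pgsql/bin/archive.sh %p %f',
    # where %p is the path of the segment to archive and %f its file name.
    DEST=/usr/local/pgsql/archlog/ybcdrdbd01/data
    # Refuse to overwrite an existing segment, and let a failed cp
    # (e.g. "No space left on device") return nonzero so the server
    # keeps the segment in pg_xlog and retries it later.
    test ! -f "$DEST/$2" && cp "$1" "$DEST/$2"

The nonzero exit status on failure is what tells the server to retry rather than recycle the segment.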
I posted the following to the performance mailing list on 8/2/2005, but have not heard any replies. Maybe this should just be a general question. Would someone be able to help me get pg_dump to run faster for bytea data?

++

Dumping a database which ...
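One thing that often helps with bytea (a sketch; the database and file names are assumptions): in the custom format, pg_dump compresses by default, and compressing large, mostly incompressible bytea data can dominate the run time, so disabling compression is worth a try:

    # custom-format dump with compression turned off (-Z 0);
    # "mydb" and the output file name are placeholders
    pg_dump -Fc -Z 0 -f mydb.dump mydb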
In the following output VACUUM knows there are 99,612 pages and 1,303,891 rows. However, the last line of the ANALYZE output thinks there are only 213,627 rows. Is it so far off because the table is bloated? The version is PostgreSQL 7.4.3 on i686-pc-linux-gnu, ...
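ANALYZE estimates the row count from a random sample of pages, so heavy bloat (many dead or nearly empty pages) can throw its estimate off badly, while VACUUM scans the whole table and reports exact counts. The statistics the planner actually ends up with can be checked directly (a sketch; the database and table names are assumptions):

    # stored page and row-count statistics for the table
    psql -d mydb -c "SELECT relpages, reltuples FROM pg_class WHERE relname = 'mytable'"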
I was trying to create an index on a 37,000,000 row table and received the following error. Evidently I don't have enough space in my pg_xlog directory to handle this as a single transaction. The file system for pg_xlog is allocated 2GB. The following output is from a psql session directly ...
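One thing worth checking (a sketch; the database name is an assumption): WAL space usage is normally bounded by checkpoint_segments rather than by transaction size, with pg_xlog staying near 2 * checkpoint_segments + 1 segment files of 16MB each; it grows beyond that mainly when segments cannot be recycled, for example when a configured archive_command keeps failing. So a 2GB pg_xlog file system filling up suggests looking at the setting and at recycling:

    # how many 16MB WAL segments the server aims to keep around;
    # pg_xlog normally stays near 2 * checkpoint_segments + 1 files
    psql -d mydb -c "SHOW checkpoint_segments"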
There was a posting in the mailing list archives that I can't find anymore. The web site right now presents a list of items from a search in a reasonable amount of time, but takes 5-10 minutes to retrieve the detail for each one as it is clicked. Rather frustrating. This person ...
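For a symptom like this, the usual first diagnostic is to run the per-item detail query by hand under EXPLAIN ANALYZE (a sketch; the database, table, and column names are all assumptions):

    # actual plan and timing for one detail lookup; names are placeholders
    psql -d mydb -c "EXPLAIN ANALYZE SELECT * FROM item_detail WHERE item_id = 42"

A sequential scan on a large table in that output usually means the lookup column needs an index.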