I'm investigating a 'could not stat file' error that points to a file
"base/16384/52212305.1". All the data files I've ever seen have names that
are whole numbers; I've never seen a decimal suffix. It occurs to me that
perhaps this is some kind of temp-file, or a system for avoiding duplicate
fil
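For reference, a minimal sketch of the segment-naming scheme that could produce a name like "52212305.1", assuming a default build where Postgres splits relations into 1 GiB segments (RELSEG_SIZE); the function name here is illustrative, not a Postgres API:

```python
# Postgres stores a relation larger than one segment (1 GiB by default)
# in files named <relfilenode>, <relfilenode>.1, <relfilenode>.2, ...
SEGMENT_SIZE = 1 << 30  # 1 GiB, the default RELSEG_SIZE * BLCKSZ

def segment_name(relfilenode, byte_offset):
    """Return the data-file name holding the given byte offset."""
    seg = byte_offset // SEGMENT_SIZE
    return str(relfilenode) if seg == 0 else f"{relfilenode}.{seg}"

print(segment_name(52212305, 1_500_000_000))  # → "52212305.1"
```

So a ".1" suffix simply means the underlying table or index has grown past 1 GiB.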
The software I develop bundles a Postgres 8.3.15 database for storage.
Several users are reporting errors like this:
ProgrammingError: could not stat file "base/16384/52212305.1": Permission
denied
I'm unable to reproduce this myself, but it's clearly a real issue. The
response to all previous
I finally had time to test this further on a variety of systems, and was
unable to reproduce on any non-Windows platform. The dump even works fine
on Windows XP; just not Windows 7.
This prompted me to do a little more research, and this time I found this
thread from Sept. 2011:
http://postgresq
Ah, I see; it looks like when logging_collector = 'off', Postgres logs to
the Windows event log. Is this a bug? Given that log_destination has an
'eventlog' option, it seems weird for it to also be logging there based on
the value of a different option.
Since I don't want logging to files, I gue
I have log_destination = 'stderr', but Postgres (8.3.15) writes messages to
the Windows event log. How can I prevent it from doing this? Any insight
into this behavior would be greatly appreciated. Thanks!
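If file logging were acceptable, a minimal postgresql.conf sketch that keeps messages out of the eventlog (assuming the service-mode behavior where bare stderr is redirected to the eventlog when the collector is off) would be:

```ini
# postgresql.conf — sketch; assumes Windows service mode, where stderr
# output falls through to the eventlog unless the collector captures it
log_destination = 'stderr'
logging_collector = on        # capture stderr into log files instead
log_directory = 'pg_log'      # relative to the data directory
```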
On Mon, Nov 28, 2011 at 9:48 PM, Craig Ringer wrote:
>
> Getting a usable stack trace on Windows isn't actually too hard.
The problem isn't getting the trace - I know how to do that - it's that I
don't have the pdbs for this build, and so the trace wouldn't be very
useful. I may be able to get
I probably can't get a stack trace, but I was able to reproduce it with
just that function. Without the function, pg_dump works fine. I can DROP
the function, pg_dump works, then add it back again and pg_dump crashes.
Here are my steps:
initdb -A md5 --no-locale -E UTF8 -U testuser -D
"C:\Users
Sure; the function is created programmatically as part of schema creation,
by the same user who owns (almost) everything else in the database. The
definition looks like this:
CREATE OR REPLACE FUNCTION datastore_unpack(
data_times TIMESTAMP WITH TIME ZONE[],
data_v
I'm seeing pg_dump [8.3.15 Windows] crash reproducibly against a particular
database. Searching the web, I found [
http://grokbase.com/t/postgresql.org/pgsql-general/2001/02/pg-dump-crash/06ss55h5l35jh4bnnqfigxisy534]
with
a response from Tom Lane suggesting that it was probably due to a bug in
pg
On Wed, Nov 16, 2011 at 6:57 PM, Tom Lane wrote:
> They're used for character set encoding conversions, eg when
> database_encoding = UTF8 and client_encoding = LATIN1 (or any other
> non-identical combination).
Thanks, Tom and Craig; that makes perfect sense. I'd rather not assume
anything ab
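Tom's point about non-identical encoding combinations is easy to see from outside Postgres: the same character has different byte representations in UTF8 and LATIN1, so a conversion step is required whenever client and server encodings differ. A quick illustration (plain Python, not a Postgres API):

```python
# One character, two wire representations — hence the conversion libraries.
s = "é"
utf8_bytes = s.encode("utf-8")     # two bytes in UTF-8
latin1_bytes = s.encode("latin-1") # one byte in LATIN1
print(utf8_bytes, latin1_bytes)
```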
I bundle Postgres (8.3.15) with another product as a back-end database. On
Windows, the default build includes a bunch of what appear to be codec
libraries, with names like utf8_and_cyrillic.dll, ascii_and_mic.dll, etc.
But using Microsoft's dependency walker tool, I see no references to any
of
@Julio Leyva: The table does get vacuumed at the end of the maintenance
tasks; in this case it's not making it that far, of course.
@Scott Marlowe: Truncate isn't an option here, unfortunately.
I'm less concerned with the particular query than with the general question
of when a shutdown could ha
I develop an app that uses a back-end Postgres database, currently 8.3.9.
The database is started when the app starts up, and stopped when it shuts
down. Shutdown uses pg_ctl with -m fast, and waits two minutes for the
process to complete. If it doesn't, it tries -m immediate, and waits two
more
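The shutdown sequence described above might look like the following sketch. The function name, the injectable `run` hook, and the exact pg_ctl invocation are illustrative assumptions; only the fast-then-immediate escalation with two-minute waits comes from the description:

```python
import subprocess

def stop_database(datadir, run=subprocess.run):
    """Try a fast shutdown; if it doesn't finish in two minutes,
    escalate to an immediate shutdown (sketch, not the app's real code)."""
    for mode, timeout in (("fast", 120), ("immediate", 120)):
        try:
            result = run(["pg_ctl", "stop", "-D", datadir, "-m", mode, "-w"],
                         timeout=timeout)
        except subprocess.TimeoutExpired:
            continue  # this mode hung; escalate to the next one
        if result.returncode == 0:
            return mode  # report which mode succeeded
    return None  # neither mode completed in time
```

The `run` parameter exists only so the escalation logic can be exercised without a live server.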
I've been doing more testing on several different machines, but still
haven't found a solution to my problem where VACUUM FULL is running out of
memory. Besides the original case on RHEL4, I've been able to reproduce it
on both Windows and OSX, with 3GB and 5GB RAM, respectively. Interestingly,
i
>
> If you actually expect it to be re-used by the database sometime
> later, I would just stick with normal VACUUM (with adequate fsm
It may or may not be used again. Although disk is cheap, it is a
substantial amount of space, and I'd prefer it weren't locked up forever.
For a bit of extra con
On Mon, Dec 14, 2009 at 12:04 PM, Kevin Grittner <
kevin.gritt...@wicourts.gov> wrote:
> I hope you've been following that with a REINDEX every time;
> otherwise you're causing index bloat.
Yes, it REINDEXes afterwards.
> Are these inserts happening in the same table(s) each time? If so,
> what
Hello,
I have a weekly task set up to VACUUM FULL a fairly large (~300M rows, ~50 GB)
database. The intent is to free up disk space from especially-large inserts
that aren't covered by the regular reclamation from a daily VACUUM.
Recently, I've been getting the following error:
(OperationalError)
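The weekly task described in this thread — VACUUM FULL followed by a REINDEX, since VACUUM FULL on 8.3 compacts the heap but bloats indexes — could be sketched like this. The function, cursor interface, and table names are placeholders, not the actual maintenance code:

```python
def weekly_maintenance(cur, tables):
    """For each table: VACUUM FULL to reclaim space, then REINDEX to
    undo the index bloat VACUUM FULL causes (sketch; cursor is any
    DB-API cursor on an autocommit connection, since VACUUM cannot
    run inside a transaction block)."""
    statements = []
    for table in tables:
        for stmt in (f"VACUUM FULL {table}", f"REINDEX TABLE {table}"):
            cur.execute(stmt)
            statements.append(stmt)
    return statements
```

With psycopg2, the connection would need autocommit (isolation level 0) before issuing VACUUM.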
Thank you for the fast response! Your question prompted me to check our
configure options (something I should have done originally). For reasons
unknown to me, we have been building with --disable-largefile on some
systems, including RHEL4. That obviously goes a long way towards explaining
the b
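The connection to the 2 GB dump failures is direct: building with --disable-largefile leaves file offsets at 32 signed bits, so no file operation can cross 2**31 bytes. A quick sanity check of that boundary:

```python
# With --disable-largefile, off_t is a signed 32-bit integer, so the
# largest addressable file offset is 2**31 - 1 bytes — just under 2 GiB.
MAX_OFFSET = 2**31 - 1
GIB = 1 << 30
print(MAX_OFFSET)            # 2147483647
print(MAX_OFFSET // GIB)     # 1 full GiB boundary short of 2 GiB
```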
I develop a piece of software that uses PostgreSQL (8.3.5) as a back-end
database. The software can, optionally, use pg_dump to create snapshots of
the database. One user has run into a problem where pg_dump dumps 2GB, then
claims that the archive is too large. I haven't yet found documentation