[EMAIL PROTECTED] wrote:
Is there a way to do a selective pg_dump of a table using a select
where clause? As in ... dump all records where now() - col_date > 2 or
col_date='01-may-2006' etc.. ?
You can transfer the subset of data to another table, e.g.,
create table my_dump as select * from
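Expanding on that approach (a sketch; `my_table`, the WHERE condition, and the database name `mydb` are placeholders): materialize the subset into a scratch table, dump only that table with pg_dump -t, then drop it.

```sql
-- Materialize just the rows you want to dump.
CREATE TABLE my_dump AS
    SELECT *
      FROM my_table
     WHERE col_date = '01-may-2006';

-- Then, from the shell, dump only the scratch table:
--   pg_dump -t my_dump mydb > subset.sql

-- Clean up when done.
DROP TABLE my_dump;
```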
On behalf of a previous employer who, due to major downsizing, is left
without an answer to this:
What's the best way to treat impending OID wrap-around in a 7.4.8
database? This DB has been online for about 18 months, and is expected
to hit the dreaded wrap-around in about a month. At an
, so
it gets fooled?)
[And here's the remaining question in my puzzled mind: ANALYZE would not
change the reltuples value, but VACUUM FULL ANALYZE did. Er-wha?]
--
Jeff Boes Vox 269-226-9550 x24
Director of Software Development Fax 269-349
of the vacuumdb command's execution?
--
Jeff Boes vox 269.226.9550 ext 24
Database Engineer fax 269.349.9076
Nexcerpt, Inc. http://www.nexcerpt.com
...Nexcerpt... Extend your Expertise
We have a number of tables from which we expire (delete) rows nightly.
Is there any value to ordering the operations, in terms of either table
physical organization or elapsed time?
E.g.,
DELETE FROM foo WHERE date_expires < now();
VACUUM ANALYZE foo;
CLUSTER foo;
REINDEX TABLE foo;
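One hedged observation on the ordering above: CLUSTER rewrites the table in index order and rebuilds its indexes as a side effect, so following it with a separate REINDEX (or preceding it with VACUUM FULL) is largely redundant work. A leaner nightly sequence might look like this (same table name as above; CLUSTER with no index name assumes the table has been clustered before):

```sql
DELETE FROM foo WHERE date_expires < now();
CLUSTER foo;     -- reclaims the deleted space, restores physical order,
                 -- and rebuilds foo's indexes in the process
ANALYZE foo;     -- refresh planner statistics after the rewrite
```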
How
the same issue as having a sequence
number on a table, but if I'm interpreting all this correctly, the OID
wrap-around is going to occur a lot sooner than my table sequence number
wrap-around.)
--
We had a puzzling situation occur last weekend. Subsequently, I figured out how
to work around it for now, but of course those who sign my checks want to know
how we can rule out, once and for all, the possibility of something like that
ever happening again ...
The OID value for large objects crossed
file /usr/local/pgsql/data/pg_xlog/004100EE (log
file 65, segment 238): No such file or directory
(There are other problems that we are still working to correct, but this is the
one that I don't understand.)
--
Jeff Boes | Genius may have its limitations, but stupidity is not thus handicapped.
see anything that will allow me to do that (other
than dropping and recreating the table).
--
the large object OID after it is created.
I don't know if I have permission to post or email the script, but if you
contact me offline I should know by then.
jboes at n/e/x/c/e/r/p/t/d/o/t/c/o/m
--
table, and
later tried to analyze it. The table was gone by then, and this caused
the daemon to exit. As this happened on a Sunday morning, my weekend
experiment to see how pg_autovacuum would maintain our test database was
rather spoiled ... 8-(
--
What causes the contents of pg_listener to linger long past the lifespan of
the associated database backend? My table has rows that I know must be days old.
--
[EMAIL PROTECTED
Tom Lane wrote:
Jeff Boes [EMAIL PROTECTED] writes:
What causes the contents of pg_listener to linger long past the lifespan of
the associated database backend? My table has rows that I know must be days old.
PG version?
7.3.4
In any case, if there is a row for a dead backend
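For 7.3-era servers, a common manual cleanup (a sketch, not an official recipe: run it as superuser, and only after confirming the listed backends really are gone) is to delete pg_listener rows whose PID no longer appears among live backends:

```sql
-- listenerpid is the PID of the backend that issued LISTEN;
-- pg_stat_activity.procpid (the pre-9.2 column name) lists live backends.
DELETE FROM pg_listener
 WHERE listenerpid NOT IN (SELECT procpid FROM pg_stat_activity);
```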
Our largest (by total byte count) table has a nearly-equal flow of data in and
out on a daily basis (that is, we add some 40k rows during the day, and then
every night expire stuff that is timestamped beyond our cutoff, which results in
about 40k deletions).
After the deletions, the table gets
Ah, I should have added: we are using PG 7.3.4.
---(end of broadcast)---
TIP 6: Have you searched our list archives?
http://archives.postgresql.org
At some point in time, [EMAIL PROTECTED] (Tom Lane) wrote:
Jeff Boes [EMAIL PROTECTED] writes:
The Keep and UnUsed numbers seem high to me, compared to other tables. Can
anyone interpret these and tell me anything about what we could do with this
table to make it look better?
Keep
in tracking down which process was doing what at the time.
--
Tom Lane wrote:
Jeff Boes [EMAIL PROTECTED] writes:
We are experiencing the following error, usually during our nightly
delete-and-vacuum cycle (when there are very few other connections to
the database):
2003-10-30 01:36:59 [25392] LOG: server process (pid 697) was
terminated
SIGALRM is
used, so I'll start looking there. But if you or anyone else thinks of
anything, let me know ...
[How would a plperl function that changes the local behavior of SIGALRM
affect the backend?]
--
)?
And, absent pat answers to this, is there anything out there in PG-land
that documents partial indexes, and when to use them?
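For what it's worth, a partial index is simply CREATE INDEX with a WHERE clause; the planner considers it only for queries whose own conditions imply the index predicate. A minimal sketch with hypothetical names:

```sql
-- Index only the not-yet-expired rows; queries that also say
-- "AND expired = false" can use this much smaller index.
CREATE INDEX ix_foo_live ON foo (date_expires)
    WHERE expired = false;
```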
--
One action we have consciously not done is REINDEX on the
table. We want to avoid that as far as possible.
Why? It's usually a very painless step, unless the table has millions and
millions of rows. We reindex tables with multiple indexes and several million
rows on a weekly basis. The only
differences whether you
VACUUM FULL or not?
--
that
ReceiveSharedInvalidMessages: cache state reset
has something to do with this idle in transaction state. Could the
incidence of these be used as a measurement?
--
it be theoretically faster to do
REINDEX TABLE foo;
or
REINDEX INDEX ix_foo_one;
and
REINDEX INDEX ix_foo_two;
in another session? (The system in question has two CPUs, if that makes
any difference.)
--
On Thu, 2003-06-12 at 12:42, Tom Lane wrote:
Jeff Boes [EMAIL PROTECTED] writes:
For large (>1 million rows) tables
which have a pretty high turn-over (average life span of a row is 3
days), should there be any query performance differences whether you
VACUUM FULL or not?
How often do
, and it
uses the index.
And of course it stays at that point, even if another 10,000 rows get
inserted with current timestamps, until it's ANALYZEd again. So two or
three ANALYZEs per hour is not excessive, if it will keep the index
usable under the right circumstances.
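The scheduled re-analyze can be as small as a single per-table statement run from cron (table name from the thread); it samples rows rather than scanning everything, so it is cheap compared to a vacuum:

```sql
-- Updates pg_statistic so the planner sees the current maximum
-- of the timestamp column and keeps choosing the index.
ANALYZE foo;
```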
--
references for foreign
keys -- code tables).
--
/project/pgavd/projdisplay.php
I started to, but haven't had the time to really dig into it.
--
equivalents. Thanks
In the HTML documentation possibly installed on your system when
PostgreSQL was installed. Failing there, it's found here:
http://www.postgresql.org/idocs/index.php?catalogs.html
--
also do:
\t
SELECT 'Now displaying cities...';
\t
SELECT * FROM cities;
--
On Tue, 2002-10-22 at 17:02, Tom Lane wrote:
[EMAIL PROTECTED] (Jeff Boes) writes:
We have a few tables that seem to have suffered some kind of
corruption in their TOAST files. Symptoms include:
+ Runaway CPU usage during VACUUM ANALYZE (load average
spikes of 20x normal for 10-15
On Mon, 14 Oct 2002 11:13:46 -0400, Jeff Boes wrote:
ERROR: cannot open segment 1 of relation table_D (target block
2337538109): No such file or directory
and for table B:
NOTICE: Child itemid in update-chain marked as unused - can't continue
repair_frag
ERROR: cannot open segment 3
installation running queries against these tables; could an open
read-only transaction cause problems like this?
--
files
take less time than 7.1's approach of deleting and re-creating? How can
I determine if I have enough WAL files allocated? Should we consider
moving the pg_xlog directory to another physical disk? (At present it
points to a partition on a RAID-striped group of disks.)
--
often enough? Is it
different in 7.1 and 7.2?
--
Our database has about 70 tables. Only a half-dozen or so have more
than 100K rows, and all of these change (99% inserts, 10K-50K rows)
during an average day.
VACUUM ANALYZE takes over an hour, and it's edging up by a couple
minutes per day.
Is there a way to look at the DEBUG stats from the
. If you are running an old kernel you may wish
to upgrade soon.
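One way to get per-table detail out of the run (a sketch; the table name is a placeholder) is the VERBOSE form, which reports pages and tuples examined for each table and index as it goes:

```sql
VACUUM VERBOSE ANALYZE my_big_table;
-- or for the whole database, from the shell:
--   vacuumdb --verbose --analyze mydb
```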
--
Does anyone have a pointer to info on what kinds of things cause entries
in pg_trigger? We're trying to understand why some tables have a
half-dozen corresponding entries, and some have 30.
--
? Is there an incompatibility in the binaries from the two
sources of RPMs?
--
or more of these in a day. Are
these filling up (and possibly messing up) our WALs? We have experienced
a few shared-memory corruption errors, and a full-blown corruption
(although of a different table).
--
In article 9sphq7$k0h$[EMAIL PROTECTED], Jeff Boes [EMAIL PROTECTED]
wrote:
and 2) very long COMMIT times for some long transactions: I'm talking
about upwards of 10-20 MINUTES to commit after doing hundreds of
inserts, updates and deletes in one transaction. The table involved has
some 20K
In article [EMAIL PROTECTED],
Stephan Szabo [EMAIL PROTECTED] wrote:
How many rows are in the table? Have you run vacuum analyze?
Sorry, that information was in the original post, but perhaps you missed
it:
In article 9shhnf$23ks$[EMAIL PROTECTED], Jeff Boes [EMAIL PROTECTED]
wrote:
We
).
--