On Thu, 11 Mar 2004, Christopher Kings-Lynne wrote:
You could also consider not using syslog at all: let the postmaster
output to its stderr, and pipe that into a log-rotation program.
I believe some people use Apache's log rotator for this with good
results.
Not an option, I'm afraid.
Arjen van der Meijden [EMAIL PROTECTED] writes:
Of course I wanted to know how long it'd take on PostgreSQL; selecting
only the pkey field (without the CASE) also took some 0.7 seconds
(selecting the entire table may have taken somewhat longer).
But the CASE version took 9026139.201 ms, i.e. over 9000 seconds.
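The query in question was shaped roughly like this (an illustrative
sketch only: the real CASE had thousands of arms, and the table name
and values here are invented):

  SELECT CASE pkey
           WHEN 1 THEN 17
           WHEN 2 THEN 42
           -- ... thousands more WHEN arms in the real query ...
           ELSE NULL
         END AS new_id
    FROM bigtable;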
Arjen van der Meijden [EMAIL PROTECTED] writes:
Well, I have discarded this type of query as too inefficient and found a
better way
Loading the mapping into a table with an index and doing an
UPDATE ... FROM to do the join seems likely to end up being the most
efficient method. Postgres would ...
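A sketch of that approach, borrowing the cust/tempprod names that
appear in the EXPLAIN output below (the column types and the load step
are assumptions; the plan shows the join keys cast to text):

  CREATE TABLE tempprod (did text, prodid text);
  -- ... load the old-id -> new-id mapping, e.g. with COPY ...
  CREATE INDEX ix_did ON tempprod (did);
  ANALYZE tempprod;

  UPDATE account.cust
     SET prodid = tempprod.prodid
    FROM tempprod
   WHERE tempprod.did = cust.origid;

The EXPLAIN further down is essentially this query with the FROM
clause left implicit, which PostgreSQL of this era would add
automatically.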
Greg Stark wrote:
Arjen van der Meijden [EMAIL PROTECTED] writes:
Was this the select with the CASE, or the update?
It was just the select, to see how long it'd take. I'd already
anticipated that it might be a slow query, so I only did the select
first.
Best regards,
Arjen van der Meijden
On Fri, Mar 12, 2004 at 06:25:48PM -0500, Anjan Dave wrote:
We upgraded from 8GB to 12GB RAM a month or so ago, but even in the
past I've never seen the system exhaust its system cache (~6GB, in
'top') while it's swapping.
Someone had mentioned: why not have the entire DB in memory?
We're in the throes of an MS SQL to PostgreSQL migration; our databases
include a number of ~5M row tables. We decided to take this opportunity
to clean up and slightly re-normalize our schemas, given what we've
learned about the data over its lifetime and such; otherwise we
wouldn't be experiencing ...
Arjen van der Meijden [EMAIL PROTECTED] writes:
[ huge CASE is pretty slow ]
I did some profiling of the test case that Arjen was kind enough to send
me. It seems there are two distinct problems. One is that the parser
uses repeated lappend()'s to construct the list of CASE arms; this
makes building the list O(N^2) in the number of arms ...
Joe Conway wrote:
A few pg_autovacuum questions came out of this:
First, the default vacuum scaling factor is 2, which I think implies
the big table should only get vacuumed every 56 million or so changes.
I didn't come anywhere near that volume in my tests, yet the table did
get ...
# explain update account.cust set prodid = tempprod.prodid
  where tempprod.did = origid;
 Merge Join  (cost=0.00..232764.69 rows=4731410 width=252)
   Merge Cond: (("outer".origid)::text = ("inner".did)::text)
   ->  Index Scan using ix_origid on cust  (cost=0.00..94876.83 ...
Bulk updates are generally dogs (not just in pg), so I avoid doing them by
doing faster selects and inserts. You can create a new table using 'create
table as' to produce your target results. This is real fast - avoiding the
row iteration in insert, allowing the select optimizer to run and no ...
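A sketch of that create-table-as pattern with this thread's table
names (the swap step and the "remaining columns" are assumptions; a
real run would list every column of cust and re-create its indexes and
constraints):

  CREATE TABLE account.cust_new AS
  SELECT c.origid,
         COALESCE(t.prodid, c.prodid) AS prodid  -- unmatched rows keep the old value
         -- plus the remaining cust columns
    FROM account.cust c
    LEFT JOIN tempprod t ON t.did = c.origid;

  CREATE INDEX ix_origid_new ON account.cust_new (origid);
  -- then, in one transaction: DROP TABLE account.cust;
  -- ALTER TABLE account.cust_new RENAME TO cust;

The LEFT JOIN plus COALESCE mimics the UPDATE's behavior for rows that
have no match in tempprod; a plain JOIN would silently drop them.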
On Tue, 2004-03-16 at 07:28, Matt Davies wrote:
This is the preferred method, but you could create a memory disk if
running Linux. This has several caveats, though:
1. You may have to recompile the kernel for support.
2. You must copy the database back to a hard-drive partition before
   reboots, since a memory disk loses its contents when the machine
   goes down.
You can create a new table using 'create table as' to produce your
target results. This is real fast ...
I often see 2 orders of magnitude improvement doing this, and no
need to vacuum.
Indeed:
Query returned successfully with no result in 582761 ms.
Though I must say, ten minutes is ...
Rosser Schwarz [EMAIL PROTECTED] writes:
You can create a new table using 'create table as' to produce your
target results. This is real fast ...
I often see 2 orders of magnitude improvement doing this, and no
need to vacuum.
Indeed:
Query returned successfully with no result in 582761 ms.
The original point was about a very slow update of an entire table with a
plan that looped, and over a dozen conditional indices - vs. a 'create as'
in a CPU-starved environment. I stand by my statement about observing
the orders-of-magnitude difference. In theory I agree that the update
should be ...
Matthew T. O'Connor wrote:
I think you understand correctly. A table with 1,000,000 rows should
get vacuumed approx every 2,000,000 changes (assuming default values for
-V). FYI, an insert and a delete each count as one change, but an
update counts as two.
Unfortunately, running with -d2 ...
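For reference, pg_autovacuum's vacuum threshold works out as roughly
base value + scaling factor * rows (the base defaults to something on
the order of 1000), so with -V 2 a 1,000,000-row table is vacuumed
after about 1000 + 2 * 1,000,000 ≈ 2,000,000 changes; by the same
arithmetic, Joe's ~56-million figure would correspond to a table of
roughly 28 million rows.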
Joe Conway wrote:
Yeah, I'm sure. Snippets from the log:
[...lots-o-tables...]
[2004-03-14 12:44:48 PM] added table: specdb.public.parametric_states
[2004-03-14 12:49:48 PM] Performing: VACUUM ANALYZE
public.transaction_data
[2004-03-14 01:29:59 PM] Performing: VACUUM ANALYZE ...
Tom Lane wrote:
Joe Conway [EMAIL PROTECTED] writes:
Any idea where I can get my hands on the latest version? I found the
original post from Tom, but I thought there was a later version with
both number of pages and time to sleep as knobs.
That was as far as I got. I think Jan posted a more ...
Matthew T. O'Connor wrote:
Strange... I wonder if this is some integer overflow problem. There
was one reported recently and fixed as of CVS head yesterday; you
might try that. However, without the -d2 output I'm only guessing at
why pg_autovacuum is vacuuming so much / so often.
I'll see what ...
Rosser Schwarz wrote:
shared_buffers = 4096
sort_mem = 32768
vacuum_mem = 32768
wal_buffers = 16384
checkpoint_segments = 64
checkpoint_timeout = 1800
checkpoint_warning = 30
commit_delay = 5
effective_cache_size = 131072
You didn't mention the OS, so I'd take it to be either Linux or FreeBSD.
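For scale, on PostgreSQL of this era shared_buffers, wal_buffers, and
effective_cache_size are counted in 8 kB pages, so shared_buffers =
4096 is about 32 MB, wal_buffers = 16384 about 128 MB, and
effective_cache_size = 131072 about 1 GB; sort_mem and vacuum_mem are
in kB (32768 ≈ 32 MB per sort), checkpoint_timeout and
checkpoint_warning are in seconds, and commit_delay is in
microseconds.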