So, I have tried to run the following command. The command never finishes (I gave up after about an hour and a half!).
Did you ever find out what the problem was?
Perhaps you needed to run a vacuum full on the table?
Nope.
I just gave up in the end and left it with NULL as the default value.
Hello again.
I have to track user subscriptions to certain mailinglists, and I also need to track the credits users have on those mailinglists. On one side I have procedures that add credits; on the other side I have procedures that subtract available credits. Add/subtract is pretty intensive, around
Title: pg_dump issue
Good morning,
I have identical postgres installations running on identical machines: Dual Core AMD Opteron(tm) Processor 870, 16GB RAM, Red Hat Linux 3.2.3-20, and 120GB worth of disk space on two drives.
Recently, I have noticed that my nightly backups take longer on
Mario Splivalo [EMAIL PROTECTED] writes:
Here is the query which gets information on particular user, shows
subscriptions to mailinglists and available credits on those
mailinglists:
SELECT u.subscription_id, u.user_id, u.mailinglist_id, u.valid_from,
u.valid_to, sum(credits.credits_given -
mcelroy, tim [EMAIL PROTECTED] writes:
I have identical postgres installations running on identical machines: Dual Core AMD Opteron(tm) Processor 870, 16GB RAM, Red Hat Linux 3.2.3-20, and 120GB worth of disk space on two drives.
Recently, I have noticed that my nightly backups take longer
On Tue, 2006-05-30 at 11:05 -0400, Tom Lane wrote:
Do you have realistic test data? The EXPLAIN shows that this is pulling
275366 of the 826032 rows in the two tables, which seems like rather a
lot for a single user. If it's reasonable that the query needs to fetch
one-third of the data,
Title: RE: [PERFORM] pg_dump issue
Thanks Tom. I have scheduled vacuums as follows and all have run without error.
Mon - Thu after-hours: vacuumdb -z -e -a -v
On Fridays I add the -f option: vacuumdb -z -e -a -v -f
The du . -h in $PGDATA showed PROD001 at 9.1G and Prod0002 at 8.8G so
Jonathan Blitz writes:
I just gave up in the end and left it with NULL as the default value.
Could you do the updates in batches instead of trying to do them all at
once?
Have you done a vacuum full on this table ever?
There were, in fact, over 2 million rows in the table rather than
Could you do the updates in batches instead of trying to do them all at
once?
Nope. Didn't think it would make any difference.
Have you done a vacuum full on this table ever?
Many times
What hardware?
I have a dual CPU opteron with 4GB of RAM and 8 disks in RAID 10 (SATA).
Doing
Title: RE: [PERFORM] pg_dump issue
I did carry it down to the subdirectory level but only included the total for brevity. I'll paste the complete readout at the end of the email. I'll try the vmstat 1 as you suggest next time the backups run. If the Eng staff finds anything I'll post the
Jonathan Blitz writes:
Nope. Didn't think it would make any difference.
May be worth a try.
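For what it's worth, the batching idea can be sketched like this. This is only an illustration of the pattern, using Python's built-in sqlite3 module as a stand-in for a real Postgres connection, with a made-up table `t` and column `flag`; on Postgres you would also want to VACUUM between batches so the dead row versions each batch leaves behind get reclaimed:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, flag INTEGER)")
# 10,000 rows whose flag starts out NULL, standing in for the big table
conn.executemany("INSERT INTO t (id) VALUES (?)", [(i,) for i in range(10000)])
conn.commit()

BATCH = 1000
while True:
    # Update at most BATCH rows per transaction instead of all rows at once.
    cur = conn.execute(
        "UPDATE t SET flag = 0 WHERE id IN "
        "(SELECT id FROM t WHERE flag IS NULL LIMIT ?)", (BATCH,))
    conn.commit()  # commit each batch so no single transaction touches every row
    if cur.rowcount == 0:
        break  # nothing left to update
```

The point is simply that each transaction stays small, so a failure partway through loses one batch rather than ninety minutes of work.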
I am using a laptop :).
Pentium 4 (not 4M) with 1GB of memory - 2 GHz.
Most laptop drives are only 5,400 RPM, which would likely make a transaction like the one you are describing take a while.
Must do
On May 29, 2006, at 7:11 AM, Markus Schaber wrote:
One big transaction every 5 minutes using 'COPY FROM' (instead of inserts).
Are you using COPY table FROM '/path/to/file', having the file sitting on the server, or COPY table FROM STDIN or psql \copy, having the file sitting on the client?
Most laptop drives are only 5,400 RPM which would make a transaction like
you are describing likely take a while.
Not sure what mine is, but it is new(ish).
No desktop at home you could try it on?
I think the problem with the laptop is likely its drive.
I suppose I could do but I need
On Tue, 2006-05-30 at 16:04, Jonathan Blitz wrote:
Most laptop drives are only 5,400 RPM which would make a transaction like
you are describing likely take a while.
Not sure what mine is, but it is new(ish).
No desktop at home you could try it on?
I think the problem with the
Most laptop drives are only 5,400 RPM, which would likely make a transaction like you are describing take a while.
Not sure what mine is, but it is new(ish).
If you're doing data intensive operations (like a big update which looks
like what you're doing) it will write many megabytes to
Jonathan Blitz writes:
I suppose I could do but I need to install PostgreSQL there and then copy
over the database.
Maybe I will give it a try.
I really think that is your best bet.
If for whatever reason that will not be an option, perhaps you can just let the process run over the weekend.
Steinar H. Gunderson wrote:
[EMAIL PROTECTED] wrote:
The above query takes 5 seconds to execute!
[...]
Total runtime: 96109.571 ms
It sure doesn't look like it...
Total runtime: 461.907 ms
[...]
Suddenly the query takes only 0.29 seconds!
How are you timing this, really?
Hi, I'm a new postgresql user. I wrote an ACO (ant colony optimization) implementation and want to replace GEQO with it in postgres/src/backend/optimizer, but I don't know how to compile and run the source code :( I installed postgresql-8.1.3 and Cygwin but I cannot use them to compile the source code. I want
I want to optimize this simple join:
SELECT * FROM huge_table h, tiny_table t WHERE UPPER( h.id ) = UPPER( t.id )
huge_table has about 2.5 million records, can be assumed as fixed, and
has the following index:
CREATE INDEX huge_table_index ON huge_table( UPPER( id ) );
...while tiny_table
[ I had a problem with my mailer when I first sent this. My apologies
for any repeats. ]
I want to optimize this simple join:
SELECT * FROM huge_table h, tiny_table t WHERE UPPER( h.id ) = UPPER( t.id )
huge_table has about 2.5 million records, can be assumed as fixed, and
has the
On 5/24/06, Tom Lane [EMAIL PROTECTED] wrote:
[EMAIL PROTECTED] writes:
Limit  (cost=19676.75..21327.99 rows=6000 width=84)
  ->  Hash Join  (cost=19676.75..1062244.81 rows=3788315 width=84)
        Hash Cond: (upper((outer.id)::text) = upper((inner.id)::text))
        ->  Seq Scan on huge_table h  (cost=0.00..51292.43
Hi,
I have 2 servers; one of them has a 7.4 postgres and the other has an 8.1.
I have this query:
select fagrempr,fagrdocr,fagrserr,fagrparr
from arqcfat
left join arqfagr on fagrorig = 'CFAT' and fagrdocu = cfatdocu and fagrempe
= cfatempe and fagrseri = cfatseri
where cfatdata between
Another little question ... would using any sort of TEMP table help out, i.e. loading the unaggregated data into a TEMP table, aggregating the data via a SELECT INTO another TEMP table, and then finally INSERT ... SELECT into the master, aggregated, triggered table? It seems like this might be a
I've set up something similar the 'recommended' way to merge data into
the DB, i.e.
http://www.postgresql.org/docs/current/static/plpgsql-control-structures.html#PLPGSQL-ERROR-TRAPPING
however I did it with a trigger on insert, i.e. (not my schema :) ):
CREATE TABLE db (a INT PRIMARY KEY, b
Antonio Batovanja wrote:
Laurenz Albe wrote:
Antonio Batovanja wrote:
I'm having trouble understanding, why a specific query on a small
database is taking so long...
Before I try to understand the execution plans:
Have you run ANALYZE on the tables involved before you ran the query?
Hi,
Antonio Batovanja wrote:
(...)
1) the slow query:
EXPLAIN ANALYZE SELECT DISTINCT ldap_entries.id, organization.id,
text('organization') AS objectClass, ldap_entries.dn AS dn FROM
ldap_entries, organization, ldap_entry_objclasses WHERE
organization.id=ldap_entries.keyval AND
Hi,
Is there a command to insert a record if it does not exist, and update it if it exists?
I do not want to do a select before an insert or update.
I mean, Postgres should test whether the record exists before inserting, and if it exists, do an update instead of an insert.
Thanks,
On 5/30/06, Waldomiro [EMAIL PROTECTED] wrote:
Is there a command to insert a record if it does not exist and update it if it exists?
Sure, it's called MERGE. See http://en.wikipedia.org/wiki/Merge_%28SQL%29
I mean the postgres should test if a record exist before insert and if It
exist
On Tue, 2006-05-30 at 10:26 +1000, Anthony Ransley wrote:
Can any one explain why the following query
select f(q) from
(
select * from times
where '2006-03-01 00:00:00' <= q and q < '2006-03-08 00:00:00'
order by q
) v;
never completes, but splitting up the time span into single
On May 30, 2006, at 3:59 PM, Daniel J. Luke wrote:
I should have gprof numbers on a similarly set up test machine
soon ...
gprof output is available at http://geeklair.net/~dluke/postgres_profiles/
(generated from CVS HEAD as of today).
Any ideas are welcome.
Thanks!
--
Daniel J. Luke
PostgreSQL does not support MERGE at the moment, sorry.
Issue an UPDATE, and watch the rowcount; if the rowcount is 0, issue an INSERT.
Be prepared to retry if another transaction has inserted the row
meanwhile, though.
MERGE would be really useful.
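The UPDATE-then-INSERT dance described above can be sketched as follows. This is purely illustrative: Python's built-in sqlite3 module stands in for a Postgres connection, and the `counters` table is made up. On Postgres, under concurrency, you would additionally catch a unique-violation error from the INSERT and retry, as noted:

```python
import sqlite3

def upsert(conn, key, value):
    # Try the UPDATE first and watch the rowcount.
    cur = conn.execute("UPDATE counters SET n = ? WHERE k = ?", (value, key))
    if cur.rowcount == 0:
        # No row matched: fall back to INSERT. In Postgres this is where a
        # concurrent insert could raise a unique violation, so retry there.
        conn.execute("INSERT INTO counters (k, n) VALUES (?, ?)", (key, value))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE counters (k TEXT PRIMARY KEY, n INTEGER)")
upsert(conn, "hits", 1)   # no row yet -> takes the INSERT path
upsert(conn, "hits", 2)   # row exists -> takes the UPDATE path
print(conn.execute("SELECT n FROM counters WHERE k = 'hits'").fetchone()[0])  # 2
```

Doing the UPDATE first pays off when the row usually exists already; if it is usually absent, trying the INSERT first and falling back to UPDATE saves a round trip.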
SELECT * FROM huge_table h, tiny_table t WHERE UPPER( h.id ) = UPPER( t.id )
What about:
SELECT * FROM huge_table h WHERE UPPER(id) IN (SELECT upper(id) FROM tiny_table t)
Or, try opening a cursor on your original query and using FETCH. It might
result in a different plan.
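As a quick sanity check that the IN-subquery form returns the same rows as the join (assuming the ids in tiny_table are distinct under UPPER(); a join against duplicates would repeat rows, which IN does not), here is a toy demonstration with Python's built-in sqlite3 standing in for Postgres and invented data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE huge_table (id TEXT);
    CREATE TABLE tiny_table (id TEXT);
    INSERT INTO huge_table VALUES ('abc'), ('DEF'), ('ghi'), ('jkl');
    INSERT INTO tiny_table VALUES ('AbC'), ('GHI');
""")

# Original formulation: case-insensitive join.
join_form = conn.execute(
    "SELECT h.id FROM huge_table h, tiny_table t "
    "WHERE UPPER(h.id) = UPPER(t.id) ORDER BY h.id").fetchall()

# Suggested rewrite: IN over a subquery, which may get a different plan.
in_form = conn.execute(
    "SELECT h.id FROM huge_table h "
    "WHERE UPPER(h.id) IN (SELECT UPPER(id) FROM tiny_table) "
    "ORDER BY h.id").fetchall()

print(join_form == in_form)  # True: same rows either way
```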
PFC wrote:
PostgreSQL does not support MERGE at the moment, sorry.
Issue an UPDATE, and watch the rowcount ; if the rowcount is 0,
issue an INSERT.
Be prepared to retry if another transaction has inserted the row
meanwhile, though.
Oh, you mean, like the example
What I do when I'm feeling lazy is execute a delete statement and then
an insert. I only do it when I'm inserting/updating a very small number
of rows, so I've never worried if it's optimal for performance. Besides
I've heard that an update in postgres is similar in performance to a
On Tue, 30 May 2006 17:54:00 -0500
Dave Dutcher [EMAIL PROTECTED] wrote:
What I do when I'm feeling lazy is execute a delete statement and then
an insert. I only do it when I'm inserting/updating a very small number
of rows, so I've never worried if its optimal for performance. Besides
I've
What's explain analyze show?
On Fri, May 26, 2006 at 09:04:56AM -0300, [EMAIL PROTECTED] wrote:
Hi,
I have 2 servers, one of them has a 7.4 postgres and the other has a 8.1
I have this query:
select fagrempr,fagrdocr,fagrserr,fagrparr
from arqcfat
left join arqfagr on fagrorig =
Your best bet is to do this as a single, bulk operation if possible.
That way you can simply do an UPDATE ... WHERE EXISTS followed by an
INSERT ... SELECT ... WHERE NOT EXISTS.
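A toy sketch of that two-statement bulk approach, again with Python's built-in sqlite3 standing in for Postgres and an invented staging/master pair of tables. (On Postgres the first statement would more naturally be written as UPDATE ... FROM staging ...; sqlite needs a correlated subquery instead.)

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE master  (k TEXT PRIMARY KEY, n INTEGER);
    CREATE TABLE staging (k TEXT, n INTEGER);
    INSERT INTO master  VALUES ('a', 1);
    INSERT INTO staging VALUES ('a', 10), ('b', 20);
""")

# 1) UPDATE ... WHERE EXISTS: refresh rows that already exist in master.
conn.execute("""
    UPDATE master SET n = (SELECT s.n FROM staging s WHERE s.k = master.k)
    WHERE EXISTS (SELECT 1 FROM staging s WHERE s.k = master.k)
""")
# 2) INSERT ... SELECT ... WHERE NOT EXISTS: add the rows that are new.
conn.execute("""
    INSERT INTO master (k, n)
    SELECT s.k, s.n FROM staging s
    WHERE NOT EXISTS (SELECT 1 FROM master m WHERE m.k = s.k)
""")
print(conn.execute("SELECT k, n FROM master ORDER BY k").fetchall())
```

Two set-oriented statements like this generally beat a row-at-a-time trigger when the incoming batch is large.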
On Fri, May 26, 2006 at 02:48:20PM -0400, Worky Workerson wrote:
I've set up something similar the 'recommended' way
I'm executing the queries from phpPgAdmin.
The above are for explain analyse. I was referring to the pure query
execution time.
Does anyone have an idea why the OR-query takes so long?
Any server-side tuning possibilities? I wouldn't like to change the code of
ldap's back-sql...
If you're
[EMAIL PROTECTED] writes:
Why is the plan worse in postgres 8.1?
(1) you have not actually shown us that the plan is worse. If you are
complaining that the planner is wrong, EXPLAIN output (which contains
only the planner's estimates) is useless for proving your point. Show