Re: [PERFORM] Avoid long-running transactions in a long-running stored procedure?

2008-02-14 Thread David Crane
Thanks for the prompt replies! It sounds like these are variations of the same approach. In our case, we need to do a lot of comparing against the old data, audit tables and so forth, so the bulk of the work is in the body of the existing loop (already coded). So I think keeping that loop body ...

Re: [PERFORM] Avoid long-running transactions in a long-running stored procedure?

2008-02-14 Thread Ow Mun Heng
On Thu, 2008-02-14 at 17:29 -0800, Josh Berkus wrote:
> David,
> > Once per quarter, we need to load a lot of data, which causes many updates across the database. We have an online transaction processing-style application, which we really want to stay up during the update job.
...

Re: [PERFORM] Avoid long-running transactions in a long-running stored procedure?

2008-02-14 Thread Josh Berkus
David,
> Once per quarter, we need to load a lot of data, which causes many updates across the database. We have an online transaction processing-style application, which we really want to stay up during the update job.
What you're talking about is "autonomous transactions". There's someone ...
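
For context: PostgreSQL of this era (8.3) has no built-in autonomous transactions, and a plain PL/pgSQL function cannot commit partway through its loop. One workaround often mentioned on the lists is to push each batch through a separate connection via contrib/dblink, so it commits independently of the calling session. A minimal sketch, assuming contrib/dblink is installed; the connection string and the process_batch() wrapper are hypothetical:

    SELECT dblink_connect('loader', 'dbname=mydb');
    -- Each dblink_exec runs on its own connection and commits on its own,
    -- independent of any transaction open in the calling session.
    SELECT dblink_exec('loader', 'SELECT process_batch(1, 50000)');
    SELECT dblink_exec('loader', 'SELECT process_batch(50001, 100000)');
    SELECT dblink_disconnect('loader');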

[PERFORM] Avoid long-running transactions in a long-running stored procedure?

2008-02-14 Thread David Crane
Once per quarter, we need to load a lot of data, which causes many updates across the database. We have an online transaction processing-style application, which we really want to stay up during the update job. The programmer coded a stored procedure which does the job well ... logically. But ...
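
A related approach, sketched here under the assumption that the rows to be processed can be carved into key ranges: since a PL/pgSQL function of this era cannot commit inside its own loop, the loading script can call the existing loop body once per slice, so each call is its own short transaction and the OLTP traffic never queues behind one quarter-long transaction. update_slice() is a hypothetical wrapper around the already-coded loop body:

    -- Issued with autocommit from the loading script, each statement
    -- commits separately and releases its locks quickly.
    SELECT update_slice(     1,  50000);
    SELECT update_slice( 50001, 100000);
    SELECT update_slice(100001, 150000);
    -- ...continue until all slices are processed; a VACUUM between
    -- slices can help keep table bloat in check during the load.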

Re: [PERFORM] Query slows after offset of 100K

2008-02-14 Thread Tom Lane
Greg Smith <[EMAIL PROTECTED]> writes:
> On Thu, 14 Feb 2008, Michael Lorenz wrote:
>> When offsetting up to about 90K records, the EXPLAIN ANALYZE is similar to the following:
>> Limit (cost=15357.06..15387.77 rows=20 width=35) (actual time=19.235..19.276 rows=20 loops=1)
>>   -> Index Scan ...
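
To reproduce the comparison being made in this thread, the same query can be timed at a small and a large offset; the backend still has to read and discard every skipped index entry, so the actual time grows with the offset. An illustrative sketch only (the filter values are made up; the full query appears later in the thread):

    EXPLAIN ANALYZE
    SELECT o.objectid, o.objectname FROM "object" o
    WHERE o.accountid = 111 ORDER BY o.objectname LIMIT 20 OFFSET 1000;

    EXPLAIN ANALYZE
    SELECT o.objectid, o.objectname FROM "object" o
    WHERE o.accountid = 111 ORDER BY o.objectname LIMIT 20 OFFSET 100000;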

Re: [PERFORM] Query slows after offset of 100K

2008-02-14 Thread Greg Smith
On Thu, 14 Feb 2008, Michael Lorenz wrote:
When offsetting up to about 90K records, the EXPLAIN ANALYZE is similar to the following:
Limit (cost=15357.06..15387.77 rows=20 width=35) (actual time=19.235..19.276 rows=20 loops=1)
  -> Index Scan using account_objectname on "object" o ...

Re: [PERFORM] Join Query Perfomance Issue

2008-02-14 Thread Chris
Nested Loop (cost=0.00..31157.91 rows=3054 width=14) (actual time=0.252..149.557 rows=2769 loops=1)
  -> Index Scan using messungen_v_dat_2007_11_12_messpunkt_minute_tag_idx on messungen_v_dat_2007_11_12 m (cost=0.00..5134.28 rows=3054 width=4) (actual time=0.085..11.562 rows=2769 loops=1)
...

Re: [PERFORM] Query slows after offset of 100K

2008-02-14 Thread Tom Lane
Michael Lorenz <[EMAIL PROTECTED]> writes:
> Fair enough, and I did think of this as well. However, I didn't think this was a viable option in my case, since we're currently allowing the user to randomly access the pages (so $lastkey wouldn't really have any meaning). The user can choose ...

Re: [PERFORM] Query slows after offset of 100K

2008-02-14 Thread Mark Lewis
Michael, Our application had a similar problem, and what we did to avoid having people click into the middle of 750k records was to show the first page with forward/back links but no link to go to the middle. So people could manually page forward as far as they want, but nobody is going to sit there ...

Re: [PERFORM] Query slows after offset of 100K

2008-02-14 Thread Michael Lorenz
Fair enough, and I did think of this as well. However, I didn't think this was a viable option in my case, since we're currently allowing the user to randomly access the pages (so $lastkey wouldn't really have any meaning). The user can choose to sort on object ID, name or modification time, ...

Re: [PERFORM] Query slows after offset of 100K

2008-02-14 Thread Tom Lane
Michael Lorenz <[EMAIL PROTECTED]> writes:
> My query is as follows:
> SELECT o.objectid, o.objectname, o.isactive, o.modificationtime
> FROM object o
> WHERE ( o.deleted = false OR o.deleted IS NULL )
> AND o.accountid = 111
> ORDER BY 2
> LIMIT 20 OFFSET 1;
This is guaranteed to ...
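
The alternative usually suggested for this access pattern is keyset ("last seen key") pagination: instead of skipping rows with OFFSET, remember the sort key of the last row on the current page and start the next page from there, so the index scan begins where the previous page ended. A sketch using the columns from the query above; $lastname and $lastid are placeholders the application would supply, and an index on (accountid, objectname, objectid) is assumed:

    SELECT o.objectid, o.objectname, o.isactive, o.modificationtime
    FROM object o
    WHERE ( o.deleted = false OR o.deleted IS NULL )
      AND o.accountid = 111
      AND (o.objectname, o.objectid) > ($lastname, $lastid)  -- last row of the previous page
    ORDER BY o.objectname, o.objectid
    LIMIT 20;

This keeps each page roughly constant-cost, at the price of supporting only next/previous navigation rather than jumping to an arbitrary page number, which is exactly the trade-off discussed elsewhere in this thread.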

[PERFORM] Query slows after offset of 100K

2008-02-14 Thread Michael Lorenz
Hi all, I've been reading through the performance list of the last few months, and haven't been able to find a solution to my problem yet, so I'm posting the specifics here now. If anyone can suggest what might work (or point me to where this has been covered before), that would be great. My ...

Re: [PERFORM] Creating and updating table using function parameter reference

2008-02-14 Thread Albert Cervera Areny
You need the string concatenation operator ||. Take a look at http://www.postgresql.org/docs/8.3/static/functions-string.html
By the way, this is off-topic on this list; please post general non-performance questions to pgsql-general.
On Thursday 14 February 2008 13:35, Linux Guru wrote:
> I still ...
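
For reference, || is simply the SQL string concatenation operator; a dynamic command has to be assembled as text (typically with quote_ident() around identifiers) and then run with EXECUTE inside PL/pgSQL. A trivial illustration with a made-up table name:

    -- || concatenates text values; quote_ident() safely quotes an identifier.
    SELECT 'select product from ' || quote_ident('some_table');
    -- result: 'select product from some_table'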

Re: [PERFORM] Creating and updating table using function parameter reference

2008-02-14 Thread Linux Guru
I still cannot pass tablename, what is wrong? Is this the right way?
CREATE OR REPLACE FUNCTION test ( t1 text, t2 text ) RETURNS numeric AS $$
declare temp1 text;
declare temp2 text;
declare cmd text;
declare t2row RECORD;
begin
temp1=t1;
temp2=t2;
cmd='select product, (case when su ...
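
For comparison, a minimal sketch of how a table name passed as a parameter is normally handled: build the command as a string with || and quote_ident(), then run it with EXECUTE, since a variable embedded in a plain SQL string is never substituted. The table and column names below are placeholders, not the poster's real schema:

    CREATE OR REPLACE FUNCTION sum_for_table(tbl text) RETURNS numeric AS $$
    DECLARE
        cmd    text;
        result numeric;
    BEGIN
        -- Splice the table name into the command text, quoting the identifier.
        cmd := 'SELECT sum(amount) FROM ' || quote_ident(tbl);
        EXECUTE cmd INTO result;   -- run the dynamically built statement
        RETURN result;
    END;
    $$ LANGUAGE plpgsql;

    -- Usage: SELECT sum_for_table('some_table');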

Re: [PERFORM] Join Query Perfomance Issue

2008-02-14 Thread Thomas Zaksek
Scott Marlowe wrote: Yeah, it didn't help. I was expecting the query planner to switch to a more efficient join plan. Try setting it higher for JUST THIS query, i.e. set work_mem=128M; explain analyze select and see how that runs. Then play with it til you've got it down to what ...
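
The advice above works because work_mem can be changed per session, or per transaction with SET LOCAL, so a larger sort/hash budget can be tried for one problem query without touching the server-wide setting. A sketch, with the query itself left as a placeholder:

    -- Raise work_mem for the current session only:
    SET work_mem = '128MB';
    -- (run EXPLAIN ANALYZE on the problem join here)

    -- Or scope the change to a single transaction:
    BEGIN;
    SET LOCAL work_mem = '128MB';
    -- (run EXPLAIN ANALYZE on the problem join here)
    COMMIT;

    -- Return to the server default afterwards:
    RESET work_mem;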

Re: [PERFORM] Anyone using a SAN?

2008-02-14 Thread Greg Stark
Tobias Brox wrote: [Peter Koczan - Wed at 10:56:54AM -0600] The consensus on this list seemed to be that running postgres on a SAN is not cost efficient - one would get better performance for a lower cost if the database host is connected directly to the disks - and also, buying the wrong SAN ...