Thanks for the prompt replies!
It sounds like these are variations of the same approach. In our case,
we need to do a lot of comparing against the old data, audit tables and
so forth, so the bulk of the work is in the body of the existing loop
(already coded). So I think keeping that loop body ...
On Thu, 2008-02-14 at 17:29 -0800, Josh Berkus wrote:
> David,
>
> > Once per quarter, we need to load a lot of data, which causes many
> > updates across the database. We have an online transaction
> > processing-style application, which we really want to stay up during the
> > update job.
> ...

David,
> Once per quarter, we need to load a lot of data, which causes many
> updates across the database. We have an online transaction
> processing-style application, which we really want to stay up during the
> update job.
What you're talking about is "autonomous transactions". There's some...

Once per quarter, we need to load a lot of data, which causes many
updates across the database. We have an online transaction
processing-style application, which we really want to stay up during the
update job.
The programmer coded a stored procedure which does the job well ...
logically. But ...
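Josh's pointer above was to "autonomous transactions"; the common workaround in that era was to push each batch through a separate dblink connection, so every chunk commits independently and the OLTP traffic never queues behind one huge transaction. A minimal sketch, assuming contrib/dblink is installed and a hypothetical update_batch(start_id, end_id) function wraps the existing loop body:

-- Sketch only: each dblink_exec() runs and commits on its own
-- connection, independent of the calling session's transaction.
-- 'dbname=mydb' and update_batch(int, int) are hypothetical.
SELECT dblink_connect('loader', 'dbname=mydb');

SELECT dblink_exec('loader',
       'SELECT update_batch(' || g::text || ', ' || (g + 9999)::text || ')')
FROM generate_series(0, 999999, 10000) AS g;

SELECT dblink_disconnect('loader');

Since each chunk commits as it finishes, locks are held only briefly; the trade-off is that a failure mid-run leaves earlier chunks committed, so the job has to be restartable.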
Greg Smith <[EMAIL PROTECTED]> writes:
> On Thu, 14 Feb 2008, Michael Lorenz wrote:
>> When offsetting up to about 90K records, the EXPLAIN ANALYZE is similar to
>> the following:
>> Limit  (cost=15357.06..15387.77 rows=20 width=35) (actual time=19.235..19.276 rows=20 loops=1)
>>   ->  Index Sca...

On Thu, 14 Feb 2008, Michael Lorenz wrote:
When offsetting up to about 90K records, the EXPLAIN ANALYZE is similar to the
following:
Limit  (cost=15357.06..15387.77 rows=20 width=35) (actual time=19.235..19.276 rows=20 loops=1)
  ->  Index Scan using account_objectname on "object" o  (cost=0.0...

Nested Loop  (cost=0.00..31157.91 rows=3054 width=14) (actual time=0.252..149.557 rows=2769 loops=1)
  ->  Index Scan using messungen_v_dat_2007_11_12_messpunkt_minute_tag_idx on messungen_v_dat_2007_11_12 m  (cost=0.00..5134.28 rows=3054 width=4) (actual time=0.085..11.562 rows=2769 loops=1)

Michael Lorenz <[EMAIL PROTECTED]> writes:
> Fair enough, and I did think of this as well. However, I didn't think this
> was a viable option in my case, since we're currently allowing the user to
> randomly access the pages (so $lastkey wouldn't really have any meaning).
> The user can choose ...

Michael,
Our application had a similar problem, and what we did to avoid having
people click into the middle of 750k records was to show the first page
with forward/back links but no link to go to the middle. So people
could manually page forward as far as they want, but nobody is going to
sit there...

Fair enough, and I did think of this as well. However, I didn't think this was
a viable option in my case, since we're currently allowing the user to randomly
access the pages (so $lastkey wouldn't really have any meaning). The user can
choose to sort on object ID, name or modification time, ...

Michael Lorenz <[EMAIL PROTECTED]> writes:
> My query is as follows:
> SELECT o.objectid, o.objectname, o.isactive, o.modificationtime
> FROM object o
> WHERE ( o.deleted = false OR o.deleted IS NULL )
> AND o.accountid = 111
> ORDER BY 2
> LIMIT 20 OFFSET 1;
This is guaranteed t...
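The standard cure for the slow high-OFFSET case discussed in this thread is keyset pagination: remember the sort key of the last row shown and filter on it, so the index scan starts where the previous page ended instead of stepping over every skipped row. A minimal sketch against the query above, assuming an index on (accountid, objectname, objectid) and 8.2+ row-wise comparison; $lastname and $lastid are placeholders for values saved from the previous page:

-- Sketch only: "start after the last row seen" instead of OFFSET.
SELECT o.objectid, o.objectname, o.isactive, o.modificationtime
FROM object o
WHERE ( o.deleted = false OR o.deleted IS NULL )
  AND o.accountid = 111
  AND (o.objectname, o.objectid) > ($lastname, $lastid)
ORDER BY o.objectname, o.objectid
LIMIT 20;

As Michael notes elsewhere in the thread, this only supports next/previous paging - hence the first-page-only UI compromise discussed above.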
Hi all,
I've been reading through the performance list of the last few months, and
haven't been able to find a solution to my problem yet, so I'm posting the
specifics here now. If anyone can suggest what might work (or point me to
where this has been covered before), that would be great. My...

You need the string concatenation operator ||. Take a look at
http://www.postgresql.org/docs/8.3/static/functions-string.html
By the way, this is off-topic on this list; please post general
non-performance questions to pgsql-general.
On Thursday 14 February 2008 13:35, Linux Guru wrote:
> I st...

I still cannot pass tablename, what is wrong?
Is this the right way?
CREATE OR REPLACE FUNCTION test ( t1 text, t2 text ) RETURNS numeric AS $$
declare
    temp1 text;
    temp2 text;
    cmd   text;
    t2row RECORD;
begin
    temp1 := t1;
    temp2 := t2;
    cmd := 'select product, (case when su...
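The missing piece in the function above is EXECUTE: a string assembled with || is only ever a string until it is handed to EXECUTE, and the table name has to be spliced in as text, since parameters cannot stand in for identifiers. A minimal sketch of the pattern, with the function name, sum(amount), and the column list as hypothetical stand-ins for the real query:

-- Sketch only: build the statement as text, run it with EXECUTE,
-- and let quote_ident() protect the identifier.
CREATE OR REPLACE FUNCTION total_for(tablename text) RETURNS numeric AS $$
DECLARE
    result numeric;
BEGIN
    EXECUTE 'SELECT sum(amount) FROM ' || quote_ident(tablename)
        INTO result;
    RETURN result;
END;
$$ LANGUAGE plpgsql;

-- Usage: SELECT total_for('sales_2008');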
Scott Marlowe wrote:
Yeah, it didn't help. I was expecting the query planner to switch to
a more efficient join plan.
Try setting it higher for JUST THIS query. i.e.
set work_mem = '128MB';
explain analyze select ...
and see how that runs. Then play with it til you've got it down to
what...
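One refinement: wrapping the test in a transaction with SET LOCAL keeps the bumped work_mem from leaking into the rest of the session. A sketch, with a hypothetical two-table join standing in for the real query:

BEGIN;
SET LOCAL work_mem = '128MB';   -- reverts automatically at COMMIT
EXPLAIN ANALYZE
    SELECT * FROM orders o JOIN order_items i USING (orderid);
COMMIT;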
Tobias Brox wrote:
[Peter Koczan - Wed at 10:56:54AM -0600]
The consensus on this list seemed to be that running postgres on a SAN is
not cost-efficient - one would get better performance for a lower cost
if the database host is connected directly to the disks - and also,
buying the wrong SAN ...