dforums wrote:
> The performance problem is really only on the insertion, and even more on the treatment for the aggregation.

> Treating the 3000 entries and inserting or updating the tables takes 10 minutes.

> As I told you, I inject 14000 queries every 2 minutes, and it takes 10 minutes to treat 3000 of those queries.

Sorry - I still don't understand. What is this "treatment" you are doing?

> >
> > OK. I assume you're happy with the plans you are getting on these
> > queries, since you've not provided any information about them.

> The plan seems OK, as it uses an index as well.
> Here is the plan:

> explain analyse SELECT "insertUpdateTracks"(137,2605, 852, ('2008-08-06 19:28:54'::text)::date,3,'dailydisplay',2,NULL);
> INFO:  method 1
>                                      QUERY PLAN
> ------------------------------------------------------------------------------------
>  Result (cost=0.00..0.01 rows=1 width=0) (actual time=1.151..1.151 rows=1 loops=1)
>  Total runtime: 1.160 ms

There's nothing to do with an index here - this is a function call.
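
If you want to see whether your indexes are really being used, you need to EXPLAIN ANALYSE the statements that the function runs internally, with representative parameter values. A rough sketch of what I mean - the table and column names below are made up, since you haven't shown the function body:

explain analyse
UPDATE tracks
   SET display_count = display_count + 2
 WHERE partner_id = 137
   AND item_id = 2605
   AND stat_date = '2008-08-06'::date;

That shows the plan (and index usage) for the UPDATE itself, which the plan for the function call above cannot.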

> As you can see, this is the runtime to process an update on this table.
>
> Multiplying this by 10000, it is too long.

So - are you calling this function 14000 times to inject your data? You're doing this in one transaction, yes?
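
At roughly 1.16 ms per call, even 14000 calls is only about 16 seconds of function execution time, so the 10 minutes you're seeing is going somewhere else - per-statement commit overhead would be my first suspect. What I mean by one transaction is something like this (the first call is the one from your EXPLAIN above; the second uses made-up values):

BEGIN;
SELECT "insertUpdateTracks"(137,2605, 852, ('2008-08-06 19:28:54'::text)::date,3,'dailydisplay',2,NULL);
SELECT "insertUpdateTracks"(137,2606, 852, ('2008-08-06 19:28:54'::text)::date,3,'dailydisplay',1,NULL);
-- ... and so on for the rest of the batch ...
COMMIT;

If each call currently commits on its own, you pay the commit (and disk flush) cost once per call; wrapping the whole batch in one transaction pays it once.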

--
  Richard Huxton
  Archonet Ltd

