On Mon, Mar 10, 2008 at 11:00 AM, Justin <[EMAIL PROTECTED]> wrote:
>
>  That comment was not meant to be an insult or disparaging in any way
> whatsoever.  If it was taken as such, then I'm sorry.

I am sure it would have been fine in person; over email it just
sounded abrasive.

But could you please stop top-quoting?

>  It seems the biggest performance hit is copying the array content from
> one memory variable to another, which is happening a lot.

Yeah, I think arrays just can't handle a whole lot of data, that's
all.  They are "tricky" and shouldn't be used for heavy lifting; more
than about a thousand elements feels like asking for trouble.
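The copying Justin describes is characteristic of per-element appends in
plpgsql: each append copies the whole array, so a loop over n elements does
O(n^2) work.  A minimal sketch of the two styles (numbers and names here are
illustrative, not from Justin's actual function; the DO block needs a modern
PostgreSQL, on older releases wrap it in a function):

```sql
-- Slow: appending in a loop copies the entire array on every iteration.
DO $$
DECLARE
    a numeric[] := '{}';
BEGIN
    FOR i IN 1..10000 LOOP
        a := array_append(a, i::numeric);  -- copies all prior elements
    END LOOP;
END
$$;

-- Faster: build the array in a single set-based pass instead.
SELECT ARRAY(SELECT g FROM generate_series(1, 10000) AS g);
```

The second form lets the backend build the result in one allocation pass
rather than re-copying the array ten thousand times.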

>  I'm not really against using a temp table to hold onto values.  I used to
> do that in FoxPro when I hit the hard limit on its arrays, but other
> problems started popping up.  If we use a temp table, keeping track of
> what's going on with other users can make life fun.

I believe temp tables are scoped per session (though you should test
this), so you can use them with impunity in functions and not worry
about multiple users.
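A sketch of that pattern, using hypothetical names (`line_items` and
`amount` stand in for whatever the real detail table and column are):

```sql
CREATE OR REPLACE FUNCTION sum_costs() RETURNS numeric AS $$
DECLARE
    total numeric;
BEGIN
    -- A temp table is visible only to the creating session, so concurrent
    -- users calling sum_costs() each get an independent table of this name.
    CREATE TEMP TABLE work_costs AS
        SELECT amount FROM line_items;   -- hypothetical source table
    SELECT sum(amount) INTO total FROM work_costs;
    DROP TABLE work_costs;               -- avoid "already exists" on re-call
    RETURN total;
END;
$$ LANGUAGE plpgsql;
```

One caveat worth testing on your version: before plan invalidation arrived
(8.3), re-creating a temp table inside plpgsql could leave stale cached
plans, and the statements touching it had to go through EXECUTE.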

>  I really want to figure out how to speed this up.  I have to write a lot
> more aggregate functions to analyze R&D data, which will happen later this
> year.  Right now this function will be used in calculating manufacturing
> cost.

I think a combination of aggregate functions along with some more
schema design would serve you best.  For example: can a trigger
calculate the normalized weight of a row on insert?  Can triggers keep
a summary table updated as you modify the data?  Etc.  There is a lot
in PG that helps with this kind of thing.
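The trigger-maintained summary idea can be sketched like this; every table
and column name below is hypothetical (`part_costs` stands in for the real
detail table), and a sketch like this ignores the race two concurrent
inserts of a brand-new part_id can have:

```sql
-- Summary table the trigger keeps current, so queries read one row
-- instead of re-aggregating all the detail rows.
CREATE TABLE part_cost_summary (
    part_id    integer PRIMARY KEY,
    total_cost numeric NOT NULL DEFAULT 0
);

CREATE OR REPLACE FUNCTION update_cost_summary() RETURNS trigger AS $$
BEGIN
    UPDATE part_cost_summary
       SET total_cost = total_cost + NEW.cost
     WHERE part_id = NEW.part_id;
    IF NOT FOUND THEN
        -- First cost row for this part: create its summary row.
        INSERT INTO part_cost_summary (part_id, total_cost)
        VALUES (NEW.part_id, NEW.cost);
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER cost_summary_trg
    AFTER INSERT ON part_costs
    FOR EACH ROW EXECUTE PROCEDURE update_cost_summary();
```

The same approach extends to UPDATE and DELETE triggers that subtract the
OLD.cost before adding the NEW one, keeping the summary exact.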

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general