Well,
This answers the question that was in the back of my head but that I didn't write down: the pain is probably not worth the gain!

/Magnus

lou wrote:
In some email I received from Magnus Sundberg <[EMAIL PROTECTED]> on Wed, 02 Jul
2003 10:49:13 +0200, wrote:


<snip>

I believe Gianni has a point, even though I run MySQL.
Let's take the IMAP server as an example.
1. Parse the IMAP message.
2. Call the C function that defines the IMAP command.
3. The C function calls the stored procedure, when applicable (PostgreSQL), or performs the required SQL commands (MySQL); a rough sketch of this follows below the list.
4. Return the result.
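
A rough sketch of steps 2-4 in C, just to illustrate the idea. The names here (db_query(), USE_PGSQL, imap_select_mailbox(), the table and column names) are made up for the example and are not the actual dbmail API:

    /* Sketch only: db_query(), USE_PGSQL and imap_select_mailbox() are
     * hypothetical names, not the real dbmail code. */
    #include <stdio.h>

    /* Stand-in for whatever query interface the code base exposes;
     * here it just prints the SQL it would run. */
    static int db_query(const char *sql)
    {
        printf("executing: %s\n", sql);
        return 0;
    }

    /* Steps 2-4: the C function behind an IMAP command either calls a
     * stored procedure (PostgreSQL) or performs the SQL directly (MySQL),
     * then returns the result to the IMAP layer.  Escaping of the
     * mailbox name is omitted for brevity. */
    static int imap_select_mailbox(unsigned long user_idnr, const char *mailbox)
    {
        char sql[512];

    #ifdef USE_PGSQL
        /* PostgreSQL path: delegate the work to a stored procedure. */
        snprintf(sql, sizeof(sql),
                 "SELECT select_mailbox(%lu, '%s')", user_idnr, mailbox);
    #else
        /* MySQL path: perform the required SQL commands in-line. */
        snprintf(sql, sizeof(sql),
                 "SELECT mailbox_idnr FROM mailboxes "
                 "WHERE owner_idnr = %lu AND name = '%s'", user_idnr, mailbox);
    #endif

        return db_query(sql);
    }

    int main(void)
    {
        return imap_select_mailbox(1UL, "INBOX");
    }

The point is that everything above the #ifdef stays shared; only the SQL generation differs per backend.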


Magnus, all this sounds great, but it means the code base will create a barrier between the two APIs, a gap that limits the whole idea of one generic API, which would otherwise save the time and resources spent writing more code and more code and more code..

I think the optimization effort in the code should stop at things like indexes/constraints/queries/code; other things, like general 'db optimization', should be performed outside that circle, i.e. by the user himself, for example by tweaking the PgSQL conf file rather than by writing pl/c functions.



But what is the performance gain? Is it worth the effort? I do believe this is the more elegant way to implement the database access, but we will end up with two quite different code bases.


Yes, obviously there might be a performance gain, at least from using transactions, but I don't agree with using stored procedures/functions, for a simple reason: portability.
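
For what it's worth, the transaction part of that gain doesn't need stored procedures at all: plain BEGIN/COMMIT is accepted by both PostgreSQL and MySQL (with InnoDB tables), so something like the following stays portable. Again just a sketch with made-up names (db_query(), the table and column are not the real dbmail schema):

    /* Sketch only: batch several inserts into one transaction with plain
     * SQL instead of autocommitting each statement separately. */
    #include <stdio.h>

    /* Same kind of stub query helper as in the earlier sketch. */
    static int db_query(const char *sql)
    {
        printf("executing: %s\n", sql);
        return 0;
    }

    static int store_headers(const char *headers[], int count)
    {
        int i;

        if (db_query("BEGIN") != 0)
            return -1;

        for (i = 0; i < count; i++) {
            char sql[512];
            snprintf(sql, sizeof(sql),
                     "INSERT INTO messageheaders (headervalue) VALUES ('%s')",
                     headers[i]);
            if (db_query(sql) != 0) {
                db_query("ROLLBACK");
                return -1;
            }
        }

        return db_query("COMMIT");
    }

    int main(void)
    {
        const char *headers[] = { "Subject: test", "From: someone" };
        return store_headers(headers, 2);
    }

So the portable SQL path can still pick up most of the transaction win without tying us to one backend's procedure language.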


I also believe it is better to implement these changes, if they are necessary, as soon as possible.


Hope this makes some sense.

cheers