Gregory Stark wrote:
"Andrew Dunstan" <[EMAIL PROTECTED]> writes:

Yes, more or less. That's what made me think of it.

OTOH, before we rush out and do it, someone needs to show that it's a net win. I
agree with Tom that making tuplestore faster would probably be a much better
investment of time.

I don't think the problem with the tuplestore is a matter of speed. It's a
matter of scalability and flexibility. It limits the types of applications
that can use SRFs and the amount of data they can manipulate before it becomes
impractical.

Consider applications like dblink that have SRFs that read data from slow
network sources. Or that generate more data than the server can actually store
at any one time. Or that overflow work_mem but are used in queries that could
return quickly based on the first few records.

Unfortunately, I don't think there's a simple fix that'll work for all PLs
using the current interface. Even languages with iterators themselves (Python,
I think) probably don't expect to be called externally while an iterator is in
progress.

Just a thought - I believe there are portable user-space thread
implementations that contain little or no machine-specific code. What
if Postgres used one of those to switch from the PL into the executor
and back after, say, 1000 rows were returned by the SRF?

What would be needed is basically an enhanced version of setjmp/longjmp
that actually saves the stack, and not just resets the stack pointer.
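
FWIW, POSIX ucontext(3) (getcontext/makecontext/swapcontext) is pretty
much that enhanced setjmp/longjmp: swapcontext() saves the complete
machine context and resumes another context running on its own,
separately allocated stack. It's marked obsolescent in newer POSIX and
isn't available everywhere, so take this as an illustration of the
primitive rather than a concrete proposal. A minimal standalone sketch,
plain C and nothing Postgres-specific:

#include <stdio.h>
#include <stdlib.h>
#include <ucontext.h>

static ucontext_t executor_ctx;     /* the "executor" side */
static ucontext_t pl_ctx;           /* the "PL interpreter" side */

/* Runs on its own stack; yields back to the executor after every row. */
static void
pl_routine(void)
{
    for (int row = 0; row < 3; row++)
    {
        printf("PL produced row %d\n", row);
        swapcontext(&pl_ctx, &executor_ctx);   /* save PL stack, resume caller */
    }
}

int
main(void)
{
    char   *stack = malloc(64 * 1024);

    getcontext(&pl_ctx);
    pl_ctx.uc_stack.ss_sp = stack;
    pl_ctx.uc_stack.ss_size = 64 * 1024;
    pl_ctx.uc_link = &executor_ctx;            /* resume here if pl_routine returns */
    makecontext(&pl_ctx, pl_routine, 0);

    for (int row = 0; row < 3; row++)
    {
        swapcontext(&executor_ctx, &pl_ctx);   /* resume the PL where it left off */
        printf("executor consumed row %d\n", row);
    }

    free(stack);
    return 0;
}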

Since context switching would occur at only two well-defined places
(some return_next_row function that PLs call when an SRF returns a row,
and in the executor when no more previously returned rows from that SRF
are available), this wouldn't introduce the usual multithreading
headaches, but would still allow switching in and out of the PL interpreter.
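
To make those two switch points a bit more concrete, here's a rough
sketch of how the hand-off could be wired up on top of contexts set up
as in the previous snippet. All the names (return_next_row,
fetch_next_row, run_pl_srf, pl_wrapper, BATCH_SIZE) are made up for
illustration and don't correspond to any existing backend API; rows are
plain pointers here rather than tuples:

#include <stdbool.h>
#include <stddef.h>
#include <ucontext.h>

#define BATCH_SIZE 1000

static ucontext_t executor_ctx;     /* set up as in the previous sketch */
static ucontext_t pl_ctx;

static void *row_buffer[BATCH_SIZE];
static int   rows_buffered = 0;     /* rows produced into the current batch */
static int   rows_consumed = 0;     /* rows already handed to the executor */
static bool  pl_finished = false;

extern void run_pl_srf(void);       /* hypothetical: runs the PL function, which
                                     * calls return_next_row() for each row */

/* Entry point of pl_ctx, i.e. makecontext(&pl_ctx, pl_wrapper, 0);
 * flags completion before falling through to uc_link. */
static void
pl_wrapper(void)
{
    run_pl_srf();
    pl_finished = true;
}

/* Switch point 1: called by the PL whenever the SRF returns a row. */
void
return_next_row(void *row)
{
    row_buffer[rows_buffered++] = row;
    if (rows_buffered == BATCH_SIZE)
        swapcontext(&pl_ctx, &executor_ctx);   /* batch full: let the executor drain it */
}

/* Switch point 2: called by the executor when it needs the next row. */
void *
fetch_next_row(bool *done)
{
    if (rows_consumed == rows_buffered)
    {
        if (pl_finished)
        {
            *done = true;
            return NULL;
        }
        /* Batch drained: save the executor's stack and refill from the PL. */
        rows_buffered = rows_consumed = 0;
        swapcontext(&executor_ctx, &pl_ctx);

        if (rows_buffered == 0)                /* PL finished without producing more */
        {
            *done = true;
            return NULL;
        }
    }
    *done = false;
    return row_buffer[rows_consumed++];
}

The nice property is that the PL side just keeps calling
return_next_row() in a straight loop; it never has to be restructured
around an iterator-style interface, yet only one batch of rows is ever
materialized at a time.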

greetings, Florian Pflug
