Hitoshi Harada <umi.tan...@gmail.com> writes:
> On Sat, Oct 29, 2011 at 8:13 AM, Tom Lane <t...@sss.pgh.pa.us> wrote:
>> I have not looked at the code, but ISTM the way that this has to work is
>> that you set up a portal for each active scan.  Then you can fetch a few
>> rows at a time from any one of them.

> Hmm, true. Looking back at the original proposal (I haven't looked at
> the code either), there seems to be a cursor mode. ISTM it is hard for
> the fdw to know what the whole plan tree looks like, so do we always
> use a cursor regardless of the estimated row count?

I think we have to.  Even if we estimate that a given scan will return
only a few rows, what happens if we're wrong?  We don't want to blow out
memory on the local server by retrieving gigabytes in one go.
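To make that concrete, the per-scan setup could look roughly like this
using only existing libpq calls (the cursor and table names here are
invented for illustration, and a transaction has to be open on the
remote side already, since DECLARE CURSOR only works inside one):

#include <stdio.h>
#include <libpq-fe.h>

/* Sketch: declare a cursor for one foreign scan, so rows can later be
 * pulled back in small batches instead of all at once. */
static void
begin_remote_scan(PGconn *conn)
{
    PGresult   *res;

    res = PQexec(conn,
                 "DECLARE fdw_scan_1 CURSOR FOR SELECT a, b FROM remote_tab");
    if (PQresultStatus(res) != PGRES_COMMAND_OK)
        fprintf(stderr, "DECLARE failed: %s", PQerrorMessage(conn));
    PQclear(res);
}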

> I don't have much experience with cursors myself, but are they as
> efficient as a non-cursor fetch?

No, but if you need max efficiency you shouldn't be using foreign tables
in the first place; they're always going to be expensive to access.

It's likely that making use of native protocol portals (instead of
executing a lot of FETCH commands) would help.  But I think we'd be well
advised to do the first pass with just the existing libpq facilities,
and then measure to see where to improve performance.
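Something along these lines, again with only what libpq already
provides (the batch size and cursor name are arbitrary, just for
illustration):

#include <stdio.h>
#include <libpq-fe.h>

/* Sketch: pull rows from the remote cursor in fixed-size batches, so
 * local memory use is bounded by the batch size rather than by the
 * size of the remote result. */
static void
fetch_remote_rows(PGconn *conn)
{
    for (;;)
    {
        PGresult   *res = PQexec(conn, "FETCH 100 FROM fdw_scan_1");
        int         ntuples;

        if (PQresultStatus(res) != PGRES_TUPLES_OK)
        {
            fprintf(stderr, "FETCH failed: %s", PQerrorMessage(conn));
            PQclear(res);
            break;
        }
        ntuples = PQntuples(res);
        for (int i = 0; i < ntuples; i++)
            printf("row: %s\n", PQgetvalue(res, i, 0));  /* hand off to the executor here */
        PQclear(res);
        if (ntuples == 0)       /* cursor exhausted */
            break;
    }
}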

                        regards, tom lane
