Yeb Havinga <yebhavi...@gmail.com> writes:
> What if the default behavior of, e.g., PHP using libpq were as
> follows: set some default fetch size (e.g. 1000 rows), then just call
> getrow. In the PHP pg handling, a function like getnextrow would wait
> for the first PGresult containing 1000 rows; then, when that PGresult
> is depleted or nearly depleted, it would request the next PGresult
> automatically. I see a lot of benefits: lower memory requirements in
> libpq, fewer new users asking "why is my query so slow before the
> first row", and almost no downsides.

You are blithely ignoring the reasons why libpq doesn't do this.  The
main one is that it's impossible to cope sanely with queries that
fail partway through execution.  The described implementation would not
cope tremendously well with nonsequential access to the result set, either.
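
For reference, an application or driver that wants chunked fetching can
already get it with an explicit cursor, and in that scheme each FETCH
reports its own success or failure, so the caller decides what to do if
the query dies partway through.  The following is only a rough sketch;
the connection string, cursor name, table, and chunk size are
placeholders, not part of any proposal:

/*
 * Sketch only: chunked retrieval via an explicit cursor, fetching
 * 1000 rows per round trip without changing libpq at all.
 */
#include <stdio.h>
#include <stdlib.h>
#include <libpq-fe.h>

static void die(PGconn *conn, const char *msg)
{
    fprintf(stderr, "%s: %s", msg, PQerrorMessage(conn));
    PQfinish(conn);
    exit(1);
}

int main(void)
{
    PGconn   *conn = PQconnectdb("dbname=test");   /* placeholder conninfo */
    PGresult *res;

    if (PQstatus(conn) != CONNECTION_OK)
        die(conn, "connection failed");

    res = PQexec(conn, "BEGIN");                    /* cursor needs a transaction */
    if (PQresultStatus(res) != PGRES_COMMAND_OK)
        die(conn, "BEGIN failed");
    PQclear(res);

    res = PQexec(conn, "DECLARE c CURSOR FOR SELECT * FROM big_table");
    if (PQresultStatus(res) != PGRES_COMMAND_OK)
        die(conn, "DECLARE CURSOR failed");
    PQclear(res);

    for (;;)
    {
        res = PQexec(conn, "FETCH 1000 FROM c");
        if (PQresultStatus(res) != PGRES_TUPLES_OK)
        {
            /* A mid-query failure surfaces here, on this particular
             * FETCH; rows already processed from earlier chunks are
             * not silently presented as a complete result. */
            PQclear(res);
            die(conn, "FETCH failed");
        }

        int ntuples = PQntuples(res);
        if (ntuples == 0)
        {
            PQclear(res);
            break;                                  /* cursor exhausted */
        }

        for (int i = 0; i < ntuples; i++)
            printf("%s\n", PQgetvalue(res, i, 0));  /* process one row */

        PQclear(res);
    }

    PQclear(PQexec(conn, "CLOSE c"));
    PQclear(PQexec(conn, "COMMIT"));
    PQfinish(conn);
    return 0;
}

Folding that loop invisibly into libpq is where the trouble starts: a
later fetch can fail after earlier chunks have already been handed back
as though the query had succeeded.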

                        regards, tom lane
