Tom Lane wrote:
> Yeb Havinga <yebhavi...@gmail.com> writes:
>> What if the default operation of e.g. PHP via libpq were as follows:
>> set some default fetch size (e.g. 1000 rows), then just issue getrow.
>> In the PHP pg handling, a function like getnextrow would wait for the
>> first PGresult with 1000 rows. Once that PGresult is depleted or
>> almost depleted, it would request the next PGresult automatically. I
>> see a lot of benefits: lower memory requirements in libpq, fewer new
>> users asking "why is my query so slow before the first row?", and
>> almost no concerns.

> You are blithely ignoring the reasons why libpq doesn't do this.  The
> main one being that it's impossible to cope sanely with queries that
> fail partway through execution.
I'm sorry I forgot to add a reference to your post at http://archives.postgresql.org/pgsql-general/2010-02/msg00956.php, which is the only discussion of queries failing partway through that I know of. But "blithely" is not a fair description of my ignoring it. I did think about how queries could fail, and could not come up with anything beyond e.g. memory exhaustion, and that is exactly one of the things this approach improves. Perhaps a user-defined type could raise an error on certain data values, but by that argument one could equally ask: why support UDTs at all? And if a query fails during execution, does that mean the rows returned up to that point are wrong?
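
To make that failure mode concrete: with an explicit cursor you can already watch rows arrive before the error does. A sketch (the connection string and the deliberately failing query are mine, just for illustration):

#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
    PGconn   *conn = PQconnectdb("dbname=postgres");
    PGresult *res;

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    PQclear(PQexec(conn, "BEGIN"));
    /* 1.0 / (10 - i) raises division-by-zero once i reaches 10 */
    PQclear(PQexec(conn,
        "DECLARE c CURSOR FOR "
        "SELECT i, 1.0 / (10 - i) FROM generate_series(1, 20) i"));

    for (;;)
    {
        res = PQexec(conn, "FETCH 5 FROM c");
        if (PQresultStatus(res) != PGRES_TUPLES_OK)
        {
            /* the second FETCH fails here: rows 1..5 were already
             * handed to the application before the error arrived */
            fprintf(stderr, "failed partway: %s",
                    PQresultErrorMessage(res));
            PQclear(res);
            break;
        }
        if (PQntuples(res) == 0)
        {
            PQclear(res);
            break;              /* depleted without error */
        }
        for (int i = 0; i < PQntuples(res); i++)
            printf("i=%s  val=%s\n",
                   PQgetvalue(res, i, 0), PQgetvalue(res, i, 1));
        PQclear(res);
    }

    PQclear(PQexec(conn, "ROLLBACK"));
    PQfinish(conn);
    return 0;
}

The question of what the already-printed rows 1..5 "mean" after the error is exactly the same question as with the proposed batched mode.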
> The described implementation would not cope tremendously well with
> nonsequential access to the resultset, either.
That's why I'm not proposing to replace the current way PGresults are made complete, but just to add an extra option, so that developers using the libpq library can make that choice themselves.
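
To sketch what such an option would do, here is the getnextrow pattern built outside libpq today, on top of a cursor: the wrapper refills a batch of fetch_size rows whenever the current PGresult is depleted, and the caller only ever sees one row at a time. All names here (RowStream and so on) are mine, purely for illustration; the proposal is essentially to move this refill bookkeeping inside libpq, behind an option:

#include <stdio.h>
#include <libpq-fe.h>

typedef struct RowStream
{
    PGconn   *conn;
    PGresult *batch;            /* current batch of rows */
    int       next;             /* next row to hand out */
    int       fetch_size;
} RowStream;

/* Returns the row index within stream->batch, or -1 when done or failed. */
static int
stream_next_row(RowStream *s)
{
    if (s->batch == NULL || s->next >= PQntuples(s->batch))
    {
        char fetch[64];

        if (s->batch)
            PQclear(s->batch);
        snprintf(fetch, sizeof(fetch), "FETCH %d FROM c", s->fetch_size);
        s->batch = PQexec(s->conn, fetch);
        s->next = 0;
        if (PQresultStatus(s->batch) != PGRES_TUPLES_OK ||
            PQntuples(s->batch) == 0)
            return -1;          /* depleted, or failed partway */
    }
    return s->next++;
}

int
main(void)
{
    PGconn   *conn = PQconnectdb("dbname=postgres");
    RowStream s = {conn, NULL, 0, 1000};
    int       r;

    if (PQstatus(conn) != CONNECTION_OK)
        return 1;

    PQclear(PQexec(conn, "BEGIN"));
    PQclear(PQexec(conn,
        "DECLARE c CURSOR FOR SELECT i FROM generate_series(1, 5000) i"));

    /* the first row is available after ~1000 rows, not after all 5000 */
    while ((r = stream_next_row(&s)) >= 0)
        printf("%s\n", PQgetvalue(s.batch, r, 0));

    /* NULL-batch loop exit means depleted *or* failed: check which */
    if (s.batch && PQresultStatus(s.batch) != PGRES_TUPLES_OK)
        fprintf(stderr, "failed partway: %s",
                PQresultErrorMessage(s.batch));
    if (s.batch)
        PQclear(s.batch);
    PQclear(PQexec(conn, "COMMIT"));
    PQfinish(conn);
    return 0;
}

With the option unset, libpq would behave exactly as today; nothing changes for applications that want a complete PGresult.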

regards,
Yeb Havinga

