#49: Support asynchronous command processing
-------------------------+-----------------
Reporter: cito | Owner:
Type: enhancement | Status: new
Priority: major | Milestone: 4.2
Component: Classic API | Version: 4.1
Keywords: |
-------------------------+-----------------
The attached patch provides the asynchronous operations described in
[http://www.postgresql.org/docs/9.4/static/libpq-async.html section 31.4
of the PostgreSQL manual]. I believe everything described in that section
is available with these exceptions:
* there's no prepared statement support
* I didn't implement `PQsetSingleRowMode()`. This would require a possibly
small change to the way that query results are retrieved that I thought
would go better as a separate change set
* I didn't implement `PQconsumeInput()` or `PQisBusy()`. I don't really
understand the point of these functions; they seem to have marginal
utility outside notification reception, and I wasn't sure exactly how to
document them. It might make sense to have an `isbusy()` call which calls
both, but I don't really know whether that fits anybody's use case
* I seem to have left out `PQflush()`. This is an oversight. More
generally, the non-blocking operations are not well tested.
In general, the changes allow the database to be used in an event-driven
application, and for other applications, there are some parallelism
benefits:
* Connections can be completed in the background, which can speed up use
cases where the application needs to connect to several databases at once
* When multiple semicolon-delimited queries are run in a single call, the
results of all the queries are returned
* The application can do other work while waiting for queries to complete
* Copy and large object operations can use non-blocking IO
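To illustrate the event-driven use case, here is a hedged sketch of how
an asynchronous query could be driven from a `select()` loop: wait on the
connection's socket descriptor, then pull result sets with `getresult()`
until it returns `None`. The retrieval semantics follow the patch
description above; the stand-in connection class is purely hypothetical
(a pipe provides a readable file descriptor and a canned list plays the
role of the server reply), since the real object needs a live server.

```python
import os
import select

def drain_results(conn, timeout=5.0):
    """Collect all pending result sets from an asynchronous query."""
    results = []
    while True:
        # Block until the connection's socket has data (or time out).
        ready, _, _ = select.select([conn.fileno()], [], [], timeout)
        if not ready:
            raise TimeoutError("server did not respond in time")
        res = conn.getresult()  # '' for statements that return no rows
        if res is None:         # all result sets have been consumed
            return results
        results.append(res)

class FakeConn:
    """Stand-in for an async connection; mirrors only the calling pattern."""
    def __init__(self):
        self._r, w = os.pipe()
        os.write(w, b"x")       # make the read end immediately readable
        os.close(w)
        self._pending = [[(1,)], [(2,)]]  # canned "server" results

    def fileno(self):
        return self._r

    def getresult(self):
        return self._pending.pop(0) if self._pending else None

print(drain_results(FakeConn()))  # [[(1,)], [(2,)]]
```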
Query operations work essentially the same way as they do now, except
that all the result codes are now returned by `getresult()`,
`dictresult()` or `namedresult()`; in cases where `query()` would return
`None`, `getresult()` et al return `''`; and you have to call
`getresult()` et al repeatedly until they return `None`. Also, exceptions
raised by bad queries are raised by `getresult()` et al, not by the query
function.
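The retrieval protocol just described can be mocked in a few lines: each
call to `getresult()` yields one statement's result, `''` stands in where
the synchronous `query()` would have returned `None`, and `None` signals
that every result set has been consumed. The class below is a stand-in,
not the shipped interface; a real connection would also raise any
bad-query exception from `getresult()` rather than at send time.

```python
class MockAsyncQuery:
    """Mimics the result-retrieval pattern of the asynchronous API."""
    def __init__(self):
        # Pretend "SELECT 1; CREATE TABLE t (i int); SELECT 2" was sent:
        # the middle statement returns no rows, hence the '' entry.
        self._pending = [[(1,)], '', [(2,)]]

    def getresult(self):
        # One result set per call; None once everything is consumed.
        if not self._pending:
            return None
        return self._pending.pop(0)

q = MockAsyncQuery()
results = []
while True:
    res = q.getresult()   # errors would surface here, not when sending
    if res is None:
        break
    results.append(res)

print(results)  # [[(1,)], '', [(2,)]]
```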
The result member of `pgqueryobject` is changed by each call to
`getresult()` et al, so you can't get the same query result twice when
using asynchronous calls, and functions which depend on the result member
don't work until after a call to `getresult()` et al.
Because of this last point, I had to reorganize `_namedresult()`. That's
the only Python change other than the unit test.
C code changes are:
* some new functions
* change to `connect()` to take a new argument and call
`PQconnectStartParams()` when appropriate
* added code to `getresult()` and `dictresult()` to call `PQgetResult()`
when appropriate. Looking at it now, this block of code has grown quite
big and should perhaps move into its own function
* renamed `pg_query` to `_pg_query` and added a new argument. This is
called from wrapper functions `pg_query()` and `pg_sendquery()`
* moved scalar result processing from `pg_query()` to a new function,
`_check_result_status()`
* changed `pg_query()` to call `PQsendQuery()` or `PQsendQueryParams()`
when appropriate
----
Contributed by Patrick TJ !McPhee via
[https://mail.vex.net/mailman/listinfo.cgi/pygresql PyGreSQL mailing
list], 2015-08-03
--
Ticket URL: <http://trac.pygresql.org:8000/trac/ticket/49>
PyGreSQL <http://www.pygresql.org/>
PyGreSQL Tracker