From: Tomas Vondra <tomas.von...@enterprisedb.com>
> Well, good that we all agree this is a useful feature to have (in
> general). The question is whether postgres_fdw should be doing batching
> on its own (per this thread) or rely on some other feature (libpq
> pipelining). I haven't followed the other thread, so I don't have an
> opinion on that.

Well, as someone said in this thread, I think bulk insert is much more common 
than updates/deletes.  That's why major DBMSs support INSERT ... VALUES 
(record1), (record2), ... and INSERT ... SELECT, and Oracle additionally offers 
direct-path INSERT.  Comparing a multi-row INSERT with libpq batching (= 
multiple single-row INSERTs), I think the former is more efficient: the amount 
of data transferred is smaller, and the per-record parsing and planning of each 
INSERT is eliminated.
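To make that comparison concrete, here is a minimal Python sketch (illustrative names and a hypothetical table, not postgres_fdw code) that builds the two statement shapes.  With per-row INSERTs the server parses and plans N statements; with a single multi-row INSERT it parses and plans one:

```python
# Hypothetical sketch contrasting N single-row INSERTs with one
# multi-row INSERT.  Table/column names are made up for illustration.

def per_row_inserts(table, columns, rows):
    """One INSERT per row: N round trips, N parse/plan cycles."""
    cols = ", ".join(columns)
    placeholders = ", ".join(["%s"] * len(columns))
    return [f"INSERT INTO {table} ({cols}) VALUES ({placeholders})"
            for _ in rows]

def multi_row_insert(table, columns, rows):
    """One INSERT ... VALUES (...), (...): one round trip, one parse/plan."""
    cols = ", ".join(columns)
    one_row = "(" + ", ".join(["%s"] * len(columns)) + ")"
    values = ", ".join([one_row] * len(rows))
    return f"INSERT INTO {table} ({cols}) VALUES {values}"

rows = [(1, "a"), (2, "b"), (3, "c")]
stmts = per_row_inserts("t", ["id", "v"], rows)
stmt = multi_row_insert("t", ["id", "v"], rows)
print(len(stmts))  # 3 statements to parse and plan
print(stmt)        # INSERT INTO t (id, v) VALUES (%s, %s), (%s, %s), (%s, %s)
```

The repeated column list and INSERT keyword in the per-row form also show where the extra bytes on the wire come from.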

I don't deny the usefulness of libpq batch/pipelining, but I'm not sure app 
developers would really use it.  If they want to reduce client-server round 
trips, wouldn't they use traditional stored procedures?  Yes, the stored 
procedure language is very DBMS-specific.  So I'd like to know what kind of 
well-known applications use a standard batching API like JDBC's batch updates.  
(Sorry, that should be discussed in the libpq batch/pipelining thread; this 
thread should not be polluted.)


> Note however we're doing two things here, actually - we're implementing
> custom batching for postgres_fdw, but we're also extending the FDW API
> to allow other implementations do the same thing. And most of them won't
> be able to rely on the connection library providing that, I believe.

I'm afraid so, too.  In that case, postgres_fdw would be the example that other 
FDW developers look at when they implement INSERT with multiple records.


Regards
Takayuki Tsunakawa
