On 5/9/04 9:32 AM, "Tom Lane" <[EMAIL PROTECTED]> wrote:

> Are you sure it is a network problem?  What performance do you get
> if you run the same test program locally on the database machine?
> How about issuing the same sort of FETCH commands via a psql script?

Yes, it is definitely due to the network latency, even though that latency is
very small.  Here is the same test run locally on the database machine:

05-09-2004.17:49:41  Records read: 10000
05-09-2004.17:49:41  Records read: 20000
05-09-2004.17:49:42  Records read: 30000
05-09-2004.17:49:42  Records read: 40000
05-09-2004.17:49:43  Records read: 50000
05-09-2004.17:49:43  Records read: 60000
05-09-2004.17:49:44  Records read: 70000
05-09-2004.17:49:45  Records read: 80000
05-09-2004.17:49:45  Records read: 90000
05-09-2004.17:49:46  Records read: 100000
05-09-2004.17:49:46  Records read: 110000
05-09-2004.17:49:47  Records read: 120000
05-09-2004.17:49:47  Records read: 130000
05-09-2004.17:49:48  Records read: 140000

My "outside looking in" observations seem to point to the fact that every
row has to be retrieved (or stored) with a separate request.  Network
latency, however small, becomes an issue when the volume is very high.
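
Here is a minimal libpq sketch of batched fetching (the cursor, table, and
column names are hypothetical, and error handling is abbreviated):

    /* Fetch 10000 rows per round trip instead of one at a time. */
    #include <stdio.h>
    #include <libpq-fe.h>

    int main(void)
    {
        PGconn   *conn = PQconnectdb("dbname=test");
        PGresult *res;
        long      total = 0;

        if (PQstatus(conn) != CONNECTION_OK)
        {
            fprintf(stderr, "%s", PQerrorMessage(conn));
            return 1;
        }

        PQclear(PQexec(conn, "BEGIN"));
        PQclear(PQexec(conn,
            "DECLARE cur CURSOR FOR SELECT id, name FROM items"));

        for (;;)
        {
            int i, n;

            /* One network round trip returns up to 10000 rows. */
            res = PQexec(conn, "FETCH 10000 FROM cur");
            n = PQntuples(res);
            for (i = 0; i < n; i++)
                total++;    /* real code would use PQgetvalue(res, i, col) */
            PQclear(res);
            if (n == 0)
                break;      /* cursor exhausted */
        }

        PQclear(PQexec(conn, "CLOSE cur"));
        PQclear(PQexec(conn, "COMMIT"));
        PQfinish(conn);
        printf("rows read: %ld\n", total);
        return 0;
    }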

A Pro*C program I recently ported from Oracle to PostgreSQL showed the same
difference.  In Pro*C you can load rows into a host array and issue a single
INSERT that passes the whole array.  As far as I can tell, with PostgreSQL's
ecpg (or the other interfaces) you have to execute one request per record, as
contrasted below.
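
To illustrate, here is roughly the Pro*C host-array pattern next to the
per-row loop ecpg seems to require (a fragment, not a full program; the
table and columns are made up for the example):

    EXEC SQL BEGIN DECLARE SECTION;
    int  ids[1000], id;
    char names[1000][32], name[32];
    EXEC SQL END DECLARE SECTION;
    int  i;

    /* ... fill ids[] and names[] ... */

    /* Pro*C: one request inserts every element of the host arrays. */
    EXEC SQL INSERT INTO items (id, name) VALUES (:ids, :names);

    /* The nearest ecpg equivalent I have found: one request per row. */
    for (i = 0; i < 1000; i++)
    {
        id = ids[i];
        strcpy(name, names[i]);
        EXEC SQL INSERT INTO items (id, name) VALUES (:id, :name);
    }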

Is there some way to batch insert and fetch requests?  How else can I improve
the performance?  COPY does seem to batch rows into a single request, but you
can't control what is returned and you have to know the column order.
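
For inserts, driving COPY FROM STDIN through libpq looks like the closest
thing to batching: with 7.4's PQputCopyData/PQputCopyEnd calls, many rows
travel in one request, and I believe COPY's optional column list (COPY tab
(a, b) FROM STDIN) avoids depending on the physical column order.  A sketch
with a hypothetical table:

    #include <stdio.h>
    #include <libpq-fe.h>

    int main(void)
    {
        PGconn   *conn = PQconnectdb("dbname=test");
        PGresult *res;
        char      line[256];
        int       i;

        if (PQstatus(conn) != CONNECTION_OK)
        {
            fprintf(stderr, "%s", PQerrorMessage(conn));
            return 1;
        }

        /* Naming the columns avoids depending on their physical order. */
        res = PQexec(conn, "COPY items (id, name) FROM STDIN");
        if (PQresultStatus(res) != PGRES_COPY_IN)
        {
            fprintf(stderr, "COPY failed: %s", PQerrorMessage(conn));
            PQclear(res);
            PQfinish(conn);
            return 1;
        }
        PQclear(res);

        /* Stream many rows in one request: tab-separated, newline-ended. */
        for (i = 0; i < 140000; i++)
        {
            int n = snprintf(line, sizeof(line), "%d\titem_%d\n", i, i);
            if (PQputCopyData(conn, line, n) != 1)
            {
                fprintf(stderr, "%s", PQerrorMessage(conn));
                break;
            }
        }

        if (PQputCopyEnd(conn, NULL) != 1)
            fprintf(stderr, "%s", PQerrorMessage(conn));

        res = PQgetResult(conn);
        if (PQresultStatus(res) != PGRES_COMMAND_OK)
            fprintf(stderr, "COPY did not complete: %s",
                    PQerrorMessage(conn));
        PQclear(res);
        PQfinish(conn);
        return 0;
    }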

Wes

