Hi,

To measure throughput, I'm timing the exec time spent in libpq
against the size of the result set in bytes, as reported
by PQresultMemorySize().
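
Concretely, the timing loop looks roughly like this (a minimal
sketch, not my actual code; run_query() is a hypothetical stand-in
for the exec shown further down):

    #include <stdio.h>
    #include <time.h>
    #include <libpq-fe.h>

    /* Hypothetical stand-in for my actual exec (binary-mode
       PQexecParams, shown further down). */
    extern PGresult *run_query(PGconn *conn, const char *sql);

    static double now_sec(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts); /* POSIX monotonic clock */
        return ts.tv_sec + ts.tv_nsec / 1e9;
    }

    static void time_one_exec(PGconn *conn, const char *sql)
    {
        double t0 = now_sec();
        PGresult *res = run_query(conn, sql);
        double elapsed = now_sec() - t0;

        size_t bytes = PQresultMemorySize(res); /* the "size" I report */
        printf("EXEC: (%d rows, %zu bytes) in %.3fs (%.1f MB/s)\n",
               PQntuples(res), bytes, elapsed,
               bytes / elapsed / (1024.0 * 1024.0)); /* 1 MB = 2^20 here */
        PQclear(res);
    }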

EXEC:          7x (  130,867 rows,     54,921,532 bytes) in   0.305s (171.8 MB/s)
EXEC:          8x (  180,079 rows,     95,876,047 bytes) in   0.493s (185.5 MB/s)
EXEC:          9x (  224,253 rows,    371,663,836 bytes) in   2.986s (118.7 MB/s)

The problem is that I have only a 1GbE network link, so theoretically
max throughput should be around 125 MB/s (1 Gb/s / 8 bits per byte,
ignoring protocol overhead), which the first two runs exceed.

These 3 runs access the same schema, doing full scans of a few
"real data" tables; the last one accesses more/larger bytea columns.
These are plain SELECTs in binary mode using normal synchronous
execution (no cursors, COPY, single-row mode, pipeline mode, etc.).
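
For reference, each exec is essentially just this (sketch; the query
text is made up):

    /* Plain synchronous exec; the last argument = 1 requests
       binary result format. */
    PGresult *res = PQexecParams(conn,
                                 "SELECT * FROM some_table",
                                 0, NULL, NULL, NULL, NULL,
                                 1 /* resultFormat: binary */);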

From these results, I now realize PQresultMemorySize() must return
something larger than what actually went over the network. Can someone
explain why? And is there a better proxy for programmatically measuring
the network traffic exchanged on the connection's socket, one that's
cross-platform? libpq itself obviously knows, but I don't see any way
to access that info.
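
The closest proxy I can think of is reconstructing an approximate wire
size from the result itself, based on the v3 protocol's DataRow framing.
Something like this sketch (the constants are my reading of the protocol
docs, so they may be off, and this ignores RowDescription,
CommandComplete, etc.):

    /* Per DataRow: 1-byte message type + 4-byte length + 2-byte field
       count, then per field a 4-byte length word plus the data itself
       (NULLs are sent as length -1 with no data bytes). */
    static size_t approx_wire_bytes(const PGresult *res)
    {
        int rows = PQntuples(res);
        int cols = PQnfields(res);
        size_t total = 0;
        for (int r = 0; r < rows; r++) {
            total += 1 + 4 + 2;              /* DataRow header */
            for (int c = 0; c < cols; c++) {
                total += 4;                  /* field length word */
                if (!PQgetisnull(res, r, c))
                    total += PQgetlength(res, r, c);
            }
        }
        return total;
    }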

Perhaps tracing (PQtrace) might help? But would that incur overhead?
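By tracing I mean something like this (sketch; PQsetTraceFlags is PG 14+):

    /* Dump protocol traffic to a file. The output is formatted text,
       though, so recovering raw wire byte counts from it would need
       extra parsing. */
    FILE *tf = fopen("libpq.trace", "w");
    if (tf) {
        PQtrace(conn, tf);
        PQsetTraceFlags(conn, PQTRACE_SUPPRESS_TIMESTAMPS); /* PG 14+ */
    }
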
I'd appreciate any insight. Thanks, --DD
