(reviving an old thread)

On Thu, 23 Jun 2022 at 13:29, Merlin Moncure <mmonc...@gmail.com> wrote:
> I'll still stand by the other point I made though; I'd
> really want to see some benchmarks demonstrating benefit over
> competing approaches that work over the current formats.  That should
> frame the argument as to whether this is a good idea.

I recently tried to use COPY BINARY to copy data from one Postgres
server to another, and it was much slower than I expected. The backend
process on the receiving side was using close to 100% of a CPU core,
so the COPY command was clearly CPU-bound in this case. After doing a
profile it became clear that 50% of the CPU time was spent on parsing
JSON. That's because the binary wire format for jsonb is currently
just the JSON text prefixed with a version byte, so jsonb_recv has to
parse the text anyway. This seems excessive to me; I'm pretty sure any
semi-decent binary format would be able to outperform this.
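
For concreteness, this kind of server-to-server transfer can be set up
with a psql pipe along these lines (hostnames and table name are
placeholders, not my actual setup; the FORMAT binary option is the
relevant part):

    # placeholder connection strings and table name
    psql 'host=source dbname=mydb' \
      -c 'COPY my_table TO STDOUT (FORMAT binary)' \
    | psql 'host=target dbname=mydb' \
      -c 'COPY my_table FROM STDIN (FORMAT binary)'

It's the backend serving the COPY FROM side that pegged the core.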

FYI: the table being copied contained large JSONB blobs in one of its
columns; the blobs were around 15kB per row.
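
To give a rough idea of the data shape, something like the following
produces jsonb values of roughly that size (table name, key count,
value sizes, and row count are all made up for illustration; the real
table differed):

    -- illustrative only: ~110 keys with 128-byte values comes to
    -- roughly 15kB of JSON text per row
    CREATE TABLE copy_test (id bigint, payload jsonb);

    INSERT INTO copy_test
    SELECT g,
           (SELECT jsonb_object_agg('k' || i, repeat(md5(g::text), 4))
            FROM generate_series(1, 110) AS i)
    FROM generate_series(1, 100000) AS g;  -- adjust row count to taste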

