On 22.08.2019 5:40, Kyotaro Horiguchi wrote:
Hello.
At Wed, 21 Aug 2019 19:41:08 +0300, Konstantin Knizhnik <k.knizh...@postgrespro.ru>
wrote in <ed9da20e-01aa-d04b-d085-e6c16b14b...@postgrespro.ru>
Hi, hackers.
One of our customers complains about slow execution of PL/pgSQL
functions compared with Oracle.
So he wants to compile PL/pgSQL functions (most likely just-in-time
compilation).
Certainly the interpreter adds quite a large overhead compared with
native code (~10 times), but most PL/pgSQL functions just run some
SQL queries and iterate through the results.
I find it hard to believe that JIT can significantly speed up such
functions.
So I decided to make a simple experiment: I created a large enough
table and implemented functions which calculate the norm of one
column in different languages.
The results are frustrating (at least for me):
PL/pgSQL: 29044.361 ms
C/SPI: 22785.597 ms
C/coreAPI: 2873.072 ms
PL/Lua: 33445.520 ms
SQL: 7397.639 ms (with parallel execution disabled)
I expected that the difference between PL/pgSQL and the function
implemented in C using SPI would not be large.
But why is it more than 3 times slower than the corresponding SQL query?
The answer seems to lie in the third result: the same function in C
implemented without SPI (using table_beginscan/heap_getnext) is
almost 8 times faster than the SPI version.
It looks like SPI adds quite significant overhead.
And since almost all PL languages use SPI, they all suffer from it.
Looking at the attached spitest.c, it seems that the overhead comes
from the cursor operations, not from SPI itself. As far as spitest.c
goes, the cursor is unnecessary: "SQL" and C/coreAPI seem to be
scanning over the result of *a single* query. If that's correct, why
don't you use SPI_execute() and then scan over SPI_tuptable?
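For reference, a minimal sketch of this suggestion (table and column
names are assumed, not from the thread; this is PostgreSQL extension
code and compiles only against the server headers, so it cannot run
standalone): run the query once with SPI_execute() and iterate over
the materialized SPI_tuptable.

```c
/* Sketch: compute sqrt(sum(x*x)) over an assumed table "big_table"
 * by materializing the whole result set with SPI_execute().
 * Requires PostgreSQL server headers; must run inside a backend. */
#include "postgres.h"
#include "executor/spi.h"
#include <math.h>

static double
norm_via_spi_execute(void)
{
    double  sum = 0.0;
    uint64  i;

    SPI_connect();
    /* read_only = true, tcount = 0 (no row limit) */
    SPI_execute("SELECT x FROM big_table", true, 0);

    for (i = 0; i < SPI_processed; i++)
    {
        bool    isnull;
        Datum   d = SPI_getbinval(SPI_tuptable->vals[i],
                                  SPI_tuptable->tupdesc, 1, &isnull);

        if (!isnull)
        {
            double  v = DatumGetFloat8(d);

            sum += v * v;
        }
    }

    SPI_finish();               /* also releases SPI_tuptable */
    return sqrt(sum);
}
```

The catch with this approach is that SPI_execute() materializes the
entire result set in a tuplestore before the loop starts, which is
exactly the problem raised in the reply below.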
The scanned table is very large and doesn't fit in memory.
This is why I am using SPI cursors.
Please let me know if there is a more efficient way to traverse a
large table using SPI.
regards.
--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company