Version.....
PostgreSQL 9.1.2 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.5.2,
64-bit
Server.....
Server: RX800 S2 (8 x Xeon 7040 3GHz dual-core processors, 32GB memory)
O/S: SLES11 SP1 64-bit
Scenario.....
A legacy application with a bespoke but very efficient interface to its
persistent data. We're looking to replace the application and use PostgreSQL
to hold the data. Performance measurements of the legacy application on the
same server show that it can perform a particular read operation in ~215
microseconds (averaged), which includes processing the request and returning
the result.
Question......
I've written an IMMUTABLE stored procedure that takes no parameters and
returns a fixed value, to try to determine the round-trip overhead of a query
to PostgreSQL. The call is made using libpq. Everything is local, over UNIX
domain sockets.
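
For reference, a stripped-down sketch of the kind of test I mean (the function
body, connection string and timing loop below are illustrative, not my exact
code):

/*
 * Illustrative only: a no-op IMMUTABLE function roughly like
 *
 *   CREATE FUNCTION sp_select_no_op() RETURNS integer
 *       LANGUAGE sql IMMUTABLE AS 'SELECT 1';
 *
 * called repeatedly over the local UNIX domain socket and timed.
 */
#include <stdio.h>
#include <time.h>
#include <libpq-fe.h>

int main(void)
{
    /* Empty conninfo string: libpq defaults to the local UNIX socket. */
    PGconn *conn = PQconnectdb("");
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "connect failed: %s", PQerrorMessage(conn));
        return 1;
    }

    const int loops = 10000;
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);

    for (int i = 0; i < loops; i++) {
        /* Extended-protocol call (Parse/Bind/Execute), as in the strace. */
        PGresult *res = PQexecParams(conn, "SELECT * FROM sp_select_no_op()",
                                     0, NULL, NULL, NULL, NULL, 0);
        if (PQresultStatus(res) != PGRES_TUPLES_OK) {
            fprintf(stderr, "query failed: %s", PQerrorMessage(conn));
            PQclear(res);
            PQfinish(conn);
            return 1;
        }
        PQclear(res);
    }

    clock_gettime(CLOCK_MONOTONIC, &t1);
    double usec = (t1.tv_sec - t0.tv_sec) * 1e6
                + (t1.tv_nsec - t0.tv_nsec) / 1e3;
    printf("average round trip: %.1f microseconds\n", usec / loops);

    PQfinish(conn);
    return 0;
}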
Client-side measurements suggest ~150-200 microseconds to call the function
and get the answer back. A ping to loopback returns in ~20 microseconds (I
assume UNIX domain sockets are comparable).
An strace of the server process seems to confirm that the time spent at the
server is ~150-200 microseconds. For example:
11:17:50.109936 recvfrom(6, "P\0\0\0'\0SELECT * FROM sp_select_no"..., 8192, 0,
NULL, NULL) = 77 <0.000018>
11:17:50.110098 sendto(6, "1\0\0\0\0042\0\0\0\4T\0\0\0(\0\1sp_select_no_op"...,
86, 0, NULL, 0) = 86 <0.000034>
The gap between the recvfrom and the sendto above is 110098 - 109936 = 162
microseconds, so it looks like a local no-op overhead of at least 150
microseconds, which would leave us struggling.
Could someone please let me know whether this is usual and, if so, where the
time is spent?
Short of getting a faster server, is there anything I can do to influence this?
Thanks,
Andy