On 19.01.2018 20:01, Pavel Stehule wrote:


2018-01-19 17:53 GMT+01:00 Konstantin Knizhnik <k.knizh...@postgrespro.ru>:



    On 19.01.2018 19:28, Pavel Stehule wrote:


            When I've been thinking about adding a built-in
            connection pool, my
            rough plan was mostly "bgworker doing something like
            pgbouncer" (that
            is, listening on a separate port and proxying everything
            to regular
            backends). Obviously, that has pros and cons, and probably
            would not serve the threading use case well.


        And we will get the same problem as with pgbouncer: one
        process will not be able to handle all connections...
        Certainly it is possible to start several such scheduling
        bgworkers... But in any case it is more efficient to
        multiplex sessions in the backends themselves.


    pgbouncer holds the client connection the whole time. When we
    implement listeners, all the work can be done by the worker
    processes, not by the listeners.


    Sorry, I do not understand your point.
    In my case pgbench establishes its connections to pgbouncer only
    once, at the beginning of the test.
    Still, pgbouncer spends all its time in context switches (CPU usage
    is 100%, mostly in kernel space: the top entries of the profile are
    kernel functions).
    The picture will be the same if you do such scheduling in a single
    bgworker instead of pgbouncer.
    Modern systems are not able to perform more than a few hundred
    thousand context switches per second.
    So with a single multiplexing thread or process you cannot get
    more than about 100k TPS, while on a powerful NUMA system it is
    possible to achieve millions of TPS.
    This is illustrated by the results I sent in the previous mail: by
    spawning 10 instances of pgbouncer I was able to get 7 times higher
    throughput.
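
To make the bottleneck concrete, here is roughly the inner loop that any
single multiplexing process (pgbouncer or a scheduling bgworker) has to
run. This is a plain C sketch, not pgbouncer's actual code;
multiplex_loop and forward_to_peer are illustrative names. Every request
costs at least one epoll_wait() wakeup plus a read() and a write(), so
throughput is capped by the kernel's context-switch rate no matter how
many backends sit behind the process:

/*
 * Sketch: inner loop of a single-process multiplexer.  All sessions
 * funnel through this one loop, so its syscall/context-switch budget
 * is the whole system's ceiling.
 */
#include <unistd.h>
#include <sys/epoll.h>

#define MAX_EVENTS 64

static void
multiplex_loop(int epfd)
{
    struct epoll_event events[MAX_EVENTS];
    char buf[8192];

    for (;;)
    {
        /* one wakeup (context switch) shared by all ready sessions */
        int n = epoll_wait(epfd, events, MAX_EVENTS, -1);

        for (int i = 0; i < n; i++)
        {
            int     fd = events[i].data.fd;
            ssize_t len = read(fd, buf, sizeof(buf));   /* syscall */

            if (len <= 0)
                continue;       /* real code: handle EOF/errors here */

            /*
             * forward_to_peer() is hypothetical: look up the paired
             * backend (or client) socket and write the bytes to it,
             * costing yet another syscall per request.
             */
            /* forward_to_peer(fd, buf, len); */
        }
    }
}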


pgbouncer is proxy software. I don't think a native pooler should be a proxy too. So comparing pgbouncer with a hypothetical native pooler is not fair, because pgbouncer passes all communication through itself.
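
For concreteness: a listener that does not proxy would have to hand each
accepted socket over to a worker and then get out of the data path, e.g.
with SCM_RIGHTS over a Unix socket pair. Below is a minimal sketch under
that assumption; send_fd is a hypothetical helper, not anything that
exists in PostgreSQL today:

/*
 * Sketch: pass an open descriptor from a listener to a worker over a
 * Unix-domain socket, so the listener never touches the traffic again.
 */
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

static int
send_fd(int channel, int fd)
{
    struct msghdr   msg = {0};
    struct iovec    iov;
    char            payload = 'F';      /* dummy byte, required */
    char            cmsgbuf[CMSG_SPACE(sizeof(int))];
    struct cmsghdr *cmsg;

    iov.iov_base = &payload;
    iov.iov_len = 1;
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = cmsgbuf;
    msg.msg_controllen = sizeof(cmsgbuf);

    cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type = SCM_RIGHTS;        /* pass the open descriptor */
    cmsg->cmsg_len = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

    return sendmsg(channel, &msg, 0) < 0 ? -1 : 0;
}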

If we have separate scheduling bgworker(s), as Tomas proposed, then in any case we will have to do some kind of redirection. It can be done more efficiently than with Unix sockets (as in the case of a locally installed pgbouncer), but even if we use a shared memory queue, performance will be comparable and still limited by the number of context switches. It is possible to increase throughput by combining several requests into one parcel, but that further complicates the communication protocol between clients, scheduling proxies, and executors.
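
One possible shape of such a parcel, just to illustrate the trade-off:
pack length-prefixed requests into a flat buffer and flush it with a
single write, so one syscall/wakeup is amortized over many requests.
Parcel, parcel_add and the raw write() to queue_fd are illustrative
only; in core PostgreSQL this would presumably sit on top of something
like shm_mq rather than a file descriptor. The receiver must now also
unframe the parcel, which is exactly the protocol complication
mentioned above:

/*
 * Sketch: batch several requests into one parcel so that one
 * write()/wakeup covers many of them.
 */
#include <stdint.h>
#include <string.h>
#include <unistd.h>

#define PARCEL_SIZE 8192

typedef struct Parcel
{
    uint32_t used;              /* bytes filled in data[] */
    char     data[PARCEL_SIZE];
} Parcel;

/* Append one length-prefixed request; flush first if it will not fit. */
static void
parcel_add(Parcel *p, int queue_fd, const char *req, uint32_t len)
{
    if (p->used + sizeof(len) + len > PARCEL_SIZE)
    {
        /* one syscall carries every request batched so far */
        (void) write(queue_fd, p->data, p->used);
        p->used = 0;
    }
    memcpy(p->data + p->used, &len, sizeof(len));
    memcpy(p->data + p->used + sizeof(len), req, len);
    p->used += sizeof(len) + len;
}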

--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
