On 19/02/2021 14:29, Damir Simunic wrote:

> On 11 Feb 2021, at 16:06, Tom Lane <t...@sss.pgh.pa.us> wrote:
>
>> Maybe there is some useful thing that can be accomplished here, but
>> we need to consider the bigger picture rather than believing
>> (without proof) that a few hook variables will be enough to do
>> anything.

> Pluggable wire protocol is a game-changer on its own.
>
> The bigger picture is that the right protocol choice enables
> large-scale architectural simplifications for whole classes of
> production applications.
>
> For browser-based applications (LOB, SaaS, e-commerce), having the
> database server speak the browser's protocol enables architectures
> without backend application code. This in turn leads to significant
> reductions in latency, complexity, and application development time.
> And it's not just the lack of backend code: one also profits from all
> the existing infrastructure, like per-query compression/format choice,
> browser connection management, SSE, multiple streams, prioritization,
> caching/CDNs, etc.
>
> Don't know if you'd consider it proof, yet I am seeing 2x to 4x
> latency reduction in production applications from protocol conversion
> to HTTP/2. My present solution is a simple connection pooler I built
> on top of Nginx, transforming the TCP stream as it passes through.

I can see value in supporting different protocols. I don't like the approach discussed in this thread, however.

For example, there has been discussion elsewhere about integrating connection pooling into the server itself. For that, you want to have a custom process that listens for incoming connections, and launches backends independently of the incoming connections. These hooks would not help with that.

Similarly, if you want to integrate a web server into the database server, you probably also want some kind of connection pooling. A one-to-one relationship between HTTP connections and backend processes doesn't seem nice.

With the hooks that exist today, would it be possible to write a background worker that listens on a port, instead of postmaster? Can you launch backends from a background worker? And communicate with the backend processes using a shared memory message queue (see pqmq.c)?
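Purely as an illustration of the shape of that hand-off (in Python rather than the server's C, with multiprocessing queues standing in for the shm_mq/pqmq machinery, and all names invented for the sketch): a "listener" process accepts requests without involving postmaster and feeds a pre-launched worker entirely through message queues.

```python
import multiprocessing as mp

def backend(requests, replies):
    # Analogue of a pre-launched backend: it never touches the listening
    # socket; it only consumes requests from a shared message queue, the
    # way a backend fed via shm_mq / pqmq.c would, and pushes results
    # back on another queue. None is the shutdown sentinel.
    while True:
        msg = requests.get()
        if msg is None:
            break
        replies.put(b"ok:" + msg)

def run():
    requests, replies = mp.Queue(), mp.Queue()
    worker = mp.Process(target=backend, args=(requests, replies))
    worker.start()
    # The "listener" (standing in for a background worker that accepts
    # connections instead of postmaster) forwards each client message
    # to the backend over the queue and collects the replies.
    for msg in (b"SELECT 1", b"SELECT 2"):
        requests.put(msg)
    out = [replies.get() for _ in range(2)]
    requests.put(None)
    worker.join()
    return out

if __name__ == "__main__":
    print(run())  # [b'ok:SELECT 1', b'ok:SELECT 2']
```

The point of the sketch is only the decoupling: the process that owns the listening endpoint and the process that executes queries are independent, connected by queues, which is what the hooks discussed in this thread don't give you.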

I would recommend this approach: write a separate program that sits between the client and PostgreSQL, speaking the custom protocol to the client and libpq to the backend. And then move that program into a background worker process.
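A minimal sketch of that split, in Python rather than C for brevity: the proxy speaks a toy newline-delimited "custom protocol" to the client and relays each request to a stub backend over its own connection. The stub and the line-based framing are assumptions for the sketch; in a real implementation the backend side would be a libpq conversation with PostgreSQL.

```python
import socket
import threading

def backend(server_sock):
    # Stub standing in for PostgreSQL: echoes each request with a
    # "result:" prefix. A real proxy would speak libpq / the PostgreSQL
    # wire protocol on this side instead.
    conn, _ = server_sock.accept()
    with conn:
        f = conn.makefile("rwb")
        for line in f:
            f.write(b"result:" + line)
            f.flush()

def proxy(client_conn, backend_addr):
    # Speaks the toy newline-delimited custom protocol to the client
    # and relays each request over its own backend connection.
    with socket.create_connection(backend_addr) as up:
        upf = up.makefile("rwb")
        cf = client_conn.makefile("rwb")
        for line in cf:
            upf.write(line)
            upf.flush()
            cf.write(upf.readline())
            cf.flush()

def run():
    back_srv = socket.create_server(("127.0.0.1", 0))
    threading.Thread(target=backend, args=(back_srv,), daemon=True).start()

    prox_srv = socket.create_server(("127.0.0.1", 0))

    def accept_one():
        conn, _ = prox_srv.accept()
        with conn:
            proxy(conn, back_srv.getsockname())

    threading.Thread(target=accept_one, daemon=True).start()

    # Client side of the custom protocol.
    with socket.create_connection(prox_srv.getsockname()) as c:
        c.sendall(b"SELECT 1\n")
        return c.makefile("rb").readline().decode().strip()

if __name__ == "__main__":
    print(run())  # result:SELECT 1
```

Once such a program exists as a standalone process, moving it into a background worker is mostly a packaging change; the protocol translation logic stays the same.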

> In a recent case, letting the browser talk directly to the database
> allowed me to get rid of a ~100k-sloc .NET backend and all the
> complexity and infrastructure that goes with
> coding/testing/deploying/maintaining it, while keeping all the
> positives: per-query compression/data conversion, querying multiple
> databases over a single connection, session cookies, etc. Deployment
> is trivial compared to what was before. Latency is down 2x-4x across
> the board.

Querying multiple databases over a single connection is not possible with the approach taken here. Not sure about the other things you listed.

- Heikki

