On Thu, May 16, 2024 at 12:09 PM Jelte Fennema-Nio <m...@jeltef.nl> wrote:
> I don't really understand the benefit of your proposal over option 2
> that I proposed. Afaict you're proposing that for e.g. compression we
> first set _pq_.supports_compression=1 in the StartupMessage and use
> that to do feature detection, and then after we get the response we
> send ParameterSet("compression", "gzip"). To me this is pretty much
> identical to option 2, except that it introduces an extra round trip
> for no benefit (as far as I can see). Why not go for option 2 and send
> _pq_.compression=gzip in the StartupMessage directly.
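As a concrete sketch of what startup-time negotiation along these lines could look like: the client sends a value for _pq_.compression in the StartupMessage (here generalized to a preference-ordered list), and the server either picks an algorithm it supports or falls back to no compression. The option semantics, list format, and server behavior below are assumptions for illustration, not anything in the actual protocol:

```python
# Hypothetical server-side handling of a _pq_.compression startup
# option whose value is a comma-separated, preference-ordered list
# of algorithms. Purely illustrative; not the real protocol.

SERVER_SUPPORTED = {"lz4", "zstd"}  # assumed server build

def server_pick(offered):
    """Return the first client-offered algorithm the server supports,
    or None, meaning: option understood, but no acceptable value."""
    for algo in offered.split(","):
        if algo in SERVER_SUPPORTED:
            return algo
    return None

# Client prefers gzip but also offers lz4; server picks lz4.
assert server_pick("gzip,lz4") == "lz4"
# Nothing in common: fall back to uncompressed.
assert server_pick("gzip") is None
```

With a single scalar value instead of a list, the None case is exactly the awkward one: the server understands the option but can't honor the value, and some follow-up message has to say so.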
Ugh, it's so hard to communicate clearly about this stuff. I didn't really have any thought that we'd ever try to handle something as complicated as compression using ParameterSet.

I tend to agree that for compression I'd like to see the startup packet contain more than _pq_.compression=1, but I'm not sure exactly what would happen after that. If the client asks for _pq_.compression=lz4 and the server tells the client that it doesn't understand _pq_.compression at all, then everybody's on the same page: no compression. But if the server understands the option and isn't OK with the proposed value, what happens then? Does it send a NegotiateCompressionType message after the NegotiateProtocolVersion, for example? That seems like it could lead to the client having to be prepared for a lot of NegotiateX messages somewhere down the road.

I think at some point in the past we discussed having the client list all the algorithms it supports in the argument to _pq_.compression, and then the server would respond with the algorithm it wanted to use, or maybe a list of algorithms it could allow, and we'd go from there. But I'm not entirely sure that's the right idea, either.

Changing compression algorithms in mid-stream is tricky, too. If I tell the server "hey, turn on server-to-client compression!" then I need to be able to identify exactly where that happens. Any messages already sent by the server and not yet processed by me, or any messages sent after that but before the server handles my request, are going to be uncompressed. Then, at some point, I'll start getting compressed data. If the compressed data is framed inside some message type created for that purpose, so that I get a CompressedMessage message and decompress it to recover the actual message, this is simpler to manage. But even then, it's tricky if the protocol shifts.
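On the framing point: if compressed data always arrives wrapped in a dedicated message type, the client never has to guess which bytes on the wire are compressed, because the switch is visible at message granularity. A minimal sketch, assuming a hypothetical 'z' CompressedMessage whose body is a zlib-compressed run of ordinary messages (the type byte, the length convention, and the use of zlib are all assumptions for illustration):

```python
import struct
import zlib

def wrap_compressed(inner_messages: bytes) -> bytes:
    """Frame already-serialized protocol messages inside one
    hypothetical CompressedMessage ('z')."""
    body = zlib.compress(inner_messages)
    # type byte + 4-byte big-endian length (length counts itself,
    # mirroring the existing message convention) + compressed body
    return b"z" + struct.pack("!I", 4 + len(body)) + body

def unwrap_compressed(frame: bytes) -> bytes:
    """Inverse: validate the frame and return the decompressed
    messages, which a client would then parse normally."""
    assert frame[0:1] == b"z"
    (length,) = struct.unpack("!I", frame[1:5])
    body = frame[5:1 + length]
    return zlib.decompress(body)

# A fake payload standing in for one or more real messages.
inner = b"D\x00\x00\x00\x0bhello\x00"
assert unwrap_compressed(wrap_compressed(inner)) == inner
```

One consequence of this kind of framing is that any change of algorithm can take effect cleanly at the next CompressedMessage boundary, rather than at some byte offset the client has to discover.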
If I tell the server, you know what, gzip was a bad choice, I want lz4, I'll need to know where the switch happens to be able to decompress properly. I don't know if we want to support changing compression algorithms in mid-stream. I don't think there's any reason we can't, but it might be a bunch of work for something that nobody really cares about. Not sure.

--
Robert Haas
EDB: http://www.enterprisedb.com