On Mon, Nov 16, 2009 at 9:05 AM, Greg Sabino Mullane wrote:
>> We still need to decide what to do with queue full situations in
>> the proposed listen/notify implementation. I have a new version
>> of the patch to allow for a variable payload size. However, the
>> whole notification must fit into one page so the payload needs
>> to be less than 8K.
>
> That sounds fine to me, FWIW.
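As an aside, the one-page constraint is easy to enforce on the client side too. Here is a minimal sketch of such a guard; `MAX_PAYLOAD` and `check_notify_payload` are hypothetical names, and 8000 bytes is an assumed conservative bound under the default 8K block size (the server's actual usable limit after page and tuple headers would be somewhat smaller):

```python
# Hypothetical client-side guard for the proposed one-page NOTIFY
# payload limit. Assumes the default 8K page size; the real server-side
# limit after headers would be a bit lower, so this is conservative.

MAX_PAYLOAD = 8000  # assumed bound, not the server's exact rule

def check_notify_payload(payload: str) -> bytes:
    """Encode a NOTIFY payload and reject anything near the page limit."""
    raw = payload.encode("utf-8")
    if len(raw) > MAX_PAYLOAD:
        raise ValueError(
            "payload is %d bytes; must fit in one 8K page" % len(raw)
        )
    return raw
```

Note the check is on the encoded byte length, not the character count, since multibyte text can be much larger on disk than `len(payload)` suggests.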
+1! I think this should satisfy everyone.

>> I have also added the XID, so that we can write to the queue before
>> committing to clog which allows for rollback if we encounter write
>> errors (disk full for example). Especially the implications of this
>> change make the patch a lot more complicated.
>
> Can you elaborate on the use case for this?

Tom specifically asked for it: "The old implementation was ACID, so the
new one should be too."

>> so it won't update its pointer for some time. With the current space
>> we can accommodate at least 2147483647 notifications or more,
>> depending on the payload length.
>
> That's a whole lot of notifications. I doubt any program out there is
> using anywhere near that number at the moment. In my applications,
> having a few hundred notifications active at one time is "a lot" in
> my book. :)
>
>> These are the solutions that I currently see:
>>
>> 1) drop new notifications if the queue is full (silently or with
>> rollback)
>
> I like this one best, but not with silence of course. While it's not
> the most polite thing to do, this is for a super extreme edge case.
> I'd rather just throw an exception if the queue is full than start
> messing with the readers. It's a possible denial of service attack
> too, but so is the current implementation in a way - at least I don't
> think apps would perform very optimally with 2147483647 entries in
> the pg_listener table :)
>
> If you need some real-world use cases involving payloads, let me
> know. I've been waiting for this feature for some time and have it
> all mapped out.

Me too.

Joachim: when I benchmarked the original patch, I was seeing a few log
messages that suggested there might be something going on inside. In
any event, the performance was fantastic.

merlin

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers