Robert Haas <robertmh...@gmail.com> writes:
> On Thu, Dec 16, 2010 at 3:18 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:
>> I guess you misunderstood what I said.  What I meant was that we cannot
>> longjmp *out to the outer level*, ie we cannot take control away from
>> the input stack.  We could however have a TRY block inside the interrupt
>> handler that catches and handles (queues) any errors occurring during
>> transaction abort.  As long as we eventually return control to openssl
>> I think it should work.
> Is there any real advantage to that?

Not crashing when something funny happens seems like a real advantage to
me.  (And an unexpected elog(FATAL) will look like a crash to most
users, even if you want to try to define it as not a crash.)

> How often do we hit an error trying to abort a transaction?  And how
> will we report the error anyway?

Queue it up and report it at the next opportunity, as per upthread.

> I thought the next thing we'd report would be the recovery conflict,
> not any bizarre can't-abort-the-transaction scenario.

Well, if we discard it because we're too lazy to implement error message
merging, that's OK.  Presumably it'll still get into the postmaster log.

>> (Hm, but I wonder whether there are any hard
>> timing constraints in the ssl protocol ... although hopefully xact abort
>> won't ever take long enough that that's a real problem.)

> That would be incredibly broken.  Think "authentication timeout".

I wouldn't be a bit surprised if the remote end would drop the
connection if certain events didn't come back reasonably promptly.
There might even be security reasons for that, ie, somebody could
brute-force a key if you give them long enough.  (But this is all
speculation; I don't actually know SSL innards.)

			regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers