On Fri, Jul 19, 2019 at 6:35 PM Robert Haas <robertmh...@gmail.com> wrote:
>
> On Fri, Jul 19, 2019 at 12:10 AM Amit Kapila <amit.kapil...@gmail.com> wrote:
> > We are doing exactly what you have written in the last line of the
> > next paragraph: "stop the transaction from writing undo when the hash
> > table is already too full."  So we will never face the problems
> > related to repeated crash recovery.  The definition of "too full" is
> > that we stop allowing new transactions that can write undo once the
> > hash table already has entries equivalent to
> > (UndoRollbackHashTableSize() - MaxBackends).  Does this make sense?
>
> Oops, I was looking in the wrong place.  Yes, that makes sense, but:
>
> 1. It looks like the test you added to PrepareUndoInsert has no
> locking, and I don't see how that can be right.
>
> 2. It seems like this would result in testing for each new undo
> insertion that gets prepared, whereas surely we would want to test
> only when first attaching to an undo log.  If you've already attached
> to the undo log, there's no reason not to continue inserting into it,
> because doing so doesn't increase the number of transactions (or
> transaction-persistence level combinations) that need undo.
>
I agree that the test should not be done for each undo insertion, but
rather whenever a transaction attaches to an undo log.

> 3. I don't think the test itself is correct.  It can fire even when
> there's no problem.  It is correct (or would be if it said 2 *
> MaxBackends) if every other backend in the system is already attached
> to an undo log (or two).  But if they are not, it will block
> transactions from being started for no reason.
>

Right, we should find a way to know the exact number of transactions
that are attached to an undo log at any point in time; then we can have
a more precise check.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
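[For illustration of the point being discussed, here is a toy,
single-process sketch of an attach-time capacity check that keeps an
exact count of undo-writing transactions.  All names here
(try_attach_undo_log, attached_undo_xacts, etc.) are hypothetical and
not from the actual patch; the real code would do this under a lock in
shared memory, which is Robert's point 1.]

```c
/* Toy model of the discussed check: refuse to let a new transaction
 * start writing undo once the rollback hash table could overflow.
 * Each transaction may need up to two hash-table entries (one per
 * undo persistence level in this simplified model), hence the factor
 * of 2.  Single-process only; no real locking is modeled. */
#include <assert.h>
#include <stdbool.h>

#define HASH_TABLE_SIZE 8   /* stands in for UndoRollbackHashTableSize() */

/* Exact count of transactions currently attached to an undo log;
 * in the server this would live in shared memory under a lock. */
static int attached_undo_xacts = 0;

/* Run once when a transaction first attaches to an undo log --
 * not on every PrepareUndoInsert (point 2).  Returns false if
 * admitting this transaction could overflow the hash table. */
static bool
try_attach_undo_log(void)
{
    if (2 * attached_undo_xacts >= HASH_TABLE_SIZE)
        return false;        /* too full: refuse new undo-writing xact */
    attached_undo_xacts++;   /* exact count, so no false blocking (point 3) */
    return true;
}

/* Run at transaction end (commit, or after rollback is executed). */
static void
detach_undo_log(void)
{
    assert(attached_undo_xacts > 0);
    attached_undo_xacts--;
}
```

Because the count tracks transactions actually attached, rather than
assuming every backend might attach, the check fires only when the
table genuinely cannot accommodate another undo-writing transaction.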