On 12/24/21 09:04, Kyotaro Horiguchi wrote:
...
So, strictly speaking, that is a violation of the constraint I
mentioned, regardless of whether the transaction is committed or
not. However, we have technical limitations, as below.


I don't follow. What violates what?

If the transaction commits (and gets a confirmation from sync
replica), the modified WAL logging prevents duplicate values. It does
nothing for uncommitted transactions. Seems like an improvement to me.

Sorry for the noise. I misunderstood and thought that ROLLBACK was being
changed to roll back sequences.


No problem, this part of the code is certainly rather confusing due to several layers of caching and these WAL-logging optimizations.

No idea. IMHO from the correctness / behavior point of view, the
modified logging is an improvement. The only issue is the additional
overhead, and I think the cache addresses that quite well.

Now I understand the story here.

I agree that the patch is an improvement over the current behavior.
I agree that the overhead eventually amounts to nothing for WAL-emitting workloads.


OK, thanks.

Still, as Fujii-san is concerned, I'm afraid that some people may suffer
from the degradation the patch causes.  I wonder whether it would be
acceptable to get back the previous behavior by exposing SEQ_LOG_VALS
itself, or a boolean to that effect, as a 'not-recommended-to-use' variable.
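
(For concreteness, such a knob would presumably be a developer-only GUC. A rough, hypothetical sketch of what a guc.c boolean entry could look like; the name, description and default are made up:)

    /* hypothetical switch to restore the old pre-logging behavior */
    static bool sequence_prelog_values = false;

    /* entry for the ConfigureNamesBool[] table in guc.c */
    {
        {"sequence_prelog_values", PGC_SUSET, DEVELOPER_OPTIONS,
            gettext_noop("Restores pre-logging of unfetched sequence values."),
            gettext_noop("Not recommended; only restores the previous performance trade-off."),
            GUC_NOT_IN_SAMPLE
        },
        &sequence_prelog_values,
        false,
        NULL, NULL, NULL
    },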


Maybe, but what would such a workload look like? Based on the tests I did, such a workload probably can't generate any WAL. The amount of WAL added by the change is tiny; the regression is caused by having to flush WAL.

The only plausible workload I can think of is just calling nextval, and the cache pretty much fixes that.

FWIW I plan to explore the idea of looking at the sequence page LSN, and flushing WAL only up to that position.
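
Just to sketch the idea (rough, hypothetical code, not the actual patch; it assumes we already have the sequence's buffer pinned and locked):

    #include "postgres.h"
    #include "access/xlog.h"        /* XLogFlush, XLogRecPtr */
    #include "storage/bufmgr.h"     /* Buffer, BufferGetPage */
    #include "storage/bufpage.h"    /* Page, PageGetLSN */

    /*
     * Flush WAL only up to the LSN of the sequence's page, i.e. up to the
     * last WAL record that touched this sequence, instead of flushing
     * everything up to the current insert position.
     */
    static void
    flush_wal_for_sequence(Buffer seqbuf)
    {
        Page        page = BufferGetPage(seqbuf);
        XLogRecPtr  pagelsn = PageGetLSN(page);

        XLogFlush(pagelsn);
    }

That way the flush would cover only WAL up to the sequence's last update, not everything up to the current insert position.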

regards

--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

