The low default value for batch_size_warn_threshold_in_kb is making me
wonder whether I'm approaching the problem of atomicity in a non-ideal
way.

We have one data set duplicated/denormalized into 5 tables to support
different queries, and we use batches to ensure each insert lands in
either all of the tables or none of them.  This works fine, but I've had
to bump the warn and fail thresholds substantially (the warn threshold by
8x).  That, in turn, makes me wonder, given how low the default is,
whether I'm solving this problem in a non-standard way.
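
For context, here's a rough sketch of the pattern (illustrated with the
DataStax Python driver; the keyspace, table, and column names are made up,
and only 2 of the 5 tables are shown):

    from cassandra.cluster import Cluster
    from cassandra.query import BatchStatement, BatchType

    cluster = Cluster(["127.0.0.1"])
    session = cluster.connect("ks")  # hypothetical keyspace

    # One prepared insert per denormalized/query table.
    insert_by_user = session.prepare(
        "INSERT INTO events_by_user (user_id, event_id, payload)"
        " VALUES (?, ?, ?)")
    insert_by_day = session.prepare(
        "INSERT INTO events_by_day (day, event_id, payload)"
        " VALUES (?, ?, ?)")

    def write_event(user_id, day, event_id, payload):
        # LOGGED batch so the insert lands in all of the tables or none.
        batch = BatchStatement(batch_type=BatchType.LOGGED)
        batch.add(insert_by_user, (user_id, event_id, payload))
        batch.add(insert_by_day, (day, event_id, payload))
        # ...plus the remaining denormalized tables...
        session.execute(batch)

The settings I've been raising are batch_size_warn_threshold_in_kb and
batch_size_fail_threshold_in_kb in cassandra.yaml.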

Mostly just looking for confirmation that we're not unintentionally doing
something weird...
