On Tue, Mar 21, 2017 at 6:55 PM, Robert Haas <robertmh...@gmail.com> wrote:

>
> I think that very wide columns and highly indexed tables are not
> particularly unrealistic, nor do I think updating all the rows is
> particularly unrealistic.


Ok. But would those who update 10M rows in a single transaction really
notice a 5-10% variation? I think it probably makes sense to run those
updates in smaller transactions and see if the regression is still visible
(otherwise tweaking synchronous_commit is moot anyway).
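For instance, a rough sketch of what I mean (the table and column names
here are just placeholders, not taken from the actual test case) would be
to split the single 10M-row UPDATE into id-range batches, each committed
separately:

    BEGIN;
    UPDATE testtab SET val = val + 1 WHERE id >       0 AND id <= 1000000;
    COMMIT;
    BEGIN;
    UPDATE testtab SET val = val + 1 WHERE id > 1000000 AND id <= 2000000;
    COMMIT;
    -- ... and so on for the remaining ranges up to 10000000

If the regression largely disappears with per-batch commits, that would
suggest it is tied to the single huge transaction rather than to the
per-row update path itself.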


> Sure, it's not everything, but it's
> something.  Now, I would agree that all of that PLUS unlogged tables
> with fsync=off is not too realistic.  What kind of regression would we
> observe if we eliminated those last two variables?
>

Hard to say. I didn't find any regression on the machines available to me,
even with the original test case that I used, which was a pretty bad case
to start with (sure, Mithun tweaked it further to create an even worse
scenario). Maybe on the kind of machines he has access to, it might show up
even with those changes.


>
>
> I think that whether the code ends up getting contorted is an
> important consideration here.  For example, if the first of the things
> you mention can be done without making the code ugly, then I think
> that would be worth doing; it's likely to help fairly often in
> real-world cases.  The problem with making the code contorted and
> ugly, as you say that the second idea would require, is that it can
> easily mask bugs.


Agreed. That's probably one reason why Alvaro wrote the patch in the first
place. I'll give the first of those two options a try.

Thanks,
Pavan

-- 
 Pavan Deolasee                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services
