On Tue, Sep 23, 2014 at 1:30 AM, Tom Lane <t...@sss.pgh.pa.us> wrote:

> David Johnston <david.g.johns...@gmail.com> writes:
> > My original concern was things that are rounded to zero now will not be in
> > 9.5 if the non-error solution is chosen.  The risk profile is extremely
> > small but it is not theoretically zero.
>
> This is exactly the position I was characterizing as an excessively
> narrow-minded attachment to backwards compatibility.  We are trying to
> make the behavior better (as in less confusing), not guarantee that it's
> exactly the same.


I am going to assume that the feature designers were focused on avoiding the
need to write:

1000 * 60 * 5

to get a 5-minute value set on a millisecond unit parameter.
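
For instance (taking statement_timeout, a millisecond-unit parameter, purely
as an illustration), the unit suffix lets the intent be stated directly
instead of doing that arithmetic by hand:

    SET statement_timeout = 300000;   -- 1000 * 60 * 5, computed by hand
    SET statement_timeout = '5min';   -- the same value, via the unit suffix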

The designer of the variable, in choosing a unit, has specified the minimum
value that they consider sane.  Attempting to record an insane value should
throw an error.

I do not support throwing an error on all attempts to round, but specifying
a value less than 1 in the variable's unit should not be allowed.  If such
a value is proposed, the user either made an error OR they misunderstand the
variable they are using.  In either case, telling them of their error is
friendlier than letting them discover the problem on their own.
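
To make the contested case concrete, here is a sketch of the kind of setting
at issue, using log_rotation_age (whose unit is minutes) purely as an example
parameter:

    # postgresql.conf -- log_rotation_age is measured in minutes
    log_rotation_age = '30s'    # half of the parameter's unit

Today that rounds down to 0, which silently disables time-based rotation;
under the non-error proposal it would presumably no longer come out as 0,
and under what I am suggesting it would be rejected outright.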

> If we are only allowed to change the behavior by
> throwing errors in cases where we previously didn't, then we are
> voluntarily donning a straitjacket that will pretty much ensure Postgres
> doesn't improve any further.


I'm not proposing project-level policy here.

David J.
