On Tue, 20 Sep 2011 08:20:27 -0700 Paul Berry <stereotype...@gmail.com> wrote:
> On 20 September 2011 01:05, Tolga Dalman <tolga.dal...@googlemail.com> wrote:
>
> Tolga, your uneasiness is certainly justified in terms of conventional
> wisdom. As someone who used to write numerical analysis software, I cringe
> whenever I see floats compared using ==, because I know that this is almost
> always a bad idea because of rounding errors. But there are occasional
> exceptions, and I think this is one of them, for two reasons:
>
> (a) most of the time when one is tempted to compare two floats for equality,
> it's worse to have a false negative (classifying two floats as unequal when
> in fact they're only unequal because of rounding errors) than to have a
> false positive (classifying two floats as equal when in fact they're not
> quite equal because of rounding errors). But since the is_one() and
> is_zero() functions are used to identify opportunities for optimizing the
> shader, a false negative just means that we might occasionally generate less
> optimal code, and a false positive means the shader would produce incorrect
> outputs. In this case a false positive is worse, so we should err on the
> side of classifying floats as unequal, even if they are very close.
>
> (b) the vast majority of the 1.0 and 0.0 values that we need to identify
> with the is_one() and is_zero() functions either come from literal 1.0 and
> 0.0 values (either in Mesa code or in the user's shaders) or from trivial
> algebraic simplifications like 2.0 - 1.0 => 1.0. Since these values can be
> represented perfectly in floats, there isn't an opportunity for rounding
> errors to accumulate.

Fair enough. So the real question here is which variant produces better
results -- given your reasoning above, I suspect the current behavior might
even be the better one.

> I also found your proposed alternative a little surprising:
>
>     fabs(imm.f - 1.0) < std::numeric_limits<float>::epsilon()
>
> When rounding errors accumulate, they typically wind up being much larger
> than epsilon().
> If your goal is to make the test tolerant of rounding
> errors, wouldn't you need a value much larger than epsilon() as your
> threshold? Or is your goal to get the same computational effect without the
> cringe-worthiness of using "=="?

If rounding errors exceed epsilon, the compared values are no longer
considered equal. My reasoning goes the other way round: applying '==' to
ambiguous representations of the same value is plain wrong, whereas I
believe comparing against epsilon is the more correct way (though more
expensive in terms of performance).

> In any case I think this discussion may be moot, because AFAICT this patch
> doesn't apply on master--brw_vec4.cpp has never had a src_reg_is_zero() or
> src_reg_is_one() function. Or am I missing something?

It is not my aim to start a discussion leading nowhere. I just wondered in
the first place whether my proposal might improve the current code.

Thanks for the clarification!
Tolga Dalman
_______________________________________________
mesa-dev mailing list
mesa-dev@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/mesa-dev