Best practice when dealing with floating point is to normalize and chop. Normalize your data set before doing any calculations, so that every number satisfies -1.0 <= x <= 1.0. Done properly, once the calculations are complete the data set is easily returned to its original range and domain by reversing the normalization.
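A minimal Python sketch of both steps (the function names, sample data, and the zero threshold are illustrative, not a standard API):

```python
def normalize(xs):
    """Scale a data set into [-1.0, 1.0]; also return the (lo, hi)
    bounds needed to reverse the transform afterwards."""
    lo, hi = min(xs), max(xs)
    return [2.0 * (x - lo) / (hi - lo) - 1.0 for x in xs], (lo, hi)

def denormalize(scaled, bounds):
    """Reverse the normalization, restoring the original range."""
    lo, hi = bounds
    return [(s + 1.0) * (hi - lo) / 2.0 + lo for s in scaled]

def chop(x, digits=6, zero_tol=1e-5):
    """Round to a fixed number of decimal digits and snap values
    near zero to exactly 0.0, limiting error propagation."""
    return 0.0 if abs(x) < zero_tol else round(x, digits)

data = [12.5, 40.0, 33.3, 97.1]
scaled, bounds = normalize(data)
restored = [chop(x, digits=9) for x in denormalize(scaled, bounds)]
```

In a real matrix workflow, chop would be applied to every element after each row operation, as described below.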
Chop is basically rounding to a specified number of digits. Four to eight digits is often adequate, but every application is different. When performing matrix convolutions, chop each number in the matrix after every row operation. It can be shown mathematically that the final result is more accurate than allowing the floating point resolution error to propagate: a number such as 2.99999 should round to 3.0000 before the next round of calculations. This is especially important for results near zero; numbers smaller than some threshold, for instance 0.00001, should be set to zero. This matters most in matrix operations. All floating point math libraries have round and/or chop functions. Not chopping when processing large data sets will ultimately produce significantly wrong results as floating point resolution errors propagate; chopping corrects for them.

If you remember your high school chemistry or biology class, one of the first lessons is about significant digits. The proper number of significant digits depends on your application and field of study, but using it consistently throughout your calculations gives a more correct result than not doing so.

------------
Scott Doctor
scott at scottdoctor.com
------------------

On 9/9/2015 11:47 AM, R.Smith wrote:
>
> On 2015-09-09 05:19 PM, Constantine Yannakopoulos wrote:
>> On Wed, Sep 9, 2015 at 4:54 PM, Igor Tandetnik
>> <igor at tandetnik.org> wrote:
>>
>>> A comparison like this would not generally be a proper
>>> collation. The equivalence relation it induces is not
>>> transitive - it's possible to have A == B and B == C but
>>> A != C (when A is "close enough" to B and B is "close
>>> enough" to C, but A and C are just far enough from each other).
>>
>> Out of curiosity, doesn't this also apply to numeric
>> (real number) comparisons, since SQLite3 uses IEEE floating
>> point arithmetic?
>
> IEEE float comparisons do not work this way - you are more
> likely to find the opposite: two numbers that seem to be
> near perfectly equal might fail an equality test.
>
> Such confusion might be caused by statements such as:
>   ...WHERE (5.6 - 3.1) = 2.5
>   ...WHERE (14 * 0.4) = 5.6
>
> which might return false if two or more of the constants
> cannot be precisely represented. (The second one is a known
> problem value.)
>
> Nothing however would "seem" equal to the processor if the
> values are not exactly equal in binary form - no "almost"
> matching happens.
>
> BTW: In strict math it can be shown that 0.999... (repeating)
> is exactly equal to 1, but in IEEE floats they are not - only
> because an 8-byte (64-bit) float lacks the capacity to carry
> the repeating nines to a sufficiently wide representation to
> find the one-ness of it.
>
> https://en.wikipedia.org/wiki/0.999...
>
> IEEE fun in C#:
>
> Testing 1/3:
> f = 0.3333333
> d = 0.333333333333333
> m = 0.3333333333333333333333333333
> f*3 = 1
> d*3 = 1
> m*3 = 0.9999999999999999999999999999
> (double)f*3 = 1.00000002980232
> (decimal)f*3 = 0.9999999
> (decimal)d*3 = 0.999999999999999
> (double)((float)i/3)*3 = 1
>
> Testing 2/3:
> f = 0.6666667
> d = 0.666666666666667
> m = 0.6666666666666666666666666667
> f*3 = 2
> d*3 = 2
> m*3 = 2.0000000000000000000000000001
> (double)f*3 = 2.00000005960464
> (decimal)f*3 = 2.0000001
> (decimal)d*3 = 2.000000000000001
> (double)((float)i/3)*3 = 2
>
> Cheers,
> Ryan
>
> _______________________________________________
> sqlite-users mailing list
> sqlite-users at mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>
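For what it's worth, the problem values Ryan quotes are easy to reproduce in Python, which uses the same IEEE-754 doubles; the tolerance-based check is one sketch of the "chop" idea from above, not SQLite's behavior:

```python
import math

# Neither 5.6, 3.1, nor 0.4 has an exact binary representation,
# so strict equality fails even though the decimal math is exact.
print((5.6 - 3.1) == 2.5)   # False: 5.6 - 3.1 -> 2.4999999999999996
print((14 * 0.4) == 5.6)    # False: 14 * 0.4 -> 5.6000000000000005

# A tolerance-based comparison absorbs the resolution error.
print(math.isclose(5.6 - 3.1, 2.5))  # True
```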