This discussion seems to have halted without a conclusion. What is going
to happen? Will extended precision numerics change to use an int128 (and
maybe extend the maximum precision to 38)? Or will the current
Decimal128-backed solution remain?
Mark
On 21-6-2019 15:53, Alex Peshkoff via Firebird-devel wrote:
I've compared various possible implementations of the high precision
numeric. Besides the one existing in fb4 (decfloat based), I checked
gcc's native __int128 and ttmath (a fixed high precision library with a
pure .h implementation). The test performed a mix of sum/mult/div
operations in a loop. The native code was compiled without optimization;
even with -O1 the loop was optimized away and the test completed at once.
Something like this:

for (int i = 0; i < n; ++i)
{
    e += (a / b) + (c * d);
    a++;
    c--;
}
Results are generally as expected (x64):
gcc      - 0.5 sec
ttmath   - 1.2 sec
decfloat - 10.5 sec
As an additional bonus, the internal binary layout of the 128-bit
integer is the same for __int128 on x64 (on x86 it is unsupported), for
ttmath's 128-bit class on x64, and for ttmath on x86. I could not test
other architectures (big-endian ones are the most interesting), but
looking at the code I do not expect bad surprises from that library.
That is, I suggest replacing the decfloat-based implementation of the
high precision numeric with a native 128-bit integer where possible,
and with ttmath otherwise. That would make it possible to use 128-bit
integers in all cases where 64 bits are not enough, without a serious
performance penalty. Comments?
Firebird-Devel mailing list, web interface at
https://lists.sourceforge.net/lists/listinfo/firebird-devel
--
Mark Rotteveel