Re: [PATCH 0/5] GCC _BitInt support [PR102989]

2023-08-01 Thread Jakub Jelinek via Gcc-patches
On Fri, Jul 28, 2023 at 06:03:33PM +0000, Joseph Myers wrote:
> You could e.g. have a table up to 10^(N-1) for some N, and 10^N, 10^2N 
> etc. up to 10^6144 (or rather up to 10^6111, which can then be multiplied 
> by a 34-digit integer significand), so that only one multiplication is 
> needed to get the power of 10 and then a second multiplication by the 
> significand.  (Or split into three parts at the cost of an extra 
> multiplication, or multiply the significand by 1, 10, 100, 1000 or 10000 
> as a multiplication within 128 bits and so only need to compute 10^k for k 
> a multiple of 5, or any number of variations on those themes.)

So, I've done some quick counting.  If we want at most one multiplication
to get 10^X for X in 0..6111 (plus another to multiply the mantissa by
that), having one table with 10^1..10^(N-1) and another with 10^(Y*N) for
Y in 1..6111/N, I get for 64-bit limbs:
S1 - size of the 10^1..10^(N-1) table in bytes
S2 - size of the 10^(Y*N) table in bytes
S  - total (S1 + S2)
  N     S1      S2       S
 20    152  388792  388944
 32    344  241848  242192
 64   1104  121560  122664
128   3896   60144   64040
255  14472   29320   43792
256  14584   29440   44024
266  15704   28032   43736
384  32072   19192   51264
512  56384   14080   70464
where 266 seems to be the minimum, though the difference from 256 is minimal
and having N a power of 2 seems cheaper.  Though the above is just counting
the bytes of the 64-bit limb arrays concatenated together; I think it will
also be helpful to have an unsigned short table with the indexes into the
limb array (so another 256*2 + 24*2 bytes).
For something not in libgcc_s.so but in libgcc.a I guess 43.5KiB of .rodata
might be acceptable to make it fast.
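
To make the two-table idea concrete, here is a toy-scale sketch (the names,
N = 8, and the unsigned __int128 stand-in for a multi-limb product are all
illustrative, not code from the patch; the real tables would hold 64-bit
limb arrays up to 10^6111 plus the unsigned short index table mentioned
above):

#include <stdio.h>

/* Toy tables: pow10_small holds 10^0..10^(N-1), pow10_big holds 10^(Y*N).  */
#define N 8
static const unsigned long long pow10_small[N]
  = { 1ULL, 10ULL, 100ULL, 1000ULL, 10000ULL, 100000ULL, 1000000ULL,
      10000000ULL };
static const unsigned long long pow10_big[3]
  = { 1ULL, 100000000ULL, 10000000000000000ULL };

/* One multiplication suffices to produce any 10^x covered by the tables.  */
static unsigned __int128
pow10_toy (unsigned x)
{
  return (unsigned __int128) pow10_big[x / N] * pow10_small[x % N];
}

int
main ()
{
  unsigned __int128 p = pow10_toy (19);		/* 10^16 * 10^3 */
  printf ("%llu%018llu\n", (unsigned long long) (p / 1000000000000000000ULL),
	  (unsigned long long) (p % 1000000000000000000ULL));
  return 0;
}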

Jakub



Re: [PATCH 0/5] GCC _BitInt support [PR102989]

2023-07-28 Thread Joseph Myers
On Fri, 28 Jul 2023, Jakub Jelinek via Gcc-patches wrote:

> I had a brief look at libbid and am totally unimpressed.
> Seems we don't implement {,unsigned} __int128 <-> _Decimal{32,64,128}
> conversions at all (we emit calls to __bid_* functions which don't exist),

That's bug 65833.

> the library (or the way we configure it) doesn't care about exceptions nor
> rounding mode (see following testcase)

And this is related to the never-properly-resolved issue about the split 
of responsibility between libgcc, libdfp and glibc.

Decimal floating point has its own rounding mode, set with fe_dec_setround 
and read with fe_dec_getround (so this test is incorrect).  In some cases 
(e.g. Power), that's a hardware rounding mode.  In others, it needs to be 
implemented in software as a TLS variable.  In either case, it's part of 
the floating-point environment, so should be included in the state 
manipulated by functions using fenv_t or femode_t.  Exceptions are shared 
with binary floating point.
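
For the software case, a minimal sketch of what such a TLS rounding mode
could look like (the FE_DEC_* values are placeholders here, this is not the
libdfp or libbid code, and a complete implementation would also have to be
captured/restored by the fenv_t and femode_t functions):

#define FE_DEC_TONEAREST 0
#define FE_DEC_TOWARDZERO 1
#define FE_DEC_UPWARD 2
#define FE_DEC_DOWNWARD 3
#define FE_DEC_TONEARESTFROMZERO 4

static __thread int __dec_round_mode = FE_DEC_TONEAREST;

int
fe_dec_getround (void)
{
  return __dec_round_mode;
}

int
fe_dec_setround (int mode)
{
  if (mode < FE_DEC_TONEAREST || mode > FE_DEC_TONEARESTFROMZERO)
    return 1;		/* Nonzero on failure, mirroring fesetround.  */
  __dec_round_mode = mode;
  return 0;
}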

libbid in libgcc has its own TLS rounding mode and exceptions state, but 
the former isn't connected to fe_dec_setround / fe_dec_getround functions, 
while the latter isn't the right way to do things when there's hardware 
exceptions state.

libdfp - https://github.com/libdfp/libdfp - is a separate library, not 
part of libgcc or glibc (and with its own range of correctness bugs) - 
maintained, but not very actively (maybe more so than the DFP support in 
GCC - we haven't had a listed DFP maintainer since 2019).  It has various 
standard DFP library functions - maybe not the full C23 set, though some 
of the TS 18661-2 functions did get added, so it's not just the old TR 
24732 set.  That includes its own version of the libgcc support, which I 
think has some more support for using exceptions and rounding modes.  It 
includes the fe_dec_getround and fe_dec_setround functions.  It doesn't do 
anything to help with the issue of including the DFP rounding state in the 
state manipulated by functions such as fegetenv.

Being a separate library probably in turn means that it's less likely to 
be used (although any code that uses DFP can probably readily enough 
choose to use a separate library if it wishes).  And it introduces issues 
with linker command line ordering, if the user intends to use libdfp's 
copy of the functions but the linker processes -lgcc first.

For full correctness, at least some functionality (such as the rounding 
modes and associated inclusion in fenv_t) would probably need to go in 
glibc.  See 
https://sourceware.org/pipermail/libc-alpha/2019-September/106579.html 
for more discussion.

But if you do put some things in glibc, maybe you still don't want the 
_BitInt conversions there?  Rather, if you keep the _BitInt conversions in 
libgcc (even when the other support is in glibc), you'd have some 
libc-provided interface for libgcc code to get the DFP rounding mode from 
glibc in the case where it's handled in software, like some interfaces 
already present in the soft-float powerpc case to provide access to its 
floating-point state from libc (and something along the lines of 
sfp-machine.h could tell libgcc how to use either that interface or 
hardware instructions to access the rounding mode and exceptions as 
needed).
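
Purely as an illustration of the shape such an interface could take (none
of the names below exist today; they are hypothetical):

/* Hypothetical: glibc would own the software DFP rounding state and export
   a getter; an sfp-machine.h-style macro would let libgcc pick between this
   call and a hardware read of the rounding field.  */
extern int __libc_get_dec_rounding_mode (void);

#define _FP_DEC_GET_ROUND_MODE() __libc_get_dec_rounding_mode ()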

> and for integral <-> _Decimal32
> conversions implement them as integral <-> _Decimal64 <-> _Decimal32
> conversions.  While in the _Decimal32 -> _Decimal64 -> integral
> direction that is probably ok, even if exceptions and rounding (other than
> to nearest) were supported, the other direction I'm sure can suffer from
> double rounding.

Yes, double rounding would be an issue for converting 64-bit integers to 
_Decimal32 via _Decimal64 (it would be fine to convert 32-bit integers 
like that since they can be exactly represented in _Decimal64; it would be 
fine to convert 64-bit integers via _Decimal128).
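
A concrete illustration (an arbitrarily picked value, not from the testcase
below), in round-to-nearest: converting the 64-bit integer 12345645000000001
directly to _Decimal32 gives

  12345645000000001 -> 1234565E10	(remainder 5000000001 is above half an ulp)

whereas going through _Decimal64 first gives

  12345645000000001 -> 1234564500000000E1	(trailing 1 rounds down)
		    -> 1234564E10		(exact tie, rounds to even)

so the two paths differ in the last digit.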

> So, I wonder if it wouldn't be better to implement these in the soft-fp
> infrastructure, which at least has the exception and rounding mode support.
> Unlike DPD, decoding BID seems to be about 2 simple tests of the 4 bits
> below the sign bit and doing some shifts, so not something one needs a 10MB
> library for.  Now, sure, 5MB out of that are generated tables in

Note that representations with too-large significand are defined to be 
noncanonical representations of zero, so you need to take care of that in 
decoding BID.
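
For reference, a minimal sketch of such a decode for BID _Decimal64,
written from the IEEE 754-2008 description (the function name and interface
are made up; this is not the libbid code):

#include <stdbool.h>
#include <stdint.h>

/* Decode a BID-encoded _Decimal64 into sign, unbiased exponent and integer
   significand.  Returns false for Inf/NaN, which need separate handling.  */
static bool
bid64_decode (uint64_t x, bool *sign, int *exp, uint64_t *sig)
{
  *sign = x >> 63;
  if (((x >> 59) & 0xf) == 0xf)		/* the 4 bits below the sign bit */
    return false;			/* Inf or NaN */
  if (((x >> 61) & 0x3) == 0x3)		/* significand with implicit 0b100 */
    {
      *exp = (int) ((x >> 51) & 0x3ff) - 398;
      *sig = (1ULL << 53) | (x & ((1ULL << 51) - 1));
    }
  else
    {
      *exp = (int) ((x >> 53) & 0x3ff) - 398;
      *sig = x & ((1ULL << 53) - 1);
    }
  if (*sig > 9999999999999999ULL)	/* noncanonical -> value is zero */
    *sig = 0;
  return true;
}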

> bid_binarydecimal.c, but unfortunately those are static and not in a form
> which could be directly fed into multiplication (unless we'd want to go
> through conversions to/from strings).
> So, it seems easier to guess the needed power of 10 from the number of
> binary digits or vice versa, have a small table of powers of 10 (say those
> which fit into a limb) and construct larger powers of 10 by multiplying
> those several times.  _Decimal128 has exponents up to 6144, and 10^6144 is
> ~2552 bytes or 319 64-bit limbs, but having a table with all the 6144
> powers of ten would be just huge.

Re: [PATCH 0/5] GCC _BitInt support [PR102989]

2023-07-28 Thread Jakub Jelinek via Gcc-patches
On Thu, Jul 27, 2023 at 06:41:44PM +0000, Joseph Myers wrote:
> On Thu, 27 Jul 2023, Jakub Jelinek via Gcc-patches wrote:
> 
> > - _BitInt(N) bit-fields aren't supported yet (the patch rejects them); I'd 
> > like
> >   to enable those incrementally, but don't really see details on how such
> >   bit-fields should be laid out in memory or passed inside of function
> >   arguments; LLVM implements something, but it is a question if that is what
> >   the various ABIs want
> 
> So if the x86-64 ABI (or any other _BitInt ABI that already exists) 
> doesn't specify this adequately then an issue should be filed (at 
> https://gitlab.com/x86-psABIs/x86-64-ABI/-/issues in the x86-64 case).
> 
> (Note that the language specifies that e.g. _BitInt(123):45 gets promoted 
> to _BitInt(123) by the integer promotions, rather than left as a type with 
> the bit-field width.)

Ok, I'll try to investigate in detail what LLVM does and what GCC would do
if I just enabled the bitfield support and report.  Still, I'd like to
handle this only as an incremental step after the rest of _BitInt support goes
in.

> > - conversions between large/huge (see later) _BitInt and _Decimal{32,64,128}
> >   aren't supported and emit a sorry; I'm not familiar enough with DFP stuff
> >   to implement that
> 
> Doing things incrementally might indicate first doing this only for BID 
> (so sufficing for x86-64), with DPD support to be added when _BitInt 
> support is added for an architecture using DPD, i.e. powerpc / s390.
> 
> This conversion is a mix of base conversion and things specific to DFP 
> types.

I had a brief look at libbid and am totally unimpressed.
Seems we don't implement {,unsigned} __int128 <-> _Decimal{32,64,128}
conversions at all (we emit calls to __bid_* functions which don't exist),
the library (or the way we configure it) doesn't care about exceptions nor
rounding mode (see following testcase) and for integral <-> _Decimal32
conversions implement them as integral <-> _Decimal64 <-> _Decimal32
conversions.  While in the _Decimal32 -> _Decimal64 -> integral
direction that is probably ok, even if exceptions and rounding (other than
to nearest) were supported, the other direction I'm sure can suffer from
double rounding.

So, I wonder if it wouldn't be better to implement these in the soft-fp
infrastructure, which at least has the exception and rounding mode support.
Unlike DPD, decoding BID seems to be about 2 simple tests of the 4 bits
below the sign bit and doing some shifts, so not something one needs a 10MB
library for.  Now, sure, 5MB out of that are generated tables in
bid_binarydecimal.c, but unfortunately those are static and not in a form
which could be directly fed into multiplication (unless we'd want to go
through conversions to/from strings).
So, it seems easier to guess the needed power of 10 from the number of
binary digits or vice versa, have a small table of powers of 10 (say those
which fit into a limb) and construct larger powers of 10 by multiplying
those several times.  _Decimal128 has exponents up to 6144, and 10^6144 is
~2552 bytes or 319 64-bit limbs, but having a table with all the 6144
powers of ten would be just huge.  Powers of ten up to 10^19 fit into a
64-bit limb, so we might need say < 32 multiplications to cover it all
(but with the current 575 bits limitation far less).  Perhaps later on we
could write a few selected powers of 10 as _BitInt constants to decrease
that number.
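
A self-contained sketch of that construction (illustrative only; the real
code would live in libgcc and use its own limb helpers rather than the toy
ones below):

#include <stdio.h>
#include <stdint.h>

#define LIMBS 10		/* enough for 10^k with k up to ~190 */

/* res *= m; res has n little-endian 64-bit limbs; returns new limb count.  */
static unsigned
mul1 (uint64_t *res, unsigned n, uint64_t m)
{
  unsigned __int128 carry = 0;
  for (unsigned i = 0; i < n; i++)
    {
      carry += (unsigned __int128) res[i] * m;
      res[i] = (uint64_t) carry;
      carry >>= 64;
    }
  if (carry)
    res[n++] = (uint64_t) carry;
  return n;
}

int
main ()
{
  const uint64_t p10_19 = 10000000000000000000ULL; /* largest limb-sized 10^k */
  unsigned k = 100, i;
  uint64_t res[LIMBS] = { 1 }, rest = 1;
  unsigned n = 1;

  for (i = 0; i + 19 <= k; i += 19)	/* ~k/19 single-limb multiplications */
    n = mul1 (res, n, p10_19);
  for (; i < k; i++)
    rest *= 10;				/* remaining 10^(k mod 19) */
  n = mul1 (res, n, rest);
  printf ("10^%u occupies %u limbs; top limb 0x%016llx\n", k, n,
	  (unsigned long long) res[n - 1]);
  return 0;
}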

> For conversion *from _BitInt to DFP*, the _BitInt value needs to be 
> expressed in decimal.  In the absence of optimized multiplication / 
> division for _BitInt, it seems reasonable enough to do this naively 
> (repeatedly dividing by a power of 10 that fits in one limb to determine 
> base 10^N digits from the least significant end, for example), modulo 
> detecting obvious overflow cases up front (if the absolute value is at 

Wouldn't it be cheaper to guess using the 10^3 ~= 2^10 approximation
and instead repeatedly multiply like in the other direction and then just
divide once with remainder?
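
To put rough numbers on that (illustrative arithmetic, not from the patch):

  2^575 - 1 ~= 1.24 * 10^173  -> at most 174 decimal digits
  575 * 3 / 10 = 172.5        -> estimate from 10^3 ~= 2^10, off by at most ~2,
                                 correctable with a single comparison
  174 - 34 = 140              -> need ~10^140 to split off _Decimal128's 34 digits
  140 / 19 ~= 8               -> ~8 multiplications by 10^19 (fewer with bigger
                                 precomputed powers), then one division with
                                 remainder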

Jakub
#include <fenv.h>

int
main ()
{
  volatile _Decimal64 d;
  volatile long long l;
  int e;

  feclearexcept (FE_ALL_EXCEPT);
  d = __builtin_infd64 ();
  l = d;
  e = fetestexcept (FE_INVALID);
  feclearexcept (FE_ALL_EXCEPT);
  __builtin_printf ("%016lx %d\n", l, e != 0);
  l = 50LL;
  fesetround (FE_TONEAREST);
  d = l;
  __builtin_printf ("%ld\n", (long long) d);
  fesetround (FE_UPWARD);
  d = l;
  fesetround (FE_TONEAREST);
  __builtin_printf ("%ld\n", (long long) d);
  fesetround (FE_DOWNWARD);
  d = l;
  fesetround (FE_TONEAREST);
  __builtin_printf ("%ld\n", (long long) d);
  l = 01LL;
  fesetround (FE_TONEAREST);
  d = l;
  __builtin_printf ("%ld\n", (long long) d);
  fesetround (FE_UPWARD);
  d = l;
  fesetround (FE_TONEAREST);
  __builtin_printf ("%ld\n", (long long) d);
  fesetround (FE_DOWNWARD);
  d = l;
  fesetround (FE_TONEAREST);
  __builtin_printf ("%ld\n", (long long) d);
}


Re: [PATCH 0/5] GCC _BitInt support [PR102989]

2023-07-27 Thread Joseph Myers
On Thu, 27 Jul 2023, Jakub Jelinek via Gcc-patches wrote:

> - _BitInt(N) bit-fields aren't supported yet (the patch rejects them); I'd 
> like
>   to enable those incrementally, but don't really see details on how such
>   bit-fields should be laid out in memory or passed inside of function
>   arguments; LLVM implements something, but it is a question if that is what
>   the various ABIs want

So if the x86-64 ABI (or any other _BitInt ABI that already exists) 
doesn't specify this adequately then an issue should be filed (at 
https://gitlab.com/x86-psABIs/x86-64-ABI/-/issues in the x86-64 case).

(Note that the language specifies that e.g. _BitInt(123):45 gets promoted 
to _BitInt(123) by the integer promotions, rather than left as a type with 
the bit-field width.)
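
A small illustration of that rule (hypothetical for now, since the patch
still rejects _BitInt bit-fields):

struct S { _BitInt(123) f : 45; };

/* Under the C23 integer promotions s.f promotes to _BitInt(123), not to a
   45-bit type, so the addition below is performed in _BitInt(123).  */
_BitInt(123)
g (struct S s)
{
  return s.f + 1wb;
}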

> - conversions between large/huge (see later) _BitInt and _Decimal{32,64,128}
>   aren't supported and emit a sorry; I'm not familiar enough with DFP stuff
>   to implement that

Doing things incrementally might indicate first doing this only for BID 
(so sufficing for x86-64), with DPD support to be added when _BitInt 
support is added for an architecture using DPD, i.e. powerpc / s390.

This conversion is a mix of base conversion and things specific to DFP 
types.

For conversion *from DFP to _BitInt*, the DFP value needs to be 
interpreted (hopefully using existing libbid code) as the product of a 
sign, an integer and a power of 10, with appropriate truncation of the 
fractional part if there is one (and appropriate handling of infinity / 
NaN / values where the integer part obviously doesn't fit in the type as 
raising "invalid" and returning an arbitrary result).  Then it's just a 
matter of doing an integer multiplication and producing an appropriately 
signed result (which might itself overflow the range of representable 
values with the given sign, meaning "invalid" should be raised).  
Precomputed tables of powers of 10 in binary might speed up the 
multiplication process (don't know if various existing tables in libbid 
are usable for that).  It's unspecified whether "inexact" is raised for 
non-integer DFP values.
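
As a sketch of that direction, bounded to an unsigned 128-bit result for
brevity (the decoded sign/exponent/significand triple is taken as input;
helper names are made up, and the real libgcc routine would produce an
arbitrary-width limb array and handle signed targets as well):

#include <fenv.h>
#include <stdbool.h>
#include <stdint.h>

/* Sketch only: value = (-1)^sign * sig * 10^exp, already decoded from the
   _Decimal64 representation; convert it to unsigned __int128, truncating
   any fractional part and raising "invalid" on overflow or negative input.  */
static unsigned __int128
dec_to_u128 (bool sign, int exp, uint64_t sig)
{
  unsigned __int128 res;

  if (exp <= -16)
    sig = 0;			/* |value| < 1, truncates to 0 */
  for (; exp < 0 && sig; exp++)
    sig /= 10;			/* drop fractional digits */
  res = sig;
  for (; exp > 0; exp--)
    {
      if (res > ~(unsigned __int128) 0 / 10)
	{
	  feraiseexcept (FE_INVALID);	/* doesn't fit the target type */
	  return ~(unsigned __int128) 0;
	}
      res *= 10;
    }
  if (sign && res != 0)
    {
      feraiseexcept (FE_INVALID);	/* negative value, unsigned target */
      return 0;
    }
  return res;
}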

For conversion *from _BitInt to DFP*, the _BitInt value needs to be 
expressed in decimal.  In the absence of optimized multiplication / 
division for _BitInt, it seems reasonable enough to do this naively 
(repeatedly dividing by a power of 10 that fits in one limb to determine 
base 10^N digits from the least significant end, for example), modulo 
detecting obvious overflow cases up front (if the absolute value is at 
least 10^97, conversion to _Decimal32 definitely overflows in all rounding 
modes, for example, so you just need to do an overflowing computation that 
produces a result with the right sign in order to get the correct 
rounding-mode-dependent result and exceptions).  Probably it isn't 
necessary to convert most of those base 10^N digits into base 10 digits.  
Rather, it's enough to find the leading M (= precision of the DFP type in 
decimal digits) base 10 digits, plus to know whether what follows is 
exactly 0, exactly 0.5, between 0 and 0.5, or between 0.5 and 1.

Then adding two appropriate DFP values with the right sign produces the 
final DFP result.  Those DFP values would need to be produced from integer 
digits together with the relevant power of 10.  And there might be 
multiple possible choices for the DFP quantum exponent; the preferred 
exponent for exact results is 0, so the resulting exponent needs to be 
chosen to be as close to 0 as possible (which also produces correct 
results when the result is inexact).  (If the result is 0, note that 
quantum exponent of 0 is not the same as the zero from default 
initialization, which has the least exponent possible.)
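
A sketch of just the first step (splitting into base 10^19 "digits" by
repeated single-limb division, least significant digit first; the selection
of the leading digits, the 0 / 0.5 / in-between classification and the
final sum of two DFP values are omitted, and none of this is patch code):

#include <stdio.h>
#include <stdint.h>

/* x /= d (single-limb divisor), n limbs little-endian; returns remainder.  */
static uint64_t
div1 (uint64_t *x, unsigned n, uint64_t d)
{
  unsigned __int128 rem = 0;
  for (unsigned i = n; i-- > 0; )
    {
      rem = (rem << 64) | x[i];
      x[i] = (uint64_t) (rem / d);
      rem %= d;
    }
  return (uint64_t) rem;
}

int
main ()
{
  /* 2^128 - 1 as two limbs; prints 340282366920938463463374607431768211455.  */
  uint64_t x[2] = { ~0ULL, ~0ULL };
  const uint64_t p10_19 = 10000000000000000000ULL;
  uint64_t d0 = div1 (x, 2, p10_19);	/* least significant 19 digits */
  uint64_t d1 = div1 (x, 2, p10_19);	/* next 19 digits */
  uint64_t d2 = div1 (x, 2, p10_19);	/* most significant digits */
  printf ("%llu%019llu%019llu\n", (unsigned long long) d2,
	  (unsigned long long) d1, (unsigned long long) d0);
  return 0;
}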

-- 
Joseph S. Myers
jos...@codesourcery.com


[PATCH 0/5] GCC _BitInt support [PR102989]

2023-07-27 Thread Jakub Jelinek via Gcc-patches

The following patch series introduces support for C23 bit-precise integer
types.  In short, they are similar to other integral types in many ways;
they just aren't subject to integer promotions if smaller than int and they
can have much wider precisions than ordinary integer types.
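
For illustration (not from the patch, just the C23 semantics being
implemented):

void
demo (void)
{
  unsigned _BitInt(8) a = 200uwb, b = 100uwb;
  /* No integer promotion: the addition is done in unsigned _BitInt(8)
     and wraps to 44 instead of yielding 300 as an int.  */
  unsigned _BitInt(8) c = a + b;
  /* Precisions far beyond long long are allowed.  */
  _BitInt(512) big = ((_BitInt(512)) 1) << 400;
  (void) c;
  (void) big;
}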

It is enabled only on targets which have agreed on a processor-specific
ABI for how to lay those types out and pass them as function
arguments/return values, which currently is just x86-64, I believe.  It
would be nice if target maintainers helped to get agreement on psABI
changes so that GCC 14 could enable it on far more architectures than just
one.

C23 says that <limits.h> defines the BITINT_MAXWIDTH macro, which is the
largest supported precision of the _BitInt types; the smallest allowed
value of it is the precision of unsigned long long (but due to lack of
psABI agreement we'll violate that on architectures which don't have the
support done yet).
The following series uses, for the time being, just WIDE_INT_MAX_PRECISION
as that BITINT_MAXWIDTH, with the intent to increase it incrementally later
on.  WIDE_INT_MAX_PRECISION is 575 bits on x86_64, but will be even smaller
on lots of architectures.  This is the largest precision we can support
without changes to the wide_int/widest_int representation (to make those
non-POD and allow use of some allocated buffer rather than the included
fixed size one).  Once that is overcome, there is another internally
enforced limit: INTEGER_CST in its current layout allows at most 255 64-bit
limbs, which is 16320 bits as another cap.  And if that is overcome, then
we have the limitation of TYPE_PRECISION being 16-bit, so 65535 as the
maximum precision.  Perhaps we could later make TYPE_PRECISION dependent on
BITINT_TYPE vs. other types and use a 32-bit precision in that case.  The
latest Clang/LLVM I think supports on paper up to 8388608 bits, but is
hardly usable even with much shorter precisions.

Besides this hopefully temporary cap on supported precision and support
only on targets which buy into it, the support has the following limitations:

- _BitInt(N) bit-fields aren't supported yet (the patch rejects them); I'd like
  to enable those incrementally, but don't really see details on how such
  bit-fields should be laid out in memory or passed inside of function
  arguments; LLVM implements something, but it is a question if that is what
  the various ABIs want

- conversions between large/huge (see later) _BitInt and _Decimal{32,64,128}
  aren't supported and emit a sorry; I'm not familiar enough with DFP stuff
  to implement that

- _Complex _BitInt(N) isn't supported; again mainly because none of the psABIs
  mention how those should be passed/returned; in a limited way they are
  supported internally because the internal functions into which
  __builtin_{add,sub,mul}_overflow{,_p} is lowered return COMPLEX_TYPE as a
  hack to return 2 values without using references/pointers

- vectors of _BitInt(N) aren't supported, both because psABIs don't specify
  how that works and because I'm not really sure it would be useful given
  lack of hw support for anything but bit-precise integers with the same
  bit precision as standard integer types

Because the bit-precise types behave differently both in the C FE (e.g.
the lack of promotion) and, actually or potentially, in type layout and
function argument passing/returning, the patch introduces a new integral
type, BITINT_TYPE, so various spots which explicitly check for INTEGER_TYPE
rather than, say, using the INTEGRAL_TYPE_P macro need to be adjusted.
Also the assumption that all integral types have a scalar integer type mode
is no longer true; larger BITINT_TYPEs have BLKmode.

The patch introduces 4 different categories of _BitInt depending on the
target hook decisions and their precision.  The x86-64 psABI says that
_BitInt which fit into signed/unsigned char, short, int, long and long long
are laid out and passed as those types (with padding bits undefined if they
don't have mode precision).  Bit-precise integer types with such smallest
precisions are categorized as small; for a specific precision the target
hook gives a scalar integral mode where a single such mode contains all the
bits.  Such small _BitInt types are generally kept in the IL until
expansion into RTL, with minor tweaks during expansion to avoid relying on
the padding bit values.  All larger precision _BitInt types are supposed to
be handled as a structure containing an array of limbs or so, where a limb
has some integral mode (for libgcc purposes best if it is word-sized) and
the limbs have either little or big endian ordering in the array.  The
padding bits in the most significant limb, if any, are either undefined or
should always be sign/zero extended (but support for the latter isn't in
yet; we don't know if any psABI will require it).  As mentioned in some
psABI proposals, while currently there is just one limb mode, if the limb
ordering follows normal target endianity, there is always a possibility to
have two limb modes