https://gcc.gnu.org/bugzilla/show_bug.cgi?id=106165
Vincent Lefèvre changed:
  What|Removed |Added
  CC  |        |vincent-gcc at vinc17 dot net
---
--- Comment #5 from xeioex ---
My question is more practical: `-fexcess-precision=standard` fixes the problem in GCC, but I am left with the same problem with other compilers. At the least, I am looking for a way to detect this behavior.
--- Comment #4 from Andrew Pinski ---
(In reply to xeioex from comment #3)
> Is there a portable way (across platforms and compilers) to ensure that
> double values are always 64 bits?
It is still 64-bit storage on i686; the x87 just computes with excess precision.
--- Comment #3 from xeioex ---
Is there a portable way (across platforms and compilers) to ensure that double
values are always 64 bits?
Andrew Pinski changed:
  What      |Removed     |Added
  Status    |UNCONFIRMED |RESOLVED
  Resolution|---         |
--- Comment #1 from xeioex ---
Two last confirmations:
1) gcc -O2 minified_to_string_radix.i -o 507 -lm && ./507
1e+23.toString(36) = ga894a06ac8
ERROR expected "ga894a06abs"
2) gcc -O1 minified_to_string_radix.i -o 507 -lm &&