`long double` can exactly represent every value that is exactly
representable in `uint64_t`, `int64_t` and `double` (a big-float
type can be used in other languages). Wouldn't it just complicate
matters if you needed to specify whether a number is an integer
or a real value? If the distinction is ever needed, the software
can simply check which type the number best fits. Additionally, I
think you would want support for arbitrary precision; again, the
software can check whether arbitrary precision is required, so
there is no need to complicate the syntax. And what should a
library do with parsed numbers that are too large or too precise?
In most cases, the program knows what size and precision it
requires.


Regards,
Mattias Andrée

On Sat, 15 Jun 2019 20:37:34 +0200
Wolf <w...@wolfsden.cz> wrote:

> On , sylvain.bertr...@gmail.com wrote:
> > json almost deserves a promotion to suckless format.  
> 
> Except for not putting any limits on sizes of integers. I think it would
> be better to have size the implementation must support to be json
> complient. And also having separate int and float types. Because let's compare
> what happens in ruby:
> 
>       JSON.parse(?9 * 100)
>       => 9999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999
>   
> 
> and in firefox (JavaScript):
> 
>       var x = ''; for (var i = 0; i < 100; ++i) { x += '9'; }; JSON.parse(x);
>       => 1e+100  
> 
> So, yeeeey interoperability I guess?
> 
> W.

