Re: RFC: std.json successor

2014-08-22 Thread Christian Manning via Digitalmars-d
It would be nice to have integers treated separately from doubles. I know it makes the number parsing simpler to just treat everything as double, but still, it could be annoying when you expect an integer type.


I'd also like to see some benchmarks, particularly against some of the high-performance C++ parsers, e.g. rapidjson, gason, sajson. Or even against some of the parsers with decent performance and better APIs, e.g. QJsonDocument, jsoncpp and jsoncons (slow, but with an interface perhaps comparable to this proposal?).


Re: RFC: std.json successor

2014-08-22 Thread Christian Manning via Digitalmars-d

On Friday, 22 August 2014 at 17:45:03 UTC, Sönke Ludwig wrote:

On 22.08.2014 19:27, Marc Schütz schue...@gmx.net wrote:

On Friday, 22 August 2014 at 16:56:26 UTC, Sönke Ludwig wrote:

On 22.08.2014 18:31, Christian Manning wrote:
It would be nice to have integers treated separately from doubles. I know it makes the number parsing simpler to just treat everything as double, but still, it could be annoying when you expect an integer type.


That's how I've done it for vibe.data.json, too. For the new implementation, I've just used the number parsing routine from Andrei's std.jgrandson module. Does anybody have reservations about representing integers as long instead?


It should automatically fall back to double on overflow. Maybe 
even use

BigInt if applicable?


I guess BigInt + exponent would be the only lossless way to 
represent any JSON number. That could then be converted to any 
desired smaller type as required.
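
For illustration, that BigInt + exponent representation could be modelled roughly like this (a sketch only; the struct and field names are made up and not part of the proposal):

    import std.bigint : BigInt;

    // Lossless JSON number: all significant digits in a BigInt mantissa plus a
    // base-10 exponent, i.e. the value is mantissa * 10 ^^ exponent.
    // E.g. "12.34e5" would be stored as mantissa = 1234, exponent = 3.
    struct Decimal
    {
        BigInt mantissa;
        int    exponent;
    }

Converting down to long or double when the value happens to fit would then be a separate, explicitly lossy step.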


But checking for overflow during number parsing would definitely have an impact on parsing speed, as would using a BigInt, of course, so the question is how we want to set up the trade-off here (or whether there is another way that is overhead-free).


You could check for a decimal point, or for a 0 at the front (excluding a possible '-' sign); either would indicate a double, making the reasonable assumption that anything else will fit in a long.
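
For concreteness, a rough sketch of such a check (the helper name is made up; this version keys on a decimal point or an exponent):

    // Decide whether a JSON number token should be parsed as a double.
    // Anything without '.', 'e' or 'E' is assumed to fit in a long.
    bool looksLikeDouble(const(char)[] token)
    {
        foreach (c; token)
            if (c == '.' || c == 'e' || c == 'E')
                return true;
        return false;
    }

A token like 99999999999999999999 would still overflow a long, which is the overflow case discussed below.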


Re: RFC: std.json successor

2014-08-22 Thread Christian Manning via Digitalmars-d
Yes, no decimal point + no exponent would work without overhead to detect integers, but that wouldn't solve the proposed automatic long -> double overflow, which is what I meant. My current idea is to default to double and optionally support any of long, BigInt and Decimal (BigInt + exponent), where integer overflow only works for long -> BigInt.


Ah I see.

I have to say, if you are going to treat integers and floating-point numbers differently, then you should store them differently: long should be used to store integers, double for floating-point numbers. A 64-bit signed integer (long) is a totally reasonable limitation for integers, but even that would lose precision if stored as a double, as you are proposing (if I'm understanding right). I don't think BigInt needs to be brought into this at all, really.
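
To make the precision point concrete, here is a tiny standalone example (not from the thread) of a long that does not survive a round trip through double:

    void main()
    {
        import std.stdio : writeln;

        long big = (1L << 53) + 1;     // 9007199254740993
        double d = big;                // double has only a 53-bit mantissa
        writeln(big == cast(long) d);  // prints "false": value rounded to ...992
    }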


In the case of integers encountered by the parser that are too large or too small to fit in a long, give an error IMO. Such integers should be (and are, by other libs IIRC) serialised in a form like 1.234e-123 to force double parsing, perhaps losing precision at that stage rather than invisibly inside the library. The size of JSON numbers is implementation-defined, and the whole thing shouldn't be degraded in both performance and usability to cover JSON serialisers that go beyond common native number types.
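
As a hypothetical sketch of that serialisation fallback (the function name is made up), an integer too large for long could be written in exponent form so that readers simply parse it as a double:

    import std.bigint : BigInt;
    import std.conv : to;

    // "123456789012345678901" -> "1.23456789012345678901e20"
    // Assumes n has at least two digits, which holds for anything that
    // overflowed a long in the first place.
    string toExponentForm(BigInt n)
    {
        string digits = n.to!string;
        bool negative = digits[0] == '-';
        if (negative) digits = digits[1 .. $];
        return (negative ? "-" : "") ~ digits[0 .. 1] ~ "." ~ digits[1 .. $]
            ~ "e" ~ to!string(digits.length - 1);
    }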


Of course, you are free to do whatever you like :)