On Tuesday, 5 August 2014 at 17:17:56 UTC, Andrei Alexandrescu wrote:
> I searched around a bit and it seems different libraries have different takes to this numeric matter. A simple reading of the spec suggests that floating point data is the only numeric type. However, many implementations choose to distinguish between floating point and integrals.
The original point of JSON was that it auto-converts to JavaScript data, and since JavaScript has only one numeric type, of course JSON does too. But I think it's important that a JSON package for a language map naturally onto the types available in that language. D provides both floating point and integer types, each with its own costs and benefits, so the JSON package should expose both as well. That ends up being a lot easier to deal with than remembering to round from JSON.number or whatever when assigning to an int (there's a small sketch at the end of this post).

In fact, JSON doesn't impose any precision restrictions on its numeric type at all, so one could argue that we should be using BigInt and BigFloat. But that would stink most of the time, so...

On an unrelated note: while the default encoding for strings is UTF-8, the RFC explicitly allows \uXXXX escapes, including UTF-16 surrogate pairs for characters outside the Basic Multilingual Plane, and these must be supported. Any strings you get from Internet Explorer will use these escapes regardless of content, presumably because Windows uses 16-bit wide chars for Unicode internally.
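
To make the numeric point concrete, here's a minimal sketch of how a parser could keep the two representations apart. The names (JsonNumber, parseNumber) are made up for illustration, not from any existing package:

import std.algorithm : canFind;
import std.conv : to;

// Made-up tagged representation: a JSON number is stored as a long
// when the token has no fraction or exponent, and as a double otherwise.
struct JsonNumber
{
    bool isIntegral;
    union
    {
        long integral;
        double floating;
    }
}

JsonNumber parseNumber(string tok)
{
    JsonNumber n;
    // A '.', 'e', or 'E' in the token means the lexer saw a fraction
    // or an exponent, so double is the natural fit there.
    if (tok.canFind('.') || tok.canFind('e') || tok.canFind('E'))
    {
        n.isIntegral = false;
        n.floating = to!double(tok);
    }
    else
    {
        n.isIntegral = true;
        n.integral = to!long(tok);
    }
    return n;
}

unittest
{
    assert(parseNumber("123").isIntegral);
    assert(!parseNumber("1.5").isIntegral);
    assert(!parseNumber("1e9").isIntegral);
}

This also sidesteps silent precision loss: a 64-bit long can hold integers exactly that a double starts rounding past 2^53.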
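
And for the surrogate-pair point, this is roughly the arithmetic a decoder has to do once it has pulled two \uXXXX escapes out of a string (again just a sketch, the function name is made up):

// Combine a high/low surrogate pair (as found in JSON \uXXXX
// escapes) into a single code point, per the UTF-16 rules.
dchar combineSurrogates(wchar hi, wchar lo)
{
    assert(hi >= 0xD800 && hi <= 0xDBFF, "expected a high surrogate");
    assert(lo >= 0xDC00 && lo <= 0xDFFF, "expected a low surrogate");
    return cast(dchar)(0x10000 + ((hi - 0xD800) << 10) + (lo - 0xDC00));
}

unittest
{
    // "\uD83D\uDE00" in JSON text denotes U+1F600.
    assert(combineSurrogates(0xD83D, 0xDE00) == '\U0001F600');
}

Once you have the dchar, appending it to a char[] (or using std.utf.encode) gets you back to UTF-8.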