Is JSONSerialization somehow related to the upcoming std.serialization?

I feel that there is a big need for standardizing serialization in D. There are too many alternatives: dproto, msgpack, JSON, XML, etc. Shouldn't these all be made backends to a common frontend named std.serialization?
/Per
On Sunday, 17 November 2013 at 21:37:35 UTC, Orvid King wrote:
On 11/17/13, "Nordlöw" <[email protected]> wrote:
On the road to developing a new kind of search engine that caches types, statistics, etc. about files and directories, I'm currently trying to implement persistent caching of my internal directory tree using `msgpack-d`:

Why don't `msgpack-d` and, from what I can see, also `std.serialization` (Orange) support implementing *both* packing and unpacking through one common template (member) function overload, the way **Boost.Serialization** does? For example, containers can be handled with this concise and elegant C++11 syntax:
friend class boost::serialization::access;
// One member template serves both saving and loading.
template<class Ar> void serialize(Ar& ar, const unsigned int version) {
    for (auto& e : *this) { ar & e; }
}
This halves the code size and removes the risk of `pack` and `unpack` going out of sync.
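
For comparison, here is roughly what the two separate hooks look like in D today. The toMsgpack/fromMsgpack names and the packArray/beginArray/unpack calls are written from memory, so treat this as a sketch of the pattern rather than exact msgpack-d API:

import msgpack;

struct DirNode
{
    string name;
    ulong size;

    // The two directions are spelled out separately and have to be
    // kept in sync by hand -- exactly the duplication Boost avoids.
    void toMsgpack(Packer)(ref Packer p) const
    {
        p.packArray(name, size);   // write the fields as a 2-element array
    }

    void fromMsgpack(Unpacker)(ref Unpacker u)
    {
        u.beginArray();            // enter the array...
        u.unpack(name);            // ...and read the fields back
        u.unpack(size);            // in the same order they were written
    }
}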
I would suspect that the biggest reason is the limitation such an approach imposes on the underlying serialization implementation, as it would require that the underlying format support a minimum set of types.
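
To make that concrete, a shared serialize-style hook only works if every backend can round-trip some agreed-on set of primitives in both directions. A purely hypothetical D contract for that (not any existing library's API) might look like:

// Hypothetical illustration only.
// Every format plugged in behind this interface has to be able to both
// write and later read back at least these primitive types.
interface Archive
{
    void handle(ref bool value);
    void handle(ref long value);
    void handle(ref double value);
    void handle(ref string value);
}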
I have something similar(ish) in my serialization framework (https://github.com/Orvid/JSONSerialization) that allows you to implement a custom format for each type, but I implement it as a pair of methods, toString and parse, so the underlying format can get away with supporting only string serialization if it really wants to. Currently my framework only supports JSON, but it's designed such that it would be insanely easy to add support for another format.

It's also fast, very fast, mostly because I have managed to implement the JSON serialization methods with no allocation at all being required. I'm able to serialize 100k objects in about 90 ms on an i5 running at 1.6 GHz; deserialization is currently a bit slower, 420 ms for those same objects, but that's almost exclusively allocation time.
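
For what it's worth, the kind of per-type hook pair I mean would look something like this in D; this is a sketch of the idea, not the exact API from the repository:

import std.conv : to;

struct Temperature
{
    double celsius;

    // How the value is written out, regardless of the outer format:
    // a backend that can only store strings is still enough.
    string toString() const
    {
        return celsius.to!string;
    }

    // The matching step used when deserializing.
    static Temperature parse(string s)
    {
        return Temperature(s.to!double);
    }
}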