It's not hard to imagine they needed something faster and slimmer than
XML. Their stated advantages over XML are compelling:
"""
Protocol buffers have many advantages over XML for serializing
structured data. Protocol buffers:
* are simpler
* are 3 to 10 times smaller
* are 20 to 100 times faster
* are less ambiguous
* generate data access classes that are easier to use programmatically
"""
Regarding JSON and ASN.1, I would definitely lean towards them for
their self-describing value, but obviously Protocol Buffers are going
to be faster and slimmer.
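To make the "slimmer" point concrete, here's a rough illustration (not Protocol Buffers itself, just Python's json and struct modules standing in for a self-describing format versus a bare binary layout; the record and its fields are made up for the example):

```python
import json
import struct

# A small record: an id, a score, and an active flag.
record = {"id": 12345, "score": 98.6, "active": True}

# Self-describing JSON repeats the field names in every message.
json_bytes = json.dumps(record).encode("utf-8")

# A fixed binary layout (a crude stand-in for a wire format like
# protobuf's) carries only the values: 4-byte int, 8-byte double,
# 1-byte bool -- 13 bytes total, versus 44 for the JSON.
binary_bytes = struct.pack("<id?", record["id"], record["score"],
                           record["active"])

print(len(json_bytes), len(binary_bytes))  # 44 13
```

That's already a 3x difference on a tiny record, which lines up with the "3 to 10 times smaller" claim above; the labels are exactly what you pay for self-description.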
Regarding storage, the Google docs seem oriented towards RPC, although
they do mention the potential for storage. Hopefully, one would only
store temporary data in this format for safekeeping in case a process
fails or has to be restarted. Ideally, non-temporary storage is done
with SQL or whatever else might be appropriate.
Furthermore, their text version of the data does include the labels
for the values, so if you stored data in that format it would be
similar to JSON. They mention storing data in BigTable, but it wasn't
clear whether people were storing the inscrutable binary version or
the readable text version.
I found http://code.google.com/apis/protocolbuffers/docs/overview.html
rather informative. These are engineers solving the problems of their
company, not pariahs on an NIH binge. They have "48,162 different
message types defined in the Google code tree". It's not hard to
imagine that a custom solution was worthwhile for them to pursue.
Is it really NIH if you get a set of pros and cons that are (a)
different, (b) what you wanted and (c) heavily reused?
I don't think so,
-Chuck
--
http://cobra-language.com/
--
[email protected]
http://www.kernel-panic.org/cgi-bin/mailman/listinfo/kplug-lpsg