On 03/08/2013 09:39 PM, Robert Haas wrote:
> On Thu, Mar 7, 2013 at 2:48 PM, David E. Wheeler <da...@justatheory.com> wrote:
>> In the spirit of being liberal about what we accept but strict about what we
>> store, it seems to me that JSON object key uniqueness should be enforced either
>> by throwing an error on duplicate keys, or by flattening so that the latest key
>> wins (as happens in JavaScript). I realize that tracking keys will slow parsing
>> down, and potentially make it more memory-intensive, but such is the price for
>> correctness.
> I'm with Andrew.  That's a rathole I emphatically don't want to go
> down.  I wrote this code originally, and I had the thought clearly in
> mind that I wanted to accept JSON that was syntactically well-formed,
> not JSON that met certain semantic constraints.

If it does not meet these "semantic" constraints, then it is not
really JSON; it is merely JSON-like.

This sounds very much like MySQL's decision to support the timestamp
"0000-00-00 00:00": syntactically correct, but semantically wrong.

> We could add
> functions like json_is_non_stupid(json) so that people can easily add
> a CHECK constraint that enforces this if they so desire.  But
> enforcing it categorically seems like a bad plan, especially since at
> this point it would require a compatibility break with previous
> releases.
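
(For the record, a top-level check along those lines is easy to sketch
today; the function name is the hypothetical one from this thread, and
this assumes 9.3's json_object_keys(), which only sees the outermost
object and errors on non-object input:

    -- returns false when the outermost object repeats a key
    CREATE FUNCTION json_is_non_stupid(j json) RETURNS boolean
    LANGUAGE sql IMMUTABLE AS $$
        SELECT count(k) = count(DISTINCT k)
        FROM json_object_keys(j) AS k;
    $$;

    -- usable as a column constraint
    CREATE TABLE docs (doc json CHECK (json_is_non_stupid(doc)));

Nested objects would still need a recursive walk, of course.)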
If we ever support "real" spec-compliant JSON (maybe based
on recursive hstore?), then there will be a compatibility break
anyway, so why not do it now?

Or do you seriously believe that somebody is using "PostgreSQL JSON"
to store this kind of "json document"?

Cheers
Hannu Krosing


