On Fri, Mar 8, 2013 at 4:42 PM, Andrew Dunstan <and...@dunslane.net> wrote:
>> So my order of preference for the options would be:
>>
>> 1. Have the JSON type collapse objects so the last instance of a key wins
>> and is actually stored
>>
>> 2. Throw an error when a JSON type has duplicate keys
>>
>> 3. Have the accessors find the last instance of a key and return that
>> value
>>
>> 4. Let things remain as they are now
>>
>> On second thought, I don't like 4 at all. It means that the JSON type
>> thinks a value is valid while the accessor does not. They contradict one
>> another.
>
> You can forget 1. We are not going to have the parser collapse anything.
> Either the JSON it gets is valid or it's not. But the parser isn't going to
> try to MAKE it valid.

Why not?  Because it's the wrong thing to do, or because it would be slower?

What I think is tricky here is that there's more than one way to
conceptualize what the JSON data type really is.  Is it a key-value
store of sorts, or just a way to store text values that meet certain
minimalist syntactic criteria?  I had imagined it as the latter, in
which case normalization isn't sensible.  But if you think of it the
first way, then normalization is not only sensible, but almost
obligatory.  For example, we don't feel bad about this:

rhaas=# select '1e1'::numeric;
 numeric
---------
      10
(1 row)
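
By the same token, if you conceptualize json as a key-value store,
option 1 would presumably normalize duplicate keys on input, so that
something like this would happen (hypothetical output, sketching
last-key-wins semantics; this is not what the type does today):

rhaas=# select '{"k": 1, "k": 2}'::json;
   json
----------
 {"k": 2}
(1 row)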

I think Andrew and I had envisioned this as basically a text data type
that enforces some syntax checking on its input, hence the current
design.  But I'm not sure that's the ONLY sensible design.
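
Under the current design, by contrast, the type just validates the
input and stores the text verbatim, duplicate keys and all, so today
you get something like:

rhaas=# select '{"k": 1, "k": 2}'::json;
       json
------------------
 {"k": 1, "k": 2}
(1 row)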

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

