Since my alternative is using json, which is heavier (the keys need to be
stored in every row) than composite types.
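To make that concrete, a quick sketch with made-up names: the json variant
stores the key names inside every value, while the composite variant keeps
the field names in the type definition only:

    -- json: the keys "x" and "y" are repeated in every row's value
    CREATE TABLE t_json (data json);
    INSERT INTO t_json VALUES ('[{"x": 1, "y": 2}, {"x": 3, "y": 4}]');

    -- composite: field names live in the type; rows store only the values
    CREATE TYPE pair AS (x int, y int);
    CREATE TABLE t_comp (data pair[]);
    INSERT INTO t_comp VALUES (ARRAY[ROW(1, 2)::pair, ROW(3, 4)::pair]);
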
Updating a field of a specific composite_type inside an array of them is
done with: UPDATE table SET composite[2].x = 24;
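A minimal runnable sketch of that (type, table, and column names are made
up, and it assumes the in-place update syntax above works as described):

    CREATE TYPE pt AS (x int, y int);
    CREATE TABLE t (composite pt[]);
    INSERT INTO t VALUES (ARRAY[ROW(1, 2)::pt, ROW(3, 4)::pt]);
    -- set field x of the second array element, leaving the rest untouched
    UPDATE t SET composite[2].x = 24;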

So the last remaining question: is it possible to insert an array of
composite_types without specifying all of the columns for each
composite_type? That way, if I later add other columns to the
composite_type, the insert query doesn't break.
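To make the question concrete, a hypothetical example (ALTER TYPE ... ADD
ATTRIBUTE is the real syntax for growing a composite type; everything else
is made up):

    CREATE TYPE item AS (x int, y int);
    CREATE TABLE t2 (items item[]);
    -- today's insert spells out every field of every element
    INSERT INTO t2 VALUES (ARRAY[ROW(1, 2)::item]);
    -- later the type gains a third field
    ALTER TYPE item ADD ATTRIBUTE z int;
    -- question: does the two-value ROW(...) insert above still work,
    -- or does it now have to be rewritten?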

Thanks


On Mon, Apr 21, 2014 at 1:46 PM, Dorian Hoxha <dorian.ho...@gmail.com> wrote:

> Maybe the char array link is wrong? I don't think an array of arrays is
> good for my case. I'll probably go for json or a separate table, since it
> looks like it's not possible to use composite types.
>
>
> On Mon, Apr 21, 2014 at 4:02 AM, Rob Sargent <robjsarg...@gmail.com> wrote:
>
>> Sorry, I should not have top-posted (Dang iPhone). Continued below:
>>
>> On 04/20/2014 05:54 PM, Dorian Hoxha wrote:
>>
>> Because I always query the whole row, and with the other way (many
>> tables) I would always have to join and maintain other indexes.
>>
>>
>> On Sun, Apr 20, 2014 at 8:56 PM, Rob Sargent <robjsarg...@gmail.com> wrote:
>>
>>> Why do you think you need an array of theType vs. a dependent table of
>>> theType? This tack is of course immune to most future type changes.
>>>
>>> Sent from my iPhone
>>>
>> Interesting. Of course any decent mapper will return "the whole row".
>> And would it be less disk intensive as an array of "struct" (where
>> struct is implemented as an array)? From other threads [1] [2] I've come
>> to understand that the datatype overhead per native type is applied per
>> type instance per array element.
>>
>> [1] 30K floats:
>> http://postgresql.1045698.n5.nabble.com/Is-it-reasonable-to-store-double-arrays-of-30K-elements-td5790562.html
>> [2] char array:
>> http://postgresql.1045698.n5.nabble.com/COPY-v-java-performance-comparison-tc5798389.html
>>
>
>
