Re: [GENERAL] Efficient sorting the results of a join, without denormalization

2015-05-30 Thread Tom Lane
"Glen M. Witherington"  writes:
> And here's the query I want to do, efficiently:

> SELECT * FROM c
>   JOIN b ON b.id = c.b_id
>   JOIN a ON a.id = b.a_id
> WHERE a.id = 3
> ORDER BY b.created_at DESC
> LIMIT 10

At least for that dummy data, this seems sufficient:

regression=# create index on b (a_id, created_at);
CREATE INDEX
regression=# explain analyze SELECT * FROM c
  JOIN b ON b.id = c.b_id
  JOIN a ON a.id = b.a_id
WHERE a.id = 3
ORDER BY b.created_at DESC
LIMIT 10;
                                                                  QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=0.14..21.95 rows=10 width=64) (actual time=0.064..1.176 rows=10 loops=1)
   ->  Nested Loop  (cost=0.14..436079.81 rows=20 width=64) (actual time=0.063..1.173 rows=10 loops=1)
         Join Filter: (b.id = c.b_id)
         Rows Removed by Join Filter: 1218
         ->  Nested Loop  (cost=0.14..9.81 rows=20 width=40) (actual time=0.035..0.035 rows=1 loops=1)
               ->  Index Scan Backward using b_a_id_created_at_idx on b  (cost=0.14..8.49 rows=20 width=24) (actual time=0.019..0.019 rows=1 loops=1)
                     Index Cond: (a_id = 3)
               ->  Materialize  (cost=0.00..1.07 rows=1 width=16) (actual time=0.013..0.013 rows=1 loops=1)
                     ->  Seq Scan on a  (cost=0.00..1.06 rows=1 width=16) (actual time=0.009..0.009 rows=1 loops=1)
                           Filter: (id = 3)
                           Rows Removed by Filter: 2
         ->  Materialize  (cost=0.00..27230.00 rows=100 width=24) (actual time=0.008..0.811 rows=1228 loops=1)
               ->  Seq Scan on c  (cost=0.00..16370.00 rows=100 width=24) (actual time=0.007..0.310 rows=1228 loops=1)
 Planning time: 0.796 ms
 Execution time: 1.390 ms
(15 rows)

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] date type changing to timestamp without time zone in postgres 9.4

2015-05-30 Thread Rishi Gokhale
When I create a table with a column whose type is date, the type gets forced to 
timestamp without time zone after the table is created:


ops=# CREATE TABLE test (
ops(#   name   varchar(40) NOT NULL,
ops(#   start  date NOT NULL
ops(# );
CREATE TABLE

ops=# \d test;
            Table "public.test"
 Column |            Type             | Modifiers
--------+-----------------------------+-----------
 name   | character varying(40)       | not null
 start  | timestamp without time zone | not null

ops=#


The table creation is just a test; my original issue is that while restoring a 
backup (pg_dump/pg_restore) from another server, also 9.4, the date types on 
numerous columns get forced to change to timestamp without time zone.
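
One thing worth checking (an assumption on my part, not something confirmed
here): whether a schema earlier in search_path defines its own "date" type or
domain that shadows pg_catalog.date; Oracle-compatibility extensions such as
orafce, for example, ship a date domain based on timestamp. A quick diagnostic
sketch:

SHOW search_path;

-- List every type named "date" in the database; typtype 'd' marks a
-- domain, and domain_base shows what it is based on ("-" for the
-- built-in base type pg_catalog.date).
SELECT n.nspname              AS schema,
       t.typname              AS type_name,
       t.typtype              AS kind,
       t.typbasetype::regtype AS domain_base
  FROM pg_type t
  JOIN pg_namespace n ON n.oid = t.typnamespace
 WHERE t.typname = 'date';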


Any help would be appreciated.


Thanks,

Rishi


Re: [GENERAL] Efficient sorting the results of a join, without denormalization

2015-05-30 Thread Glen M. Witherington



On Sat, May 30, 2015, at 11:33 PM, David G. Johnston wrote:
> This is one problem with using made-up surrogate keys...
> 
> The PK of A is a component of both the PK of B and the PK of C, but you throw 
> that information away by using serial fields for PKs instead.  You should 
> have unique indexes on B and C that incorporate the ID from A

That is quite a strange schema though, isn't it? If you imagine it as
emails:

C = Emails
B = Folder
A = User

Now you're suggesting that even though an email belongs to a folder,
which belongs to a user ... each email should also contain a reference
to a user? I guess that's fine, but it seems less than ideal from a
redundancy perspective.

> 
> All that said you really should put indexes on the foreign keys...

Yeah, of course. I purposely left that out, as I was asking which
indexes would need to be created to support that query.


Thanks for your help!


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Efficient sorting the results of a join, without denormalization

2015-05-30 Thread David G. Johnston
On Saturday, May 30, 2015, Glen M. Witherington  wrote:

> Sorry about the horrendous subject; let me explain by example:
>
> Let's take this schema:
>
>
> ```
> CREATE TABLE a (
>   id bigserial PRIMARY KEY,
>   created_at timestamp with time zone  NOT NULL  DEFAULT NOW()
> );
>
> CREATE TABLE b(
>   id bigserial PRIMARY KEY,
>   a_id bigint NOT NULL REFERENCES a(id),
>   created_at  timestamp with time zone  NOT NULL  DEFAULT NOW()
> );
>
> CREATE TABLE c(
>   id bigserial PRIMARY KEY,
>   b_id bigint NOT NULL REFERENCES b(id),
>   created_at  timestamp with time zone  NOT NULL  DEFAULT NOW()
> );
> ```
>
> And let's fill it up with some dummy data that roughly matches the
> distribution of mine:
>
> ```
> INSERT INTO a SELECT FROM generate_series(1, 5);
> INSERT INTO b(a_id) SELECT (i % 5) + 1 FROM generate_series(1, 100) i;
> INSERT INTO c(b_id) SELECT (trunc(random() * 100)+1) FROM
> generate_series(1, 100);
> ```
>
>
> And here's the query I want to do, efficiently:
>
> ```
> SELECT * FROM c
>   JOIN b ON b.id = c.b_id
>   JOIN a ON a.id = b.a_id
> WHERE a.id = 3
> ORDER BY b.created_at DESC
> LIMIT 10
> ```
>
> There seems to be simply no index I can put on the data that allows me
> to run this query efficiently. After four hours of playing with this,
> the only solution I can see is denormalizing table `c` by adding a field
> "b's a_id" and then creating an index on (bs_a_id, created_at).
>
> But surely there's a better solution?
>
>
This is one problem with using made-up surrogate keys...

The PK of A is a component of both the PK of B and the PK of C, but you
throw that information away by using serial fields for PKs instead.  You
should have unique indexes on B and C that incorporate the ID from A, and
then indeed you will end up with a join sequence that can be executed
efficiently.
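
Roughly sketched, assuming the schema from your example (my illustration
of the idea, not tested code):

CREATE TABLE b (
  id bigserial,
  a_id bigint NOT NULL REFERENCES a(id),
  created_at timestamp with time zone NOT NULL DEFAULT NOW(),
  PRIMARY KEY (a_id, id)           -- B's key carries A's key
);

CREATE TABLE c (
  id bigserial,
  a_id bigint NOT NULL,
  b_id bigint NOT NULL,
  created_at timestamp with time zone NOT NULL DEFAULT NOW(),
  PRIMARY KEY (a_id, b_id, id),    -- C's key carries A's and B's keys
  FOREIGN KEY (a_id, b_id) REFERENCES b (a_id, id)
);

-- The composite PK on c doubles as an index on its (a_id, b_id) foreign
-- key, so the join into c need not be a sequential scan; b still wants
-- an index that can drive ORDER BY b.created_at DESC ... LIMIT:
CREATE INDEX ON b (a_id, created_at);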

All that said, you really should put indexes on the foreign keys...

I haven't run the code to see the actual plan.

David J.


[GENERAL] Curious case of huge simple btree indexes bloat.

2015-05-30 Thread Maxim Boguk
Hi,

On one of the databases under my support I found a very curious case of
almost endless index bloat (the index size stabilises at around 100x the
original size).
A graph of one index's size history is attached (the other indexes have
similar time/size graphs).

The table has 5 indexes and they all show the same bloating behaviour
(growth to almost 100x and stabilisation around that amount). The
original index sizes are 4-8 MB (after a manual REINDEX); over 5 days
they all monotonically grow to 300-900 MB. At the same time the table
size stays pretty constant at 30-50 MB (and the row count doesn't vary
widely, staying between 200k and 500k).

The table takes a large amount of inserts/updates/deletes, but
autovacuum is tuned to be pretty aggressive and I am sure that there are
no long transactions (longer than a few minutes). Also there is no
standby replica with hot_standby=on, no prepared transactions are used,
and no batch deletes/inserts/updates are used. The server has plenty of
RAM (the database fits into shared buffers), IO and CPU available, so
there is no visible resource starvation.

Background information:
The PostgreSQL version is 9.4.2, 64-bit, on Linux.
Table structure:

\d+ clientsession
                                                       Table "public.clientsession"
     Column      |           Type           |                                Modifiers                                 | Storage  | Stats target | Description
-----------------+--------------------------+--------------------------------------------------------------------------+----------+--------------+-------------
 globalsessionid | bigint                   | not null default nextval('clientsession_globalsessionid_seq'::regclass) | plain    |              |
 deviceuid       | text                     |                                                                          | extended |              |
 localsessionid  | bigint                   |                                                                          | plain    |              |
 createddate     | timestamp with time zone |                                                                          | plain    |              |
 lastmodified    | timestamp with time zone |                                                                          | plain    |              |
 keypairid       | bigint                   |                                                                          | plain    |              |
 sessiondataid   | bigint                   |                                                                          | plain    |              |
Indexes:
    "clientsession_pkey" PRIMARY KEY, btree (globalsessionid) CLUSTER
    "clientsession_ukey" UNIQUE CONSTRAINT, btree (deviceuid, localsessionid)
    "clientsession_keypairid_key" btree (keypairid)
    "clientsession_sessiondataid_key" btree (sessiondataid)
    "clientsession_uduid_localid_idx" btree (upper(deviceuid), localsessionid)
Foreign-key constraints:
    "clientsession_keypair_fkey" FOREIGN KEY (keypairid) REFERENCES keypair(id) ON DELETE CASCADE
    "clientsession_sessiondata_id" FOREIGN KEY (sessiondataid) REFERENCES sessiondata(id) ON DELETE CASCADE
Referenced by:
    TABLE "remotecommand" CONSTRAINT "remotecommand_clientsessionid_fkey" FOREIGN KEY (clientsessionid) REFERENCES clientsession(globalsessionid) ON DELETE CASCADE
Options: fillfactor=50, autovacuum_vacuum_scale_factor=0.01

Results of pgstatindex for one of the bloated indexes:
select * from pgstatindex('clientsession_pkey');
-[ RECORD 1 ]--+--
version| 2
tree_level | 2
index_size | 552640512
root_block_no  | 290
internal_pages | 207
leaf_pages | 67224
empty_pages| 0
deleted_pages  | 29
avg_leaf_density   | 1.08
leaf_fragmentation | 3.02
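
To watch all five indexes at once, here is a small monitoring sketch; it
assumes only the pgstattuple extension that provides pgstatindex, which
is evidently already installed:

-- avg_leaf_density near 50 would match fillfactor=50; values near 1,
-- as above, mean the leaf pages are almost empty.
SELECT c.relname AS index_name,
       pg_size_pretty(pg_relation_size(c.oid)) AS size,
       (pgstatindex(c.oid::regclass::text)).avg_leaf_density AS leaf_density
  FROM pg_class c
  JOIN pg_index i ON i.indexrelid = c.oid
 WHERE i.indrelid = 'clientsession'::regclass
 ORDER BY pg_relation_size(c.oid) DESC;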

List of current index sizes (they stabilized 1 day ago):
\di+ clientsession*
                                   List of relations
 Schema |              Name               | Type  |  Owner  |     Table     |  Size  | Description
--------+---------------------------------+-------+---------+---------------+--------+-------------
 public | clientsession_keypairid_key     | index | phoenix | clientsession | 545 MB |
 public | clientsession_pkey              | index | phoenix | clientsession | 527 MB |
 public | clientsession_sessiondataid_key | index | phoenix | clientsession | 900 MB |
 public | clientsession_uduid_localid_idx | index | phoenix | clientsession | 254 MB |
 public | clientsession_ukey              | index | phoenix | clientsession | 254 MB |



I have never seen such behaviour on other databases, and all my attempts
to get this index bloat under control have had no effect.
If anyone has any ideas (even crazy ones) - welcome.


-- 
Maxim Boguk
Senior Postgresql DBA
http://www.postgresql-consulting.ru/ 

Phone RU: +7 910 405 4718
Phone AU: +61 45 218 5678

LinkedIn: http://www.linkedin.com/pub/maksym-boguk/80/b99/b1b
Skype: maxim.boguk
Jabber: maxim.bo...@gmail.com
МойКруг: http:

[GENERAL] Efficient sorting the results of a join, without denormalization

2015-05-30 Thread Glen M. Witherington
Sorry about the horrendous subject; let me explain by example:

Let's take this schema:


```
CREATE TABLE a (
  id bigserial PRIMARY KEY,
  created_at timestamp with time zone  NOT NULL  DEFAULT NOW()
);

CREATE TABLE b(
  id bigserial PRIMARY KEY,
  a_id bigint NOT NULL REFERENCES a(id),
  created_at  timestamp with time zone  NOT NULL  DEFAULT NOW()
);

CREATE TABLE c(
  id bigserial PRIMARY KEY,
  b_id bigint NOT NULL REFERENCES b(id),
  created_at  timestamp with time zone  NOT NULL  DEFAULT NOW()
);
```

And let's fill it up with some dummy data that roughly matches the
distribution of mine:

```
INSERT INTO a SELECT FROM generate_series(1, 5);
INSERT INTO b(a_id) SELECT (i % 5) + 1 FROM generate_series(1, 100) i;
INSERT INTO c(b_id) SELECT (trunc(random() * 100)+1) FROM
generate_series(1, 100);
```


And here's the query I want to do, efficiently:

```
SELECT * FROM c
  JOIN b ON b.id = c.b_id
  JOIN a ON a.id = b.a_id
WHERE a.id = 3
ORDER BY b.created_at DESC
LIMIT 10
```

There seems to be simply no index I can put on the data that allows me
to run this query efficiently. After four hours of playing with this,
the only solution I can see is denormalizing table `c` by adding a field
"b's a_id" and then creating an index on (bs_a_id, created_at).

But surely there's a better solution?


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [HACKERS] Re: [GENERAL] 9.4.1 -> 9.4.2 problem: could not access status of transaction 1

2015-05-30 Thread Andres Freund
On 2015-05-30 00:52:37 -0300, Alvaro Herrera wrote:
> Andres Freund wrote:
>
> > I considered for a second whether the solution for that could be to not
> > truncate while inconsistent - but I think that doesn't solve anything as
> > then we can end up with directories where every single offsets/member
> > file exists.
>
> Hang on a minute.  We don't need to scan any files to determine the
> truncate point for offsets; we have the valid range for them in
> pg_control, as nextMulti + oldestMulti.  And using those end points, we
> can look for the offsets corresponding to each, and determine the member
> files corresponding to the whole set; it doesn't matter what other files
> exist, we just remove them all.  In other words, maybe we can get away
> with considering truncation separately for offset and members on
> recovery: do it like today for offsets (i.e. at each restartpoint), but
> do it only in TrimMultiXact for members.

Is oldestMulti, nextMulti - 1 really suitable for this? Are both
actually guaranteed to exist in the offsets slru and be valid?  Hm. I
guess you intend to simply truncate everything else, but just in
offsets?

> One argument against this idea is that we may not want to keep a full
> set of member files on standbys (due to disk space usage), but that's
> what will happen unless we truncate during replay.

I think that argument is pretty much the death-knell.

> > I think at least for 9.5+ we should a) invent proper truncation records
> > for pg_multixact b) start storing oldestValidMultiOffset in pg_control.
> > The current hack of scanning the directories to get knowledge we should
> > have is a pretty bad hack, and we should not continue using it forever.
> > I think we might end up needing to do a) even in the backbranches.
>
> Definitely agree with WAL-logging truncations; also +1 on backpatching
> that to 9.3.  We already have experience with adding extra WAL records
> on minor releases, and it didn't seem to have bitten too hard.

I'm inclined to agree. My only problem is that I'm not sure whether we
can find a way of doing all this without adding a pg_control field. Let
me try to sketch this out:

1) We continue determining the oldest offsets segment to truncate via
   SlruScanDirectory(SlruScanDirCbFindEarliest) on the master.
   Alternatively, if we determine it to be safe, we could use
   oldestMulti to find that.
2) SlruScanDirCbRemoveMembers is changed to return the range of members
   to remove, instead of doing it itself.
3) We WAL-log [oldest offsets segment guaranteed to not be alive,
   nextMulti) for offsets, and [oldest members segment guaranteed to not
   be alive, nextMultiOffset) for members, and redo truncations for the
   entire range during recovery.

I'm pretty tired right now, but this sounds doable.

Greetings,

Andres Freund


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] replacing jsonb field value

2015-05-30 Thread Glyn Astill




- Original Message -
> From: Andreas Kretschmer 
> To: pgsql-general@postgresql.org
> Cc: 
> Sent: Saturday, 30 May 2015, 13:10
> Subject: Re: [GENERAL] replacing jsonb field value
> 
> Michael Paquier  wrote:
> 
>> 
>>  Append the new value to the existing field; jsonb has the property
>>  of enforcing key uniqueness, and it keeps the last value scanned for
>>  a given key.
> 
> can you show a simple example of how to append a jsonb to a jsonb field?
> Maybe I'm blind, but I can't find how it works.
> 
> Thx.
> 

> 
> Andreas

Prior to 9.5 you can't; I think you have to use something like jsonb_each to 
unwrap it and then wrap it back up again.

The jsonbx extension, which I believe is what ended up in 9.5, has a simple 
concatenate function (here: https://github.com/erthalion/jsonbx); I also had a 
go at it (here: https://github.com/glynastill/pg_jsonb_opx).
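
For example, a minimal sketch of that unwrap/rewrap approach on 9.4,
using the test table from the original post (key "b", new value 3):

-- jsonb_each unwraps the object into (key, value) pairs; swap in the
-- new value for "b" and aggregate the pairs back with json_object_agg.
UPDATE test
   SET data = (SELECT json_object_agg(key,
                        CASE WHEN key = 'b' THEN '3'::jsonb ELSE value END)::jsonb
                 FROM jsonb_each(data))
 WHERE data ->> 'b' = '2';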


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to retrieve Comment text using SQL, not psql?

2015-05-30 Thread Melvin Davidson
> select obj_description('table1'::regclass, 'pg_class');

That will only show the table comment.
My query shows the table comment AND any column comments!


On Sat, May 30, 2015 at 9:37 AM, Adrian Klaver wrote:

> On 05/30/2015 04:48 AM, Bob Futrelle wrote:
>
>> Using pgAdmin3 I've tried this and variations on it.   All are rejected.
>>
>> select COMMENT ON TABLE articlestats
>>
>>
>> No answer here,
>>
>> http://www.postgresql.org/docs/9.3/static/sql-comment.html
>>
>>
>> pgAdmin3 had no problem with entering a comment:
>>
>> COMMENT ON TABLE articlestats IS 'Comprehensive data for every
>> article.'
>>
>
>
> http://www.postgresql.org/docs/9.4/interactive/functions-info.html
> Table 9.60 Comment Information Functions
>
>
> So:
>
> test=# comment on table table1 is 'Test comment';
> COMMENT
>
> test=# select obj_description('table1'::regclass, 'pg_class');
>  obj_description
> -----------------
>  Test comment
> (1 row)
>
>
>>
>>   - Bob Futrelle
>>
>>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] How to retrieve Comment text using SQL, not psql?

2015-05-30 Thread Adrian Klaver

On 05/30/2015 04:48 AM, Bob Futrelle wrote:

Using pgAdmin3 I've tried this and variations on it.   All are rejected.

select COMMENT ON TABLE articlestats


No answer here,

http://www.postgresql.org/docs/9.3/static/sql-comment.html


pgAdmin3 had no problem with entering a comment:

COMMENT ON TABLE articlestats IS 'Comprehensive data for every article.'



http://www.postgresql.org/docs/9.4/interactive/functions-info.html
Table 9.60 Comment Information Functions


So:

test=# comment on table table1 is 'Test comment';
COMMENT

test=# select obj_description('table1'::regclass, 'pg_class');
 obj_description
-----------------
 Test comment
(1 row)




  - Bob Futrelle




--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to retrieve Comment text using SQL, not psql?

2015-05-30 Thread Melvin Davidson
This will give the comment on your table and any column:

SELECT DISTINCT ON (c.relname)
   n.nspname as schema,
   c.relname,
   a.rolname as owner,
   0 as col_seq,
   '' as column,
   d.description as comment
  FROM pg_class c
LEFT  JOIN pg_attribute col ON (col.attrelid = c.oid)
LEFT  JOIN pg_description d ON (d.objoid = col.attrelid AND d.objsubid = 0)
  JOIN pg_namespace n ON (n.oid = c.relnamespace)
  JOIN pg_authid a ON ( a.OID = c.relowner )
  WHERE n.nspname NOT LIKE 'information%'
AND relname NOT LIKE 'pg_%'
AND relname NOT LIKE 'information%'
AND relname NOT LIKE 'sql_%'
AND relname = 'articlestats'
AND relkind = 'r'
AND d.description IS NOT NULL
UNION
SELECT n.nspname as schema,
   c.relname,
   '' as owner,
   col.attnum as col_seq,
   col.attname as column,
   d.description
  FROM pg_class c
  JOIN pg_attribute col ON (col.attrelid = c.oid)
  LEFT JOIN pg_description d ON (d.objoid = col.attrelid AND d.objsubid =
col.attnum)
  JOIN pg_namespace n ON (n.oid = c.relnamespace)
  JOIN pg_authid a ON ( a.OID = c.relowner )
WHERE n.nspname NOT LIKE 'information%'
  AND relname NOT LIKE 'pg_%'
  AND relname NOT LIKE 'information%'
  AND relname NOT LIKE 'sql_%'
  AND relname = 'articlestats'
  AND relkind = 'r'
  AND d.description IS NOT NULL
  AND col.attnum >= 0
ORDER BY 1, 2, 4;


Learn the catalogs and you will learn PostgreSQL!
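
For comparison, a shorter sketch that leans on the comment functions from
Table 9.60 (obj_description/col_description) instead of joining
pg_description by hand; articlestats is the table from the original post:

SELECT 0 AS col_seq, '' AS column_name,
       obj_description('articlestats'::regclass, 'pg_class') AS comment
UNION ALL
SELECT a.attnum, a.attname::text,
       col_description('articlestats'::regclass, a.attnum)
  FROM pg_attribute a
 WHERE a.attrelid = 'articlestats'::regclass
   AND a.attnum > 0
   AND NOT a.attisdropped
ORDER BY col_seq;

Unlike the catalog query above, this also returns rows whose comment is
NULL; add a filter on the comment column if you only want commented ones.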

On Sat, May 30, 2015 at 7:58 AM, Pavel Stehule 
wrote:

> Hi
>
> you can call function obj_description
> http://stackoverflow.com/questions/11493978/how-to-retrieve-the-comment-of-a-postgresql-database
>
> http://www.postgresql.org/docs/9.1/static/functions-info.html
>
> For tables
>
> SELECT pg_catalog.obj_description('tablename'::regclass, 'pg_class') as
> "Description";
>
> Regards
>
> Pavel Stehule
>
> 2015-05-30 13:48 GMT+02:00 Bob Futrelle :
>
>> Using pgAdmin3 I've tried this and variations on it.   All are rejected.
>>
>> select COMMENT ON TABLE articlestats
>>
>>
>> No answer here,
>>
>> http://www.postgresql.org/docs/9.3/static/sql-comment.html
>>
>>
>> pgAdmin3 had no problem with entering a comment:
>>
>> COMMENT ON TABLE articlestats IS 'Comprehensive data for every article.'
>>
>>
>>  - Bob Futrelle
>>
>>
>


-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] replacing jsonb field value

2015-05-30 Thread Andreas Kretschmer
Michael Paquier  wrote:

> 
> Append the new value to the existing field; jsonb has the property
> of enforcing key uniqueness, and it keeps the last value scanned for
> a given key.

can you show a simple example of how to append a jsonb to a jsonb field?
Maybe I'm blind, but I can't find how it works.

Thx.


Andreas
-- 
Really, I'm not out to destroy Microsoft. That will just be a completely
unintentional side effect.  (Linus Torvalds)
"If I was god, I would recompile penguin with --enable-fly."   (unknown)
Kaufbach, Saxony, Germany, Europe.  N 51.05082°, E 13.56889°


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to retrieve Comment text using SQL, not psql?

2015-05-30 Thread Pavel Stehule
Hi

you can call function obj_description
http://stackoverflow.com/questions/11493978/how-to-retrieve-the-comment-of-a-postgresql-database

http://www.postgresql.org/docs/9.1/static/functions-info.html

For tables

SELECT pg_catalog.obj_description('tablename'::regclass, 'pg_class') as
"Description";

Regards

Pavel Stehule

2015-05-30 13:48 GMT+02:00 Bob Futrelle :

> Using pgAdmin3 I've tried this and variations on it.   All are rejected.
>
> select COMMENT ON TABLE articlestats
>
>
> No answer here,
>
> http://www.postgresql.org/docs/9.3/static/sql-comment.html
>
>
> pgAdmin3 had no problem with entering a comment:
>
> COMMENT ON TABLE articlestats IS 'Comprehensive data for every article.'
>
>
>  - Bob Futrelle
>
>


[GENERAL] How to retrieve Comment text using SQL, not psql?

2015-05-30 Thread Bob Futrelle
Using pgAdmin3 I've tried this and variations on it.   All are rejected.

select COMMENT ON TABLE articlestats


No answer here,

http://www.postgresql.org/docs/9.3/static/sql-comment.html


pgAdmin3 had no problem with entering a comment:

COMMENT ON TABLE articlestats IS 'Comprehensive data for every article.'


 - Bob Futrelle


Re: [GENERAL] replacing jsonb field value

2015-05-30 Thread Michael Paquier
On Sat, May 30, 2015 at 11:52 AM, john.tiger  wrote:
> using 9.4.2
>
> suppose we have
> create table test (id serial primary key, data jsonb);
> insert into test (data) values ('{"a":1, "b":2}');
>
> want to replace "b" with 3
>
> okay, we are retrieving the entire record
> res = select * from test where data ->> 'b' = '2'
>
> newrec = res
> newrec["b"] = 3
>
> delete from test where data ->> 'b' = '2'
> insert into test (data) values (newrec)
>
> is this the best way until upsert arrives ?

Append the new value to the existing field; jsonb has the property of
enforcing key uniqueness, and it keeps the last value scanned for a
given key.
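
A minimal illustration of that property:

-- jsonb keeps only the last value scanned for a duplicated key:
SELECT '{"a": 1, "b": 2, "b": 3}'::jsonb;
-- Result: {"a": 1, "b": 3}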
-- 
Michael


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general