Re: [GENERAL] json-patch support?

2016-10-24 Thread Deven Phillips
Finally got around to doing this... The link below points to a complete
implementation of JSON Patch in pure PostgreSQL. It is compatible with
PostgreSQL >= 9.5 (not tested on earlier versions).

https://gist.github.com/InfoSec812/b830a9db4c9048552f8c51d7987cc4d0

Cheers!

Deven
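For readers skimming the archive, the shape of such a userland implementation can be sketched as follows. This is a hypothetical minimal sketch, not the linked gist: it handles only the add/replace/remove operations, ignores JSON Pointer escaping (~0/~1), and relies on the jsonb primitives available in 9.5+.

```sql
-- Hypothetical sketch of a userland JSON Patch applier (not the gist above).
CREATE OR REPLACE FUNCTION jsonb_patch_sketch(doc jsonb, patch jsonb)
RETURNS jsonb AS $$
DECLARE
    op   jsonb;
    path text[];
BEGIN
    FOR op IN SELECT * FROM jsonb_array_elements(patch) LOOP
        -- JSON Pointer "/a/b" -> text[] path {a,b} (escaping not handled)
        path := regexp_split_to_array(ltrim(op->>'path', '/'), '/');
        CASE op->>'op'
            WHEN 'add', 'replace' THEN
                doc := jsonb_set(doc, path, op->'value', true);
            WHEN 'remove' THEN
                doc := doc #- path;
            ELSE
                RAISE EXCEPTION 'unsupported op: %', op->>'op';
        END CASE;
    END LOOP;
    RETURN doc;
END;
$$ LANGUAGE plpgsql;

-- SELECT jsonb_patch_sketch('{"a":1}', '[{"op":"add","path":"/b","value":2}]');
```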

On Fri, Mar 27, 2015 at 4:16 PM, Deven Phillips <deven.phill...@gmail.com>
wrote:

> OK, then I will look into perhaps implementing it as a pl-python or
> pl-java function. Thanks for the advice!!
>
> Deven
>
> On Fri, Mar 27, 2015 at 2:40 PM, Merlin Moncure <mmonc...@gmail.com>
> wrote:
>
>> On Fri, Mar 27, 2015 at 1:36 PM, Arthur Silva <arthur...@gmail.com>
>> wrote:
>> > On Fri, Mar 27, 2015 at 1:56 PM, Deven Phillips <
>> deven.phill...@gmail.com>
>> > wrote:
>> >>
>> >> Are there any plans or ideas about implementing JSON Patch
>> >> (http://jsonpatch.com/) support for PostgreSQL? We deal with some
>> relatively
>> >> large JSON documents for our in-house application and it is often
>> better to
>> >> just send a json-patch update rather than the full document. It would
>> be
>> >> very nice if we could just select for the changes via a trigger and use
>> >> NOTIFY to tell our application about a patch. If nobody has discussed
>> it
>> >> previously, perhaps I will look into implementing it myself.
>> >>
>> >> Thanks in advance,
>> >>
>> >> Deven
>> >
>> >
>> > This could be implemented as an extension.
>> > There're already a few extensions that provide this functionality with
>> plain
>> > functions, so it's just a matter of parsing the json and executing those
>> > functions.
>>
>>
>> Right.  If it was me, I'd shoot for a userland (that is, in sql or
>> pl/pgsql) implementation that wraps the existing json APIs to get the
>> desired result.   From there, could determine if a more optimized
>> version in C was warranted.
>>
>> merlin
>>
>
>


Re: [GENERAL] DB Corruption after running out of disk space?

2016-10-14 Thread Deven Phillips
NVM, I guess that I just didn't wait long enough for the re-index operation
to complete... We're good now...

On Fri, Oct 14, 2016 at 6:45 AM, Deven Phillips <deven.phill...@gmail.com>
wrote:

> Here's hoping someone can help me... Overnight, our primary database
> server (part of a master/slave replication pair) ran out of disk space. I
> have cleaned up space now, but while performing some VACUUM ANALYZE
> commands I find that there is some corruption.
>
> I'm getting messages like: ERROR:  missing chunk number 0 for toast value
> 304627805 in pg_toast_16421
>
> I searched around and previous posts to the mailing list suggested a
> re-index of the pg_toast.pg_toast_16421 table, which I did to no effect.
> Still getting the error.
>
> Any suggestions or ideas would be appreciated.
>
> Thanks in advance!
>
> Deven Phillips
>


[GENERAL] DB Corruption after running out of disk space?

2016-10-14 Thread Deven Phillips
Here's hoping someone can help me... Overnight, our primary database server
(part of a master/slave replication pair) ran out of disk space. I have
cleaned up space now, but while performing some VACUUM ANALYZE commands I
find that there is some corruption.

I'm getting messages like: ERROR:  missing chunk number 0 for toast value
304627805 in pg_toast_16421

I searched around and previous posts to the mailing list suggested a
re-index of the pg_toast.pg_toast_16421 table, which I did to no effect.
Still getting the error.

Any suggestions or ideas would be appreciated.

Thanks in advance!

Deven Phillips


Re: [GENERAL] jsonb_set for nested new item?

2016-09-23 Thread Deven Phillips
Thanks for the confirmation. Unfortunately, I will need to handle more
complex situations. I will look into creating a recursive subroutine to
handle things.

On Sep 23, 2016 5:12 PM, "Vitaly Burovoy" <vitaly.buro...@gmail.com> wrote:

On 9/23/16, Deven Phillips <deven.phill...@gmail.com> wrote:
> On Fri, Sep 23, 2016 at 10:14 AM, Deven Phillips <deven.phill...@gmail.com
>
> wrote:
>
>> Is there a way to set a nested element for which the parent paths do not
>> yet exist?
>>
>> For example, if I have a JSONB value called 'data':
>>
>> {
>> "foo": "bar"
>> }
>>
>> and run
>>
>> jsonb_set(data, '{boo,baz}', '"newvalue"')
>>
>> I would expect the output to be:
>>
>> {
>> "foo": "bar",
>> "boo": {
>> "baz": "newvalue"
>>  }
>> }
>>
>> But that does not appear to work..
>>
>> Any suggestions would be appreciated.
>>
>
> Actually, it looks like I have to create all of the parent objects first
> before it would work... Is that correct?
>
> Deven

Yes, you are correct. The documentation[1] says:
> Returns target ... with new_value added if create_missing is true ...
> and the item designated by path does not exist.

There is nothing about a "path", only about a "new_value".
I think that is because it is impossible to know which kind of
intermediate containers (objects or arrays) would need to be created.

There is no easy way to create an arbitrary number of intermediate
objects, but in your particular case (only one subobject) it can be done
like this:

SELECT
    jsonb_set(
        CASE
            WHEN data ? 'boo'
            THEN data
            ELSE jsonb_set(data, array['boo'], '{}')
        END,
        '{boo,baz}'::text[],
        '"newvalue"'
    )
FROM (VALUES ('{"foo": "bar"}'::jsonb)) AS t(data)


[1] https://www.postgresql.org/docs/devel/static/functions-json.html
--
Best regards,
Vitaly Burovoy
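Vitaly's CASE expression above handles one level of missing parents. A hedged sketch of the "recursive subroutine" idea Deven mentions (the helper name and approach are assumptions, not from the thread) that creates every missing intermediate object before the final jsonb_set:

```sql
-- Sketch: jsonb_set that creates missing intermediate *objects* along the
-- path (arrays are not handled; that ambiguity is exactly why jsonb_set
-- itself does not do this).
CREATE OR REPLACE FUNCTION jsonb_set_deep(target jsonb, path text[], val jsonb)
RETURNS jsonb AS $$
DECLARE
    i int;
BEGIN
    -- Ensure every proper prefix of the path exists as an object.
    FOR i IN 1 .. array_length(path, 1) - 1 LOOP
        IF target #> path[1:i] IS NULL THEN
            target := jsonb_set(target, path[1:i], '{}'::jsonb);
        END IF;
    END LOOP;
    RETURN jsonb_set(target, path, val, true);
END;
$$ LANGUAGE plpgsql;

-- SELECT jsonb_set_deep('{"foo":"bar"}', '{boo,baz}', '"newvalue"');
-- -> {"foo": "bar", "boo": {"baz": "newvalue"}}
```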


Re: [GENERAL] jsonb_set for nested new item?

2016-09-23 Thread Deven Phillips
Actually, it looks like I have to create all of the parent objects first
before it would work... Is that correct?

Deven

On Fri, Sep 23, 2016 at 10:14 AM, Deven Phillips <deven.phill...@gmail.com>
wrote:

> Is there a way to set a nested element for which the parent paths do not
> yet exist?
>
> For example, if I have a JSONB value called 'data':
>
> {
> "foo": "bar"
> }
>
> and run
>
> jsonb_set(data, '{boo,baz}', '"newvalue"')
>
> I would expect the output to be:
>
> {
> "foo": "bar",
> "boo": {
> "baz": "newvalue"
>  }
> }
>
> But that does not appear to work..
>
> Any suggestions would be appreciated.
>
> Deven
>


[GENERAL] jsonb_set for nested new item?

2016-09-23 Thread Deven Phillips
Is there a way to set a nested element for which the parent paths do not
yet exist?

For example, if I have a JSONB value called 'data':

{
"foo": "bar"
}

and run

jsonb_set(data, '{boo,baz}', '"newvalue"')

I would expect the output to be:

{
"foo": "bar",
"boo": {
"baz": "newvalue"
 }
}

But that does not appear to work..

Any suggestions would be appreciated.

Deven


Re: [GENERAL] Verifying LISTEN subscriptions?

2016-09-17 Thread Deven Phillips
NVM, a quick search through the output of `\dfS` showed me the
`pg_listening_channels()` function, which returns the set of channel
names the current connection is subscribed to.


Cheers,

Deven
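For reference, a minimal illustration of that function (the channel name shown is hypothetical):

```sql
-- Subscribe, then ask the server what this connection is listening on.
LISTEN my_channel;

-- Returns one row per subscribed channel; here just "my_channel".
SELECT pg_listening_channels();

UNLISTEN my_channel;
```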

On Sat, Sep 17, 2016 at 8:45 AM, Deven Phillips <deven.phill...@gmail.com>
wrote:

> From a connection which has previously executed a LISTEN 
> statement, is there a query which I can run which will tell me if it is
> still listening on those channels?
>
> Thanks in advance!!
>
> Deven Phillips
>


[GENERAL] Verifying LISTEN subscriptions?

2016-09-17 Thread Deven Phillips
From a connection which has previously executed a LISTEN
statement, is there a query which I can run which will tell me if it is
still listening on those channels?

Thanks in advance!!

Deven Phillips


[GENERAL] How to insert/update a bunch of JSOB values?

2016-07-11 Thread Deven Phillips
We need to update a JSONB data structure in our tables to include an
'is_valid' flag. I was able to build a CTE which I use to create a temp
table containing the 'is_valid' value, the path where it needs to be set,
and the join criteria. Let's say that the temp table looks like:

id TEXT,
time_inserted TIMESTAMPTZ,
path TEXT[],
is_valid BOOLEAN,
PRIMARY KEY (id, time_inserted)


Let's say that the table with the data I want to modify has a JSONB
structure which needs to be updated in multiple paths:

{
    "path1": {
        "invalid_data": "here"
    },
    "path2": {
        "valid_data": "here"
    }, ...
}


For each path needing updates, I have a single entry in the temp table...
How do I structure the WITH..UPDATE..FROM query to apply all changes from
the temp table?

I've tried to use a simple case:

UPDATE target AS d SET data = jsonb_set(d.data, n.path, is_valid::JSONB, true)
FROM temp_table n
WHERE d.time_inserted = n.time_inserted AND d.data->>'id' = n.id


But this results in just the first update for a given PK being applied.

Any assistance would be appreciated!!!

Deven Phillips
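The "only the first update is applied" behavior is expected: when several temp-table rows join to the same target row, UPDATE ... FROM applies just one of them. One way around this, sketched under the table and column names assumed from the post above, is to fold all patches for a target row into a single jsonb value with a helper function and update once per row:

```sql
-- Sketch (assumed names from the post: target, temp_table, data, path,
-- is_valid): apply every matching patch row to one document in a loop.
CREATE OR REPLACE FUNCTION apply_validity_patches(
    doc jsonb, doc_id text, ts timestamptz)
RETURNS jsonb AS $$
DECLARE
    r record;
BEGIN
    FOR r IN SELECT path, is_valid
             FROM temp_table
             WHERE id = doc_id AND time_inserted = ts LOOP
        doc := jsonb_set(doc, r.path, to_jsonb(r.is_valid), true);
    END LOOP;
    RETURN doc;
END;
$$ LANGUAGE plpgsql;

-- One UPDATE per target row, with all of its patches already folded in.
UPDATE target d
SET data = apply_validity_patches(d.data, d.data->>'id', d.time_inserted)
WHERE EXISTS (
    SELECT 1 FROM temp_table n
    WHERE n.id = d.data->>'id' AND n.time_inserted = d.time_inserted);
```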


Re: [GENERAL] Appending key-value to JSONB tree

2016-02-18 Thread Deven Phillips
Answering my own question here... The gist is that if you need to add a new
key-value pair, you use *jsonb_set* on the non-existent key and then
provide the value as the final parameter.. The new stored procedure looks
like:

CREATE OR REPLACE FUNCTION set_user_token(email TEXT, expiration INTERVAL)
RETURNS TEXT AS $$
WITH newtoken AS (
    SELECT random_string(32) token, (now() + INTERVAL '6 months') expiry
),
updated AS (
    SELECT jsonb_set(data::jsonb,
                     (SELECT ARRAY['tokens', token] FROM newtoken),
                     (SELECT to_jsonb(expiry) FROM newtoken)) newdata
    FROM users
    WHERE email = $1
),
updatecomplete AS (
    UPDATE users
    SET data = (SELECT newdata FROM updated)
    WHERE email = $1
)
SELECT token FROM newtoken $$
LANGUAGE SQL;

The difficult part for me was figuring out how to build the array which
makes up the *path* parameter for *jsonb_set*...

Hope this helps others!!!

Deven

On Wed, Feb 17, 2016 at 10:47 AM, Deven Phillips <deven.phill...@gmail.com>
wrote:

> I have a "user" document with a key "tokens" and I would like to write a
> stored procedure for adding new token key-value pairs to the "tokens" part
> of the tree without removing the old values. I have figured out how to
> replace the existing value in the "tokens", but I cannot seem to wrap my
> head around appending a new key-value pair. Could someone suggest an
> approach (using PostgreSQL 9.5 BTW)...
>
> Here's my existing stored proc:
>
> CREATE OR REPLACE FUNCTION set_user_token(email TEXT, expiration INTERVAL)
> RETURNS TEXT AS $$
> WITH newtoken AS (
>     SELECT jsonb_build_object(random_string(32), (now() + $2)) token
> ),
> updated AS (
>     SELECT jsonb_set(data::jsonb, '{"tokens"}', (SELECT token FROM newtoken)) newdata
>     FROM users
>     WHERE email = $1
> ),
> updatecomplete AS (
>     UPDATE cc_users
>     SET data = (SELECT newdata FROM updated)
>     WHERE email = $1
> )
> SELECT jsonb_pretty(token) FROM newtoken $$
>
> LANGUAGE SQL;
>
> Thanks in advance!!!
>
> Deven Phillips
>


[GENERAL] Appending key-value to JSONB tree

2016-02-17 Thread Deven Phillips
I have a "user" document with a key "tokens" and I would like to write a
stored procedure for adding new token key-value pairs to the "tokens" part
of the tree without removing the old values. I have figured out how to
replace the existing value in the "tokens", but I cannot seem to wrap my
head around appending a new key-value pair. Could someone suggest an
approach (using PostgreSQL 9.5 BTW)...

Here's my existing stored proc:

CREATE OR REPLACE FUNCTION set_user_token(email TEXT, expiration INTERVAL)
RETURNS TEXT AS $$
WITH newtoken AS (
    SELECT jsonb_build_object(random_string(32), (now() + $2)) token
),
updated AS (
    SELECT jsonb_set(data::jsonb, '{"tokens"}', (SELECT token FROM newtoken)) newdata
    FROM users
    WHERE email = $1
),
updatecomplete AS (
    UPDATE cc_users
    SET data = (SELECT newdata FROM updated)
    WHERE email = $1
)
SELECT jsonb_pretty(token) FROM newtoken $$
LANGUAGE SQL;

Thanks in advance!!!

Deven Phillips


[GENERAL] Question on how to use to_timestamp()

2016-02-13 Thread Deven Phillips
I'm trying to convert a series of ISO8601 strings into TIMESTAMPs for use
with a function:

CREATE OR REPLACE FUNCTION v1_nexus_vlan_count(id TEXT, start_time
TIMESTAMP, end_time TIMESTAMP)
RETURNS TEXT AS $$
SELECT jsonb_pretty(jsonb_agg(row_to_json(datapoints))) AS data_array FROM (
    SELECT
        data->>'timestamp' AS collection_time,
        data->'data'->'vlans'->>'available' AS available,
        data->'data'->'vlans'->>'total' AS total,
        data->'data'->'vlans'->>'used' AS used
    FROM
        gathered_data
    WHERE
        data->>'id' = $1 AND
        to_timestamp(data->>'timestamp', 'YYYY-MM-DDTHH24:MI:SSZ') >= $2 AND
        to_timestamp(data->>'timestamp', 'YYYY-MM-DDTHH24:MI:SSZ') <= $3
    ORDER BY
        to_timestamp(data->>'timestamp', 'YYYY-MM-DDTHH24:MI:SSZ')) AS
datapoints $$
LANGUAGE SQL;

The conversions for to_timestamp() seems to be my problem. I keep getting
an error:

# SELECT to_timestamp('2016-01-01T00:00:00Z', 'YYYY-MM-DDTHH24:MI:SSZ');
ERROR:  invalid value ":0" for "MI"
DETAIL:  Value must be an integer.
Time: 1.016 ms

Could anyone suggest what it is that I might be doing wrong here?

Thanks in advance!!!

Deven


Re: [GENERAL] Question on how to use to_timestamp()

2016-02-13 Thread Deven Phillips
Thanks all!
On Feb 13, 2016 11:06 PM, "Tom Lane" <t...@sss.pgh.pa.us> wrote:

> Vitaly Burovoy <vitaly.buro...@gmail.com> writes:
> > On 2/13/16, Deven Phillips <deven.phill...@gmail.com> wrote:
> >> I'm trying to convert a series of ISO8601 strings into TIMESTAMPs for
> use
> >> with a function: ...
>
> > If your data is already in a correct ISO8601 format, you can use a
> > direct cast to timestamptz type:
>
> Yeah.  95% of the time, the answer to "how to use to_timestamp()" is
> "don't".  The native input converter for the date/timestamp/timestamptz
> data types is perfectly capable of parsing most common date formats,
> with a lot less muss and fuss than to_timestamp.  At worst you might have
> to give it a hint about DMY vs. MDY field ordering via the DateStyle
> setting.  If your input is YMD order then you don't have to worry about
> that at all.
>
> regards, tom lane
>
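A concrete illustration of the advice above, assuming the input really is ISO 8601:

```sql
-- Preferred: let the native input converter parse the string directly.
SELECT '2016-01-01T00:00:00Z'::timestamptz;

-- If to_timestamp() must be used, non-pattern characters such as the
-- literal T and Z need to be double-quoted in the format string:
SELECT to_timestamp('2016-01-01T00:00:00Z', 'YYYY-MM-DD"T"HH24:MI:SS"Z"');
```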


Re: [GENERAL] PostgreSQL 9.5, mysql_fdw, and IMPORT FOREIGN SCHEMA question

2016-01-08 Thread Deven Phillips
Apparently not, though I have done so in the past on PostgreSQL 9.4. It
appears to be related to the "schema" with which the foreign table is
associated:

mydb=# CREATE FOREIGN TABLE customer (
id BIGINT,
name VARCHAR(150),
parent_id BIGINT,
oracle_id BIGINT,
last_updated_time TIMESTAMP,
created_time TIMESTAMP) SERVER mysql;
CREATE FOREIGN TABLE
mydb=# SELECT * FROM customer;
ERROR:  failed to prepare the MySQL query:
Table 'public.customer' doesn't exist

Any suggestions would be greatly appreciated!

Deven


On Fri, Jan 8, 2016 at 10:26 AM, Adrian Klaver <adrian.kla...@aklaver.com>
wrote:

> On 01/08/2016 07:04 AM, Deven Phillips wrote:
>
>> Hi all,
>>
>>  I installed the newly released PostgreSQL 9.5 this morning and
>> compiled the latest mysql_fdw extension from EnterpriseDB. I was able to
>> create the SERVER and USER MAPPING, but I cannot seem to get IMPORT
>> FOREIGN SCHEMA to do anything. The command executes without error, but
>> none of the table schemas are imported from the MySQL DB. Does anyone
>> have any advice, links, documentation which might be of help?
>>
>
> Can you CREATE FOREIGN TABLE and use it?
>
>
>> Thanks in advance!
>>
>> Deven
>>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: [GENERAL] PostgreSQL 9.5, mysql_fdw, and IMPORT FOREIGN SCHEMA question

2016-01-08 Thread Deven Phillips
I DID get a foreign table to work using the following:

CREATE FOREIGN TABLE customer (
id BIGINT,
name VARCHAR(150),
parent_id BIGINT,
oracle_id BIGINT,
last_updated_time TIMESTAMP,
created_time TIMESTAMP) SERVER mysql OPTIONS (dbname 'mydb', table_name
'customer');

And I was subsequently able to query that table from PostgreSQL..

I tried to add the "OPTIONS" to the IMPORT FOREIGN SCHEMA and got an error
that "dbname" is not a valid parameter.

Thanks,

Deven

On Fri, Jan 8, 2016 at 11:38 PM, Deven Phillips <deven.phill...@gmail.com>
wrote:

> Apparently not, though I have done so in the past on PostgreSQL 9.4. It
> appears to be related to the "schema" with which the foreign table is
> associated:
>
> mydb=# CREATE FOREIGN TABLE customer (
> id BIGINT,
> name VARCHAR(150),
> parent_id BIGINT,
> oracle_id BIGINT,
> last_updated_time TIMESTAMP,
> created_time TIMESTAMP) SERVER mysql;
> CREATE FOREIGN TABLE
> mydb=# SELECT * FROM customer;
> ERROR:  failed to prepare the MySQL query:
> Table 'public.customer' doesn't exist
>
> Any suggestions would be greatly appreciated!
>
> Deven
>
>
> On Fri, Jan 8, 2016 at 10:26 AM, Adrian Klaver <adrian.kla...@aklaver.com>
> wrote:
>
>> On 01/08/2016 07:04 AM, Deven Phillips wrote:
>>
>>> Hi all,
>>>
>>>  I installed the newly released PostgreSQL 9.5 this morning and
>>> compiled the latest mysql_fdw extension from EnterpriseDB. I was able to
>>> create the SERVER and USER MAPPING, but I cannot seem to get IMPORT
>>> FOREIGN SCHEMA to do anything. The command executes without error, but
>>> none of the table schemas are imported from the MySQL DB. Does anyone
>>> have any advice, links, documentation which might be of help?
>>>
>>
>> Can you CREATE FOREIGN TABLE and use it?
>>
>>
>>> Thanks in advance!
>>>
>>> Deven
>>>
>>
>>
>> --
>> Adrian Klaver
>> adrian.kla...@aklaver.com
>>
>
>


Re: [GENERAL] PostgreSQL 9.5, mysql_fdw, and IMPORT FOREIGN SCHEMA question

2016-01-08 Thread Deven Phillips
Additional details. The MySQL server I am targeting is running
version 5.1.73. Perhaps it's too old of a version to support foreign schema
import?

Deven

On Fri, Jan 8, 2016 at 11:45 PM, Deven Phillips <deven.phill...@gmail.com>
wrote:

> I DID get a foreign table to work using the following:
>
> CREATE FOREIGN TABLE customer (
> id BIGINT,
> name VARCHAR(150),
> parent_id BIGINT,
> oracle_id BIGINT,
> last_updated_time TIMESTAMP,
> created_time TIMESTAMP) SERVER mysql OPTIONS (dbname 'mydb', table_name
> 'customer');
>
> And I was subsequently able to query that table from PostgreSQL..
>
> I tried to add the "OPTIONS" to the IMPORT FOREIGN SCHEMA and got an error
> that "dbname" is not a valid parameter.
>
> Thanks,
>
> Deven
>
> On Fri, Jan 8, 2016 at 11:38 PM, Deven Phillips <deven.phill...@gmail.com>
> wrote:
>
>> Apparently not, though I have done so in the past on PostgreSQL 9.4. It
>> appears to be related to the "schema" with which the foreign table is
>> associated:
>>
>> mydb=# CREATE FOREIGN TABLE customer (
>> id BIGINT,
>> name VARCHAR(150),
>> parent_id BIGINT,
>> oracle_id BIGINT,
>> last_updated_time TIMESTAMP,
>> created_time TIMESTAMP) SERVER mysql;
>> CREATE FOREIGN TABLE
>> mydb=# SELECT * FROM customer;
>> ERROR:  failed to prepare the MySQL query:
>> Table 'public.customer' doesn't exist
>>
>> Any suggestions would be greatly appreciated!
>>
>> Deven
>>
>>
>> On Fri, Jan 8, 2016 at 10:26 AM, Adrian Klaver <adrian.kla...@aklaver.com
>> > wrote:
>>
>>> On 01/08/2016 07:04 AM, Deven Phillips wrote:
>>>
>>>> Hi all,
>>>>
>>>>  I installed the newly released PostgreSQL 9.5 this morning and
>>>> compiled the latest mysql_fdw extension from EnterpriseDB. I was able to
>>>> create the SERVER and USER MAPPING, but I cannot seem to get IMPORT
>>>> FOREIGN SCHEMA to do anything. The command executes without error, but
>>>> none of the table schemas are imported from the MySQL DB. Does anyone
>>>> have any advice, links, documentation which might be of help?
>>>>
>>>
>>> Can you CREATE FOREIGN TABLE and use it?
>>>
>>>
>>>> Thanks in advance!
>>>>
>>>> Deven
>>>>
>>>
>>>
>>> --
>>> Adrian Klaver
>>> adrian.kla...@aklaver.com
>>>
>>
>>
>


Re: [GENERAL] PostgreSQL 9.5, mysql_fdw, and IMPORT FOREIGN SCHEMA question

2016-01-08 Thread Deven Phillips
I think that I may have found the problem. It looks like the mysql_fdw uses
the following query to gather information about the foreign schema:

SELECT
 t.TABLE_NAME,
 c.COLUMN_NAME,
 CASE
   WHEN c.DATA_TYPE = 'enum' THEN LOWER(CONCAT(c.COLUMN_NAME, '_t'))
   WHEN c.DATA_TYPE = 'tinyint' THEN 'smallint'
   WHEN c.DATA_TYPE = 'mediumint' THEN 'integer'
   WHEN c.DATA_TYPE = 'tinyint unsigned' THEN 'smallint'
   WHEN c.DATA_TYPE = 'smallint unsigned' THEN 'integer'
   WHEN c.DATA_TYPE = 'mediumint unsigned' THEN 'integer'
   WHEN c.DATA_TYPE = 'int unsigned' THEN 'bigint'
   WHEN c.DATA_TYPE = 'bigint unsigned' THEN 'numeric(20)'
   WHEN c.DATA_TYPE = 'double' THEN 'double precision'
   WHEN c.DATA_TYPE = 'float' THEN 'real'
   WHEN c.DATA_TYPE = 'datetime' THEN 'timestamp'
   WHEN c.DATA_TYPE = 'longtext' THEN 'text'
   WHEN c.DATA_TYPE = 'mediumtext' THEN 'text'
   WHEN c.DATA_TYPE = 'blob' THEN 'bytea'
   ELSE c.DATA_TYPE
 END,
 c.COLUMN_TYPE,
 IF(c.IS_NULLABLE = 'NO', 't', 'f'),
 c.COLUMN_DEFAULT
FROM
 information_schema.TABLES AS t
JOIN
 information_schema.COLUMNS AS c
ON
 t.TABLE_CATALOG = c.TABLE_CATALOG AND t.TABLE_SCHEMA = c.TABLE_SCHEMA AND
t.TABLE_NAME = c.TABLE_NAME
WHERE
 t.TABLE_SCHEMA = '%s'

When I poked around inside of MySQL, I found that the t.TABLE_CATALOG and
c.TABLE_CATALOG values are NULL. Comparing NULLs in MySQL with the equals
sign (=) yields NULL, which is treated as FALSE, so the JOIN never
matches. As a result, the query returns 0 tables and 0 columns to be
imported.
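The behavior is easy to confirm on the MySQL side, and MySQL's NULL-safe equality operator `<=>` is one plausible fix for the join condition (this is a sketch of the idea, not necessarily the patch that was submitted):

```sql
-- MySQL: ordinary equality never matches NULLs; <=> is NULL-safe.
SELECT NULL = NULL;    -- NULL (treated as false in a JOIN condition)
SELECT NULL <=> NULL;  -- 1

-- So the join condition could read:
-- ON  t.TABLE_CATALOG <=> c.TABLE_CATALOG
-- AND t.TABLE_SCHEMA   =  c.TABLE_SCHEMA
-- AND t.TABLE_NAME     =  c.TABLE_NAME
```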

Deven

On Fri, Jan 8, 2016 at 11:50 PM, Deven Phillips <deven.phill...@gmail.com>
wrote:

> Additional details. The MySQL server I am targeting is running
> version 5.1.73. Perhaps it's too old of a version to support foreign schema
> import?
>
> Deven
>
> On Fri, Jan 8, 2016 at 11:45 PM, Deven Phillips <deven.phill...@gmail.com>
> wrote:
>
>> I DID get a foreign table to work using the following:
>>
>> CREATE FOREIGN TABLE customer (
>> id BIGINT,
>> name VARCHAR(150),
>> parent_id BIGINT,
>> oracle_id BIGINT,
>> last_updated_time TIMESTAMP,
>> created_time TIMESTAMP) SERVER mysql OPTIONS (dbname 'mydb', table_name
>> 'customer');
>>
>> And I was subsequently able to query that table from PostgreSQL..
>>
>> I tried to add the "OPTIONS" to the IMPORT FOREIGN SCHEMA and got an
>> error that "dbname" is not a valid parameter.
>>
>> Thanks,
>>
>> Deven
>>
>> On Fri, Jan 8, 2016 at 11:38 PM, Deven Phillips <deven.phill...@gmail.com
>> > wrote:
>>
>>> Apparently not, though I have done so in the past on PostgreSQL 9.4. It
>>> appears to be related to the "schema" with which the foreign table is
>>> associated:
>>>
>>> mydb=# CREATE FOREIGN TABLE customer (
>>> id BIGINT,
>>> name VARCHAR(150),
>>> parent_id BIGINT,
>>> oracle_id BIGINT,
>>> last_updated_time TIMESTAMP,
>>> created_time TIMESTAMP) SERVER mysql;
>>> CREATE FOREIGN TABLE
>>> mydb=# SELECT * FROM customer;
>>> ERROR:  failed to prepare the MySQL query:
>>> Table 'public.customer' doesn't exist
>>>
>>> Any suggestions would be greatly appreciated!
>>>
>>> Deven
>>>
>>>
>>> On Fri, Jan 8, 2016 at 10:26 AM, Adrian Klaver <
>>> adrian.kla...@aklaver.com> wrote:
>>>
>>>> On 01/08/2016 07:04 AM, Deven Phillips wrote:
>>>>
>>>>> Hi all,
>>>>>
>>>>>  I installed the newly released PostgreSQL 9.5 this morning and
>>>>> compiled the latest mysql_fdw extension from EnterpriseDB. I was able
>>>>> to
>>>>> create the SERVER and USER MAPPING, but I cannot seem to get IMPORT
>>>>> FOREIGN SCHEMA to do anything. The command executes without error, but
>>>>> none of the table schemas are imported from the MySQL DB. Does anyone
>>>>> have any advice, links, documentation which might be of help?
>>>>>
>>>>
>>>> Can you CREATE FOREIGN TABLE and use it?
>>>>
>>>>
>>>>> Thanks in advance!
>>>>>
>>>>> Deven
>>>>>
>>>>
>>>>
>>>> --
>>>> Adrian Klaver
>>>> adrian.kla...@aklaver.com
>>>>
>>>
>>>
>>
>


Re: [GENERAL] PostgreSQL 9.5, mysql_fdw, and IMPORT FOREIGN SCHEMA question

2016-01-08 Thread Deven Phillips
OK, that did it! I submitted 2 PRs to the EnterpriseDB/mysql_fdw GitHub
project which should resolve all outstanding issues for me.

https://github.com/EnterpriseDB/mysql_fdw/pull/81

https://github.com/EnterpriseDB/mysql_fdw/pull/82

Isn't it great when Open Source works like it's supposed to!!!

Deven

On Sat, Jan 9, 2016 at 12:06 AM, Deven Phillips <deven.phill...@gmail.com>
wrote:

> I think that I may have found the problem. It looks like the mysql_fdw
> uses the following query to gather information about the foreign schema:
>
> SELECT
>  t.TABLE_NAME,
>  c.COLUMN_NAME,
>  CASE
>WHEN c.DATA_TYPE = 'enum' THEN LOWER(CONCAT(c.COLUMN_NAME, '_t'))
>WHEN c.DATA_TYPE = 'tinyint' THEN 'smallint'
>WHEN c.DATA_TYPE = 'mediumint' THEN 'integer'
>WHEN c.DATA_TYPE = 'tinyint unsigned' THEN 'smallint'
>WHEN c.DATA_TYPE = 'smallint unsigned' THEN 'integer'
>WHEN c.DATA_TYPE = 'mediumint unsigned' THEN 'integer'
>WHEN c.DATA_TYPE = 'int unsigned' THEN 'bigint'
>WHEN c.DATA_TYPE = 'bigint unsigned' THEN 'numeric(20)'
>WHEN c.DATA_TYPE = 'double' THEN 'double precision'
>WHEN c.DATA_TYPE = 'float' THEN 'real'
>WHEN c.DATA_TYPE = 'datetime' THEN 'timestamp'
>WHEN c.DATA_TYPE = 'longtext' THEN 'text'
>WHEN c.DATA_TYPE = 'mediumtext' THEN 'text'
>WHEN c.DATA_TYPE = 'blob' THEN 'bytea'
>ELSE c.DATA_TYPE
>  END,
>  c.COLUMN_TYPE,
>  IF(c.IS_NULLABLE = 'NO', 't', 'f'),
>  c.COLUMN_DEFAULT
> FROM
>  information_schema.TABLES AS t
> JOIN
>  information_schema.COLUMNS AS c
> ON
>  t.TABLE_CATALOG = c.TABLE_CATALOG AND t.TABLE_SCHEMA = c.TABLE_SCHEMA AND
> t.TABLE_NAME = c.TABLE_NAME
> WHERE
>  t.TABLE_SCHEMA = '%s'
>
> When I poked around inside of MySQL that t.TABLE_CATALOG and
> c.TABLE_CATALOG values are NULL. Trying to compare NULLs in MySQL using an
> equals sign (=) results in a "FALSE" and thus the "JOIN" does not provide
> an actual linkage. So, the query returns 0 tables and 0 columns to be
> imported.
>
> Deven
>
> On Fri, Jan 8, 2016 at 11:50 PM, Deven Phillips <deven.phill...@gmail.com>
> wrote:
>
>> Additional details. The MySQL server I am targeting is running
>> version 5.1.73. Perhaps it's too old of a version to support foreign schema
>> import?
>>
>> Deven
>>
>> On Fri, Jan 8, 2016 at 11:45 PM, Deven Phillips <deven.phill...@gmail.com
>> > wrote:
>>
>>> I DID get a foreign table to work using the following:
>>>
>>> CREATE FOREIGN TABLE customer (
>>> id BIGINT,
>>> name VARCHAR(150),
>>> parent_id BIGINT,
>>> oracle_id BIGINT,
>>> last_updated_time TIMESTAMP,
>>> created_time TIMESTAMP) SERVER mysql OPTIONS (dbname 'mydb', table_name
>>> 'customer');
>>>
>>> And I was subsequently able to query that table from PostgreSQL..
>>>
>>> I tried to add the "OPTIONS" to the IMPORT FOREIGN SCHEMA and got an
>>> error that "dbname" is not a valid parameter.
>>>
>>> Thanks,
>>>
>>> Deven
>>>
>>> On Fri, Jan 8, 2016 at 11:38 PM, Deven Phillips <
>>> deven.phill...@gmail.com> wrote:
>>>
>>>> Apparently not, though I have done so in the past on PostgreSQL 9.4. It
>>>> appears to be related to the "schema" with which the foreign table is
>>>> associated:
>>>>
>>>> mydb=# CREATE FOREIGN TABLE customer (
>>>> id BIGINT,
>>>> name VARCHAR(150),
>>>> parent_id BIGINT,
>>>> oracle_id BIGINT,
>>>> last_updated_time TIMESTAMP,
>>>> created_time TIMESTAMP) SERVER mysql;
>>>> CREATE FOREIGN TABLE
>>>> mydb=# SELECT * FROM customer;
>>>> ERROR:  failed to prepare the MySQL query:
>>>> Table 'public.customer' doesn't exist
>>>>
>>>> Any suggestions would be greatly appreciated!
>>>>
>>>> Deven
>>>>
>>>>
>>>> On Fri, Jan 8, 2016 at 10:26 AM, Adrian Klaver <
>>>> adrian.kla...@aklaver.com> wrote:
>>>>
>>>>> On 01/08/2016 07:04 AM, Deven Phillips wrote:
>>>>>
>>>>>> Hi all,
>>>>>>
>>>>>>  I installed the newly released PostgreSQL 9.5 this morning and
>>>>>> compiled the latest mysql_fdw extension from EnterpriseDB. I was able
>>>>>> to
>>>>>> create the SERVER and USER MAPPING, but I cannot seem to get IMPORT
>>>>>> FOREIGN SCHEMA to do anything. The command executes without error, but
>>>>>> none of the table schemas are imported from the MySQL DB. Does anyone
>>>>>> have any advice, links, documentation which might be of help?
>>>>>>
>>>>>
>>>>> Can you CREATE FOREIGN TABLE and use it?
>>>>>
>>>>>
>>>>>> Thanks in advance!
>>>>>>
>>>>>> Deven
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Adrian Klaver
>>>>> adrian.kla...@aklaver.com
>>>>>
>>>>
>>>>
>>>
>>
>


[GENERAL] PostgreSQL 9.5, mysql_fdw, and IMPORT FOREIGN SCHEMA question

2016-01-08 Thread Deven Phillips
Hi all,

I installed the newly released PostgreSQL 9.5 this morning and compiled
the latest mysql_fdw extension from EnterpriseDB. I was able to create the
SERVER and USER MAPPING, but I cannot seem to get IMPORT FOREIGN SCHEMA to
do anything. The command executes without error, but none of the table
schemas are imported from the MySQL DB. Does anyone have any advice, links,
documentation which might be of help?

Thanks in advance!

Deven


Re: [GENERAL] INSERT ... ON CONFLICT DO UPDATE

2015-07-20 Thread Deven Phillips
For this purpose, I have seen it recommended to use a UUID instead of a
randomly generated integer. I do this myself for production applications
and over millions of records I have yet to log a conflict. Also, as stated
above, you could create a plpgsql function which would achieve exactly what
you want (retry insert until it succeeds).

Just my 2 cents,

Deven
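The "retry insert until it succeeds" pattern mentioned above can be sketched like this (the table and column names are assumptions for illustration only):

```sql
-- Sketch: keep generating random keys until an INSERT succeeds, trapping
-- the unique_violation raised on a collision. Assumes a table
--   CREATE TABLE things (key bigint PRIMARY KEY, payload text);
CREATE OR REPLACE FUNCTION insert_with_random_key(p_payload text)
RETURNS bigint AS $$
DECLARE
    k bigint;
BEGIN
    LOOP
        k := floor(random() * 1e12)::bigint;
        BEGIN
            INSERT INTO things (key, payload) VALUES (k, p_payload);
            RETURN k;  -- success: hand the generated key back
        EXCEPTION WHEN unique_violation THEN
            NULL;      -- collision: loop and try a new key
        END;
    END LOOP;
END;
$$ LANGUAGE plpgsql;
```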

On Sun, Jul 19, 2015 at 9:47 AM, Rafal Pietrak ra...@ztk-rp.eu wrote:

 Hi,

 W dniu 19.07.2015 o 14:10, Geoff Winkless pisze:
  On 19 July 2015 at 11:30, Rafal Pietrak ra...@ztk-rp.eu wrote:
 
  when I have to invent/introduce additional
  features/columns/attributes (like a key in addition to a sequence),
  which are not required by the design, but necessary for
 implementation)
  is a workaround (almost by definition).
 
 
  I'm sorry that you feel defensive about this, and apologies for
  repeating myself, but the fact that the random key can be duplicated
  means it should not be used as a primary key, so using a sequence as a
  primary key is not a workaround, it's a correction to the design.

 OK. I think I need to apology myself, too. I hope my defense wasn't too
 fierce.

 But I need to clearify one thing:

 Although a random can duplicate its previous values, my random(s)
 (which are created for this application purpose) cannot be duplicated
 when it's stored in the database as live active data. I understand,
 that UNIQUE constraint is precisely the RDBMS tool to guarantee that.

 Naturally, if I put a UNIQUE constraint on that column, or make it a PK,
 is just a matter of choice here. That shouldn't rise concern. I just use
 tools RDBMS provides for semantics the application needs.


 
  Notwithstanding that, the reason UPSERT is required is because it's
  possible that two competing transactions can end up fighting over an
  INSERT and the workarounds that are required are either highly complex
  or not 100% successful (eg
  http://www.depesz.com/2012/06/10/why-is-upsert-so-complicated/).
 

 I knew that Depesz publication before.

 Actually it was the reason I've brought up my usage scenario here now.
 I'm not as competent as Depesz, so:

 1. I worry, that while restarting a failed INSERT transaction at
 application level I miss something important (you people know by heart)
 and unwillingly corrupt and/or suboptimise my application/data. (much
 to the point Depesz described).

 2. But, since the majority of the hard work of postgresql UPSERT
 implementation is already done; I wanted to check out if the usage
 scenario I point out falls into it as a case, or is covered by it by
 some indiomatic SQL sequence, or otherwise. From current discussion I
 gather: its otherwise - it isn't considered as applicable. (so I
 concluded: I'll live with manual re-attempt of failed insert)

 -R


 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general



Re: [GENERAL] Muti-table join and roll-up aggregate data into nested JSON?

2015-03-31 Thread Deven Phillips
OK, I figured out this part and came up with:

SELECT
row.snt_code AS snt_code,
row.vdc AS vdc,
row.uuid AS uuid,
row_to_json(row, true)::json AS json
FROM (
SELECT
vm.*,
array_agg(vi),
CONCAT('https://mysite.mydomain.tld/v3/customer/', vm.snt_code,
'/vdc/', vm.vdc, '/vm/', vm.uuid) AS self,
'cc.v3.vm' AS type
FROM virtual_machines vm
LEFT JOIN virtual_interfaces vi ON vm.vmid=vi.vmid
GROUP BY vm.snt_code, vm.vdc, vm.vmid, vm.uuid, vm.name, vm.os,
vm.service_type, vm.template_name
) row;

Now, the next step is that virtual_interfaces and virtual_machines are
actually views I defined. I would like to break those out into joined
tables and still aggregate the data into an array. The problem I am having
is that I cannot put the results of multiple tables into a single
array_agg() call. How can I aggregate multiple joined tables into a single
array?
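
One approach, sketched here with table and column names borrowed from
earlier messages in this thread (so they may need adjusting): perform the
multi-table join inside a correlated subquery and aggregate that
subquery's rows into a single array:

```sql
SELECT vm.*,
       (SELECT array_agg(i)   -- one array element per joined interface row
          FROM (SELECT vi.mac_address, ip.address, vlan.vlan_id
                  FROM virtualinterface vi
                  JOIN ipaddress  ip   ON ip.virtual_interface_id = vi.id
                  JOIN virtuallan vlan ON vlan.id = ip.network_id
                 WHERE vi.virtual_machine_id = vm.vmid) i
       ) AS interfaces
  FROM virtual_machines vm;
```

Because the join happens inside the subquery, array_agg() sees a single
composite row per interface, sidestepping the one-table-per-aggregate
limitation.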

Thanks again for the help!!

Deven

On Mon, Mar 30, 2015 at 10:25 PM, Deven Phillips deven.phill...@gmail.com
wrote:

 I have already attempted a similar approach and I could not find a way to
 pass the outer value of the VM ID to the inner SELECT. For example:

 SELECT
 row.snt_code AS snt_code,
 row.vdc AS vdc,
 row.uuid AS uuid,
 row_to_json(row, true)::json AS json
 FROM (
 SELECT
 vm.*,
 CONCAT('https://int.cloudcontrol.sgns.net/v3/customer/',
 vm.snt_code, '/vdc/', vm.vdc, '/vm/', vm.uuid) AS self,
 'cc.v3.sungardas.vm' AS type,
 (SELECT DISTINCT * FROM virtual_interfaces WHERE vmid=vm.id) as interfaces
 FROM virtual_machines vm
 ) row;

 Placing the vm.id value there for the WHERE clause gives the error:

 SQL Error [42703]: ERROR: column vm.id does not exist
   Position: 351
   ERROR: column vm.id does not exist
   Position: 351

 Is there some way to make that value available to the inner select?

 Thanks in advance!

 Deven

 On Mon, Mar 30, 2015 at 5:46 PM, Merlin Moncure mmonc...@gmail.com
 wrote:

 On Mon, Mar 30, 2015 at 4:22 PM, Deven Phillips
 deven.phill...@gmail.com wrote:
  I'm using PostgreSQL 9.4.1 on Ubuntu 14.10.
 
  The function does the following:
 
  DROP FUNCTION get_vm_with_interfaces(vm_id BIGINT);
 
  CREATE OR REPLACE FUNCTION get_virtual_interfaces(vm_id BIGINT) RETURNS
  jsonb AS $$
  DECLARE
  res jsonb;
  BEGIN
  SELECT array_to_json(array_agg(row_to_json(i, true)), true)
  FROM (
  SELECT DISTINCT * FROM virtual_interfaces WHERE vmid=vm_id)
 i
  INTO res;
  RETURN res;
  END;
  $$ LANGUAGE PLPGSQL;

 please, try to refrain from top posting.  particularly with emails
 like this where the context of the question is important.  Anyways,
 your inner function could be trivially inlined as so:

 SELECT row_to_json(row) AS json
 FROM (
 SELECT
 c.snt_code AS snt_code,
 vdc.id AS vdc,
 vm.uuid AS uuid,
 vm.name AS name,
 vm.os AS os,
 vm.service_type AS service_type,
 vm.template_name AS template_name,
 ( -- get_vm_with_interfaces(vm.id)
   SELECT array_to_json(array_agg(row_to_json(i, true)), true)
   FROM (
 SELECT DISTINCT * FROM virtual_interfaces vi WHERE vmid=vm_id
 ) i
   ) as interfaces
 FROM liquorstore_customer c
 LEFT JOIN liquorstore_virtualdatacenter vdc ON c.id=vdc.customer_id
 LEFT JOIN liquorstore_virtualmachine vm ON vm.virtual_data_center_id=
 vdc.id
 WHERE c.snt_code='abcd' AND vdc.id=111 AND
 vm.uuid='422a141f-5e46-b0f2-53b8-e31070c883ed'
 ) row

 I would personally simplify the subquery portion to:
 ( -- get_vm_with_interfaces(vm.id)
   SELECT array_agg(i)
   FROM (
 SELECT DISTINCT * FROM virtual_interfaces vi WHERE vmid=vm_id
 ) i

 , allowing for the outer 'to_json' to  handle the final
 transformation.  I'm not going to do it for you, but you could
 probably simplify the query even further by moving the aggregation out
 of a correlated subquery and into the basic field list, which would be
 faster for certain distributions of data.

 Also, a note about jsonb, which you used inside the inner function.
 jsonb is much better than type 'json' for any case involving
 manipulation of the json, searching, or repeated sub-document
 extraction.  However, for serialization to an application, it is
 basically pessimal as it involves building up internal structures that
 the vanilla json type does not involve. The basic rule of thumb is: for
 serialization, json; for everything else, jsonb.
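
 The distinction is easy to demonstrate; a sketch (the literal is
 arbitrary): json preserves the input text verbatim, while jsonb
 normalizes it on input:

 ```sql
 SELECT '{"b": 1, "a": 2}'::json;   -- returned exactly as typed
 SELECT '{"b": 1, "a": 2}'::jsonb;  -- keys reordered: {"a": 2, "b": 1}
 ```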

 merlin





[GENERAL] Multi-table join and roll-up aggregate data into nested JSON?

2015-03-30 Thread Deven Phillips
I have a multi-table join which will return 1 row per interface and I
would like to aggregate the interfaces as a nested section on the resulting
JSON output. Could someone help me to figure out how to accomplish that?

Example:

SELECT row_to_json(row) AS json
 FROM (
 SELECT
 c.snt_code AS snt_code,
 vdc.id AS vdc,
 vm.uuid AS uuid,
 vm.name AS name,
 vm.os AS os,
 vm.service_type AS service_type,
 vm.template_name AS template_name,
 vi.virtual_machine_id AS vmid,
 vi.mac_address AS mac_address,
 vi.order AS sort,
 ip.address AS ip_address,
 ip.is_reserved AS is_reserved,
 ip.is_gateway AS is_gateway,
 vlan.vlan_id AS vlan,
 false AS is_backend
 FROM customer c
 LEFT JOIN virtualdatacenter vdc ON c.id=vdc.customer_id
 LEFT JOIN virtualmachine vm ON vm.virtual_data_center_id=vdc.id
 LEFT JOIN virtualinterface vi ON vm.id=vi.virtual_machine_id
 INNER JOIN ipaddress ip ON vi.id=ip.virtual_interface_id
 INNER JOIN virtuallan vlan ON ip.network_id=vlan.id
 WHERE c.snt_code='abcd' AND vdc.id=111 AND
 vm.uuid='422a141f-5e46-b0f2-53b8-e31070c883ed'
 ) row


The output is 2 rows of JSON data, but I would like to roll up those 2 rows
so that the 2 virtualinterfaces are in a nested JSON field called
"interfaces"... The only way I have found to accomplish this so far is to
use a function to grab the joined interface data like:

SELECT row_to_json(row) AS json
FROM (
SELECT
c.snt_code AS snt_code,
vdc.id AS vdc,
vm.uuid AS uuid,
vm.name AS name,
vm.os AS os,
vm.service_type AS service_type,
vm.template_name AS template_name,
(get_vm_with_interfaces(vm.id)) as interfaces
FROM liquorstore_customer c
LEFT JOIN liquorstore_virtualdatacenter vdc ON c.id=vdc.customer_id
LEFT JOIN liquorstore_virtualmachine vm ON vm.virtual_data_center_id=
vdc.id
WHERE c.snt_code='abcd' AND vdc.id=111 AND
vm.uuid='422a141f-5e46-b0f2-53b8-e31070c883ed'
) row


Is there a way to do this in a single join?

Thank in advance!

Deven


Re: [GENERAL] Foreign Data Wrapper for remote view?

2015-03-30 Thread Deven Phillips
Nevermind, it works exactly as I would have expected and it is the exact
speed improvement I needed!

Deven

On Mon, Mar 30, 2015 at 10:12 AM, Deven Phillips deven.phill...@gmail.com
wrote:

 To prevent multiple foreign table scans during queries, I would like to be
 able to create an FDW against a VIEW in a foreign MySQL database. Is this
 possible?

 Thanks in advance!

 Deven



[GENERAL] Foreign Data Wrapper for remote view?

2015-03-30 Thread Deven Phillips
To prevent multiple foreign table scans during queries, I would like to be
able to create an FDW against a VIEW in a foreign MySQL database. Is this
possible?

Thanks in advance!

Deven


[GENERAL] JSON pretty and selecting nested JSON fields

2015-03-30 Thread Deven Phillips
Hi all,

I have a query which selects several rows of data, and contained in one
of those rows is some aggregated JSON data. I am using row_to_json() to
make the whole output JSON, and I am providing "true" for pretty formatting
of the JSON. The problem that I am seeing is that the nested JSON block is
not being prettified along with the outer JSON.

Example:

I have a function which takes a single key param and returns a JSON array:

CREATE OR REPLACE FUNCTION get_virtual_interfaces(vm_id BIGINT) RETURNS
jsonb AS $$
DECLARE
res jsonb;
BEGIN
SELECT array_to_json(array_agg(row_to_json(i, true)), true)
FROM (
SELECT DISTINCT * FROM virtual_interfaces WHERE vmid=vm_id) i
INTO res;
RETURN res;
END;
$$ LANGUAGE PLPGSQL;

That function is then used in another query to provide a nested JSON
containing the array:

SELECT
row.snt_code AS snt_code,
row.vdc AS vdc,
row.uuid AS uuid,
row_to_json(row, true) AS json
FROM (
SELECT
vm.*,
CONCAT('https://int.cloudcontrol.sgns.net/v3/customer/',
vm.snt_code, '/vdc/', vm.vdc, '/vm/', vm.uuid) AS self,
'cc.v3.sungardas.vm' AS type,
(get_virtual_interfaces(vm.vmid)) as interfaces
FROM virtual_machines vm
) row;

The outer level of JSON is pretty printed, but the content of the array
from the function is NOT, even though I have specified that it should be.
Any suggestions of how to address this?

Thanks in advance!

Deven


Re: [GENERAL] JSON pretty and selecting nested JSON fields

2015-03-30 Thread Deven Phillips
Here's an example of the JSON output I am getting:

{"customer_code":"abcd",
 "vdc":1241,
 "vmid":114778,
 "uuid":"421ea391-b292-ca2e-9a3a-6da3037748c8",
 "name":"vmname",
 "os":"Red Hat Enterprise Linux 6 (64-bit)",
 "service_type":"CU",
 "template_name":"",
 "self":"https://mysite.mydomain.tld/v3/customer/3mcc/vdc/1241/vm/421ea391-b292-ca2e-9a3a-6da3037748c8",
 "type":"cc.v3.sungardas.vm",
 "interfaces":[{"vlan": null, "vmid": 114778, "order": 1, "ip_address":
"10.129.114.45", "is_backend": true, "is_gateway": false, "is_reserved":
false, "mac_address": "00:50:56:9e:25:40"}, {"vlan": null, "vmid": 114778,
"order": 0, "ip_address": "10.137.154.212", "is_backend": true,
"is_gateway": false, "is_reserved": false, "mac_address":
"00:50:56:9e:25:3d"}]}

I would expect it to be:

{"customer_code":"abcd",
 "vdc":1241,
 "vmid":114778,
 "uuid":"421ea391-b292-ca2e-9a3a-6da3037748c8",
 "name":"vmname",
 "os":"Red Hat Enterprise Linux 6 (64-bit)",
 "service_type":"CU",
 "template_name":"",
 "self":"https://mysite.mydomain.tld/v3/customer/3mcc/vdc/1241/vm/421ea391-b292-ca2e-9a3a-6da3037748c8",
 "type":"cc.v3.vm",
 "interfaces":[
  {"vlan": null,
   "vmid": 114778,
   "order": 1,
   "ip_address": "10.129.114.45",
   "is_backend": true,
   "is_gateway": false,
   "is_reserved": false,
   "mac_address": "00:50:56:9e:25:40"
  }, {"vlan": null,
   "vmid": 114778,
   "order": 0,
   "ip_address": "10.137.154.212",
   "is_backend": true,
   "is_gateway": false,
   "is_reserved": false,
   "mac_address": "00:50:56:9e:25:3d"}]}
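
Since jsonb normalizes away whitespace, one workaround is to re-serialize
on the client side; a sketch in Python (not part of the original SQL, and
the sample document is illustrative):

```python
import json

# Compact JSON as it might come back from the database driver.
compact = '{"vmid": 114778, "interfaces": [{"order": 1}, {"order": 0}]}'

# Re-serialize with indentation; nested structures are prettified
# uniformly, regardless of how the server formatted them.
pretty = json.dumps(json.loads(compact), indent=2, sort_keys=True)
print(pretty)
```

The round-trip changes only whitespace and key order, never the data.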

On Mon, Mar 30, 2015 at 1:54 PM, Deven Phillips deven.phill...@gmail.com
wrote:

 Hi all,

 I have a query which selects several rows of data, and contained in
 one of those rows is some aggregated JSON data. I am using row_to_json() to
 make the whole output JSON, and I am providing "true" for pretty formatting
 of the JSON. The problem that I am seeing is that the nested JSON block is
 not being prettified along with the outer JSON.

 Example:

 I have a function which takes a single key param and returns a JSON array:

 CREATE OR REPLACE FUNCTION get_virtual_interfaces(vm_id BIGINT) RETURNS
 jsonb AS $$
 DECLARE
 res jsonb;
 BEGIN
 SELECT array_to_json(array_agg(row_to_json(i, true)), true)
 FROM (
 SELECT DISTINCT * FROM virtual_interfaces WHERE vmid=vm_id) i
 INTO res;
 RETURN res;
 END;
 $$ LANGUAGE PLPGSQL;

 That function is then used in another query to provide a nested JSON
 containing the array:

 SELECT
 row.snt_code AS snt_code,
 row.vdc AS vdc,
 row.uuid AS uuid,
 row_to_json(row, true) AS json
 FROM (
 SELECT
 vm.*,
 CONCAT('https://int.cloudcontrol.sgns.net/v3/customer/',
 vm.snt_code, '/vdc/', vm.vdc, '/vm/', vm.uuid) AS self,
 'cc.v3.sungardas.vm' AS type,
 (get_virtual_interfaces(vm.vmid)) as interfaces
 FROM virtual_machines vm
 ) row;

 The outer level of JSON is pretty printed, but the content of the array
 from the function is NOT, even though I have specified that it should be.
 Any suggestions of how to address this?

 Thanks in advance!

 Deven



Re: [GENERAL] Multi-table join and roll-up aggregate data into nested JSON?

2015-03-30 Thread Deven Phillips
Sorry, typo!!

CREATE OR REPLACE FUNCTION get_vm_with_interfaces(vm_id BIGINT) RETURNS
jsonb AS $$
DECLARE
res jsonb;
BEGIN
SELECT array_to_json(array_agg(row_to_json(i, true)), true)
FROM (
SELECT DISTINCT * FROM virtual_interfaces WHERE vmid=vm_id) i
INTO res;
RETURN res;
END;
$$ LANGUAGE PLPGSQL;

On Mon, Mar 30, 2015 at 5:22 PM, Deven Phillips deven.phill...@gmail.com
wrote:

 I'm using PostgreSQL 9.4.1 on Ubuntu 14.10.

 The function does the following:

 DROP FUNCTION get_vm_with_interfaces(vm_id BIGINT);

 CREATE OR REPLACE FUNCTION get_virtual_interfaces(vm_id BIGINT) RETURNS
 jsonb AS $$
 DECLARE
 res jsonb;
 BEGIN
 SELECT array_to_json(array_agg(row_to_json(i, true)), true)
 FROM (
 SELECT DISTINCT * FROM virtual_interfaces WHERE vmid=vm_id) i
 INTO res;
 RETURN res;
 END;
 $$ LANGUAGE PLPGSQL;

 Thanks for any help!

 Deven

 On Mon, Mar 30, 2015 at 3:25 PM, Merlin Moncure mmonc...@gmail.com
 wrote:

 On Mon, Mar 30, 2015 at 6:27 AM, Deven Phillips
 deven.phill...@gmail.com wrote:
  I have a multi-table join which will return 1 row per interface and I
  would like to aggregate the interfaces as a nested section on the
 resulting
  JSON output. Could someone help me to figure out how to accomplish that?
 
  Example:
 
  SELECT row_to_json(row) AS json
  FROM (
  SELECT
  c.snt_code AS snt_code,
  vdc.id AS vdc,
  vm.uuid AS uuid,
  vm.name AS name,
  vm.os AS os,
  vm.service_type AS service_type,
  vm.template_name AS template_name,
  vi.virtual_machine_id AS vmid,
  vi.mac_address AS mac_address,
  vi.order AS sort,
  ip.address AS ip_address,
  ip.is_reserved AS is_reserved,
  ip.is_gateway AS is_gateway,
  vlan.vlan_id AS vlan,
  false AS is_backend
  FROM customer c
  LEFT JOIN virtualdatacenter vdc ON c.id=vdc.customer_id
  LEFT JOIN virtualmachine vm ON vm.virtual_data_center_id=vdc.id
  LEFT JOIN virtualinterface vi ON vm.id=vi.virtual_machine_id
  INNER JOIN ipaddress ip ON vi.id=ip.virtual_interface_id
  INNER JOIN virtuallan vlan ON ip.network_id=vlan.id
  WHERE c.snt_code='abcd' AND vdc.id=111 AND
  vm.uuid='422a141f-5e46-b0f2-53b8-e31070c883ed'
  ) row
 
 
  The output is 2 rows of JSON data, but I would like to roll up those 2
 rows
  so that the 2 virtualinterfaces are in a nested JSON field called
  interfaces... The only way I have found to accomplish this so far is
 to
  use a function to grab the joined interface data like:
 
  SELECT row_to_json(row) AS json
  FROM (
  SELECT
  c.snt_code AS snt_code,
  vdc.id AS vdc,
  vm.uuid AS uuid,
  vm.name AS name,
  vm.os AS os,
  vm.service_type AS service_type,
  vm.template_name AS template_name,
  (get_vm_with_interfaces(vm.id)) as interfaces
  FROM liquorstore_customer c
  LEFT JOIN liquorstore_virtualdatacenter vdc ON c.id=vdc.customer_id
  LEFT JOIN liquorstore_virtualmachine vm ON
  vm.virtual_data_center_id=vdc.id
  WHERE c.snt_code='abcd' AND vdc.id=111 AND
  vm.uuid='422a141f-5e46-b0f2-53b8-e31070c883ed'
  ) row
 
 
  Is there a way to do this in a single join?

 sure! what is get_vm_with_interfaces doing?

 also, postgres version is extremely relevant here. It's possible to do
 it in 9.2+, but the solution in 9.4 is very different due to the
 presence of json_build().

 At a very high level, you can aggregate arbitrary records into arrays
 and those arrays will automatically be converted into json arrays by
 row_to_json.   The three basic mechanisms of making arrays are
 array_agg(), array(), and array[] -- array[] however is pretty much
 only useful when dealing with a fixed set of values.

 For example, here is a query that makes an internal nested array:

 select row_to_json(q)
 from
 (
   select v % 3 as k, array_agg(v)
   from (select generate_series(1,10) v) q group by 1
 ) q;


 merlin





Re: [GENERAL] Multi-table join and roll-up aggregate data into nested JSON?

2015-03-30 Thread Deven Phillips
I'm using PostgreSQL 9.4.1 on Ubuntu 14.10.

The function does the following:

DROP FUNCTION get_vm_with_interfaces(vm_id BIGINT);

CREATE OR REPLACE FUNCTION get_virtual_interfaces(vm_id BIGINT) RETURNS
jsonb AS $$
DECLARE
res jsonb;
BEGIN
SELECT array_to_json(array_agg(row_to_json(i, true)), true)
FROM (
SELECT DISTINCT * FROM virtual_interfaces WHERE vmid=vm_id) i
INTO res;
RETURN res;
END;
$$ LANGUAGE PLPGSQL;

Thanks for any help!

Deven

On Mon, Mar 30, 2015 at 3:25 PM, Merlin Moncure mmonc...@gmail.com wrote:

 On Mon, Mar 30, 2015 at 6:27 AM, Deven Phillips
 deven.phill...@gmail.com wrote:
  I have a multi-table join which will return 1 row per interface and I
  would like to aggregate the interfaces as a nested section on the
 resulting
  JSON output. Could someone help me to figure out how to accomplish that?
 
  Example:
 
  SELECT row_to_json(row) AS json
  FROM (
  SELECT
  c.snt_code AS snt_code,
  vdc.id AS vdc,
  vm.uuid AS uuid,
  vm.name AS name,
  vm.os AS os,
  vm.service_type AS service_type,
  vm.template_name AS template_name,
  vi.virtual_machine_id AS vmid,
  vi.mac_address AS mac_address,
  vi.order AS sort,
  ip.address AS ip_address,
  ip.is_reserved AS is_reserved,
  ip.is_gateway AS is_gateway,
  vlan.vlan_id AS vlan,
  false AS is_backend
  FROM customer c
  LEFT JOIN virtualdatacenter vdc ON c.id=vdc.customer_id
  LEFT JOIN virtualmachine vm ON vm.virtual_data_center_id=vdc.id
  LEFT JOIN virtualinterface vi ON vm.id=vi.virtual_machine_id
  INNER JOIN ipaddress ip ON vi.id=ip.virtual_interface_id
  INNER JOIN virtuallan vlan ON ip.network_id=vlan.id
  WHERE c.snt_code='abcd' AND vdc.id=111 AND
  vm.uuid='422a141f-5e46-b0f2-53b8-e31070c883ed'
  ) row
 
 
  The output is 2 rows of JSON data, but I would like to roll up those 2
 rows
  so that the 2 virtualinterfaces are in a nested JSON field called
  interfaces... The only way I have found to accomplish this so far is to
  use a function to grab the joined interface data like:
 
  SELECT row_to_json(row) AS json
  FROM (
  SELECT
  c.snt_code AS snt_code,
  vdc.id AS vdc,
  vm.uuid AS uuid,
  vm.name AS name,
  vm.os AS os,
  vm.service_type AS service_type,
  vm.template_name AS template_name,
  (get_vm_with_interfaces(vm.id)) as interfaces
  FROM liquorstore_customer c
  LEFT JOIN liquorstore_virtualdatacenter vdc ON c.id=vdc.customer_id
  LEFT JOIN liquorstore_virtualmachine vm ON
  vm.virtual_data_center_id=vdc.id
  WHERE c.snt_code='abcd' AND vdc.id=111 AND
  vm.uuid='422a141f-5e46-b0f2-53b8-e31070c883ed'
  ) row
 
 
  Is there a way to do this in a single join?

 sure! what is get_vm_with_interfaces doing?

 also, postgres version is extremely relevant here. It's possible to do
 it in 9.2+, but the solution in 9.4 is very different due to the
 presence of json_build().

 At a very high level, you can aggregate arbitrary records into arrays
 and those arrays will automatically be converted into json arrays by
 row_to_json.   The three basic mechanisms of making arrays are
 array_agg(), array(), and array[] -- array[] however is pretty much
 only useful when dealing with a fixed set of values.

 For example, here is a query that makes an internal nested array:

 select row_to_json(q)
 from
 (
   select v % 3 as k, array_agg(v)
   from (select generate_series(1,10) v) q group by 1
 ) q;


 merlin



Re: [GENERAL] Multi-table join and roll-up aggregate data into nested JSON?

2015-03-30 Thread Deven Phillips
I have already attempted a similar approach and I could not find a way to
pass the outer value of the VM ID to the inner SELECT. For example:

SELECT
row.snt_code AS snt_code,
row.vdc AS vdc,
row.uuid AS uuid,
row_to_json(row, true)::json AS json
FROM (
SELECT
vm.*,
CONCAT('https://int.cloudcontrol.sgns.net/v3/customer/',
vm.snt_code, '/vdc/', vm.vdc, '/vm/', vm.uuid) AS self,
'cc.v3.sungardas.vm' AS type,
(SELECT DISTINCT * FROM virtual_interfaces WHERE vmid=vm.id) as interfaces
FROM virtual_machines vm
) row;

Placing the vm.id value there for the WHERE clause gives the error:

SQL Error [42703]: ERROR: column vm.id does not exist
  Position: 351
  ERROR: column vm.id does not exist
  Position: 351

Is there some way to make that value available to the inner select?

Thanks in advance!

Deven

On Mon, Mar 30, 2015 at 5:46 PM, Merlin Moncure mmonc...@gmail.com wrote:

 On Mon, Mar 30, 2015 at 4:22 PM, Deven Phillips
 deven.phill...@gmail.com wrote:
  I'm using PostgreSQL 9.4.1 on Ubuntu 14.10.
 
  The function does the following:
 
  DROP FUNCTION get_vm_with_interfaces(vm_id BIGINT);
 
  CREATE OR REPLACE FUNCTION get_virtual_interfaces(vm_id BIGINT) RETURNS
  jsonb AS $$
  DECLARE
  res jsonb;
  BEGIN
  SELECT array_to_json(array_agg(row_to_json(i, true)), true)
  FROM (
  SELECT DISTINCT * FROM virtual_interfaces WHERE vmid=vm_id) i
  INTO res;
  RETURN res;
  END;
  $$ LANGUAGE PLPGSQL;

 please, try to refrain from top posting.  particularly with emails
 like this where the context of the question is important.  Anyways,
 your inner function could be trivially inlined as so:

 SELECT row_to_json(row) AS json
 FROM (
 SELECT
 c.snt_code AS snt_code,
 vdc.id AS vdc,
 vm.uuid AS uuid,
 vm.name AS name,
 vm.os AS os,
 vm.service_type AS service_type,
 vm.template_name AS template_name,
 ( -- get_vm_with_interfaces(vm.id)
   SELECT array_to_json(array_agg(row_to_json(i, true)), true)
   FROM (
 SELECT DISTINCT * FROM virtual_interfaces vi WHERE vmid=vm_id
 ) i
   ) as interfaces
 FROM liquorstore_customer c
 LEFT JOIN liquorstore_virtualdatacenter vdc ON c.id=vdc.customer_id
 LEFT JOIN liquorstore_virtualmachine vm ON vm.virtual_data_center_id=
 vdc.id
 WHERE c.snt_code='abcd' AND vdc.id=111 AND
 vm.uuid='422a141f-5e46-b0f2-53b8-e31070c883ed'
 ) row

 I would personally simplify the subquery portion to:
 ( -- get_vm_with_interfaces(vm.id)
   SELECT array_agg(i)
   FROM (
 SELECT DISTINCT * FROM virtual_interfaces vi WHERE vmid=vm_id
 ) i

 , allowing for the outer 'to_json' to  handle the final
 transformation.  I'm not going to do it for you, but you could
 probably simplify the query even further by moving the aggregation out
 of a correlated subquery and into the basic field list, which would be
 faster for certain distributions of data.

 Also, a note about jsonb, which you used inside the inner function.
 jsonb is much better than type 'json' for any case involving
 manipulation of the json, searching, or repeated sub-document
 extraction.  However, for serialization to an application, it is
 basically pessimal as it involves building up internal structures that
 the vanilla json type does not involve. The basic rule of thumb is: for
 serialization, json; for everything else, jsonb.

 merlin



[GENERAL] json-patch support?

2015-03-27 Thread Deven Phillips
Are there any plans or ideas about implementing JSON Patch (
http://jsonpatch.com/) support for PostgreSQL? We deal with some relatively
large JSON documents for our in-house application and it is often better to
just send a json-patch update rather than the full document. It would be
very nice if we could just select for the changes via a trigger and use
NOTIFY to tell our application about a patch. If nobody has discussed it
previously, perhaps I will look into implementing it myself.
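
For readers unfamiliar with the format: an RFC 6902 patch is a JSON array
of operations. A minimal Python sketch of the semantics (illustrative
only; it handles just add/remove/replace and ignores the "~"/"/" escaping
rules for JSON Pointers):

```python
import copy

def apply_patch(doc, patch):
    """Apply a list of RFC 6902-style operations to a JSON-like document."""
    doc = copy.deepcopy(doc)
    for op in patch:
        # Split the JSON Pointer into parent path and final token.
        *parents, last = [p for p in op["path"].split("/") if p != ""]
        target = doc
        for key in parents:
            target = target[int(key)] if isinstance(target, list) else target[key]
        if isinstance(target, list):
            idx = len(target) if last == "-" else int(last)
            if op["op"] == "add":
                target.insert(idx, op["value"])
            elif op["op"] == "remove":
                del target[idx]
            elif op["op"] == "replace":
                target[idx] = op["value"]
        else:
            if op["op"] == "remove":
                del target[last]
            else:  # "add" and "replace" both assign on objects
                target[last] = op["value"]
    return doc

doc = {"name": "vm1", "tags": ["a"]}
patch = [{"op": "replace", "path": "/name", "value": "vm2"},
         {"op": "add", "path": "/tags/-", "value": "b"}]
print(apply_patch(doc, patch))  # {'name': 'vm2', 'tags': ['a', 'b']}
```

Transmitting a few such operations is far smaller than resending a large
document, which is the motivation above.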

Thanks in advance,

Deven


Re: [GENERAL] json-patch support?

2015-03-27 Thread Deven Phillips
OK, then I will look into perhaps implementing it as a pl-python or pl-java
function. Thanks for the advice!!

Deven

On Fri, Mar 27, 2015 at 2:40 PM, Merlin Moncure mmonc...@gmail.com wrote:

 On Fri, Mar 27, 2015 at 1:36 PM, Arthur Silva arthur...@gmail.com wrote:
  On Fri, Mar 27, 2015 at 1:56 PM, Deven Phillips 
 deven.phill...@gmail.com
  wrote:
 
  Are there any plans or ideas about implement JSON Patch
  (http://jsonpatch.com/) support for PostgreSQL? We deal with some
 relatively
  large JSON documents for our in-house application and it is often
 better to
  just send a json-patch update rather than the full document. It would be
  very nice if we could just select for the changes via a trigger and use
  NOTIFY to tell our application about a patch. If nobody has discussed it
  previously, perhaps I will look into implementing it myself.
 
  Thanks in advance,
 
  Deven
 
 
  This could be implemented as an extension.
  There're already a few extensions that provide this functionality with
 plain
  functions, so it's just a matter of parsing the json and executing those
  functions.


 Right.  If it was me, I'd shoot for a userland (that is, in sql or
 pl/pgsql) implementation that wraps the existing json APIs to get the
 desired result.   From there, could determine if a more optimized
 version in C was warranted.
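
 As of 9.5 a core building block for such a userland implementation
 exists; a sketch (the jsonb literal is arbitrary):

 ```sql
 -- jsonb_set(target, path, new_value) covers the JSON Patch "replace"
 -- case (and "add" for missing keys, via its create_missing default);
 -- a userland json-patch function can loop over the operations array
 -- and dispatch to it.
 SELECT jsonb_set('{"name": "vm1", "os": "linux"}'::jsonb,
                  '{name}', '"vm2"');
 -- => {"os": "linux", "name": "vm2"}
 ```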

 merlin



[GENERAL] Alias field names in foreign data wrapper?

2015-03-27 Thread Deven Phillips
I am trying out some ideas using FDW, and I have added some FDW tables
which access a backend MySQL DB... Unfortunately, I am getting some errors
because of fields names with reserved words. I was wondering if there is a
way to alias a field name when creating the foreign table?

Thanks in advance!!!

Deven


Re: [GENERAL] Alias field names in foreign data wrapper?

2015-03-27 Thread Deven Phillips
Yes. Here's the MySQL schema:

CREATE TABLE `liquorstore_backendipaddress` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `backend_network_id` int(11) DEFAULT NULL,
  `backend_virtual_interface_id` int(11) DEFAULT NULL,
  `address` varchar(15) NOT NULL,
  `is_gateway` tinyint(1) NOT NULL,
  `is_reserved` tinyint(1) NOT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `backend_network_id` (`backend_network_id`,`address`),
  KEY `liquorstore_backendipaddress_4a648124` (`backend_network_id`),
  KEY `liquorstore_backendipaddress_27235e4d`
(`backend_virtual_interface_id`),
  CONSTRAINT `backend_network_id_refs_id_1d869576` FOREIGN KEY
(`backend_network_id`) REFERENCES `liquorstore_backendnetwork` (`id`),
  CONSTRAINT `backend_virtual_interface_id_refs_id_b058eaeb` FOREIGN KEY
(`backend_virtual_interface_id`) REFERENCES
`liquorstore_backendvirtualinterface` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=72184 DEFAULT CHARSET=latin1

On Fri, Mar 27, 2015 at 5:19 PM, Adrian Klaver adrian.kla...@aklaver.com
wrote:

 On 03/27/2015 01:55 PM, Deven Phillips wrote:

 Better example of the problem... My FDW table schema is:

 CREATE FOREIGN TABLE liquorstore_backendipaddress (
  id bigint NOT NULL,
  backend_network_id bigint,
  backend_virtual_interface_id bigint,
  address character varying(15) NOT NULL,
  is_gateway boolean NOT NULL,
  is_reserved boolean NOT NULL
 )
 SERVER edison
 OPTIONS (
  dbname 'edison',
  table_name 'liquorstore_backendvirtualinterface'
 );


 But when I run the following query:

 SELECT
  *
 FROM liquorstore_backendipaddress

 I get an error:

 ccedison=# SELECT * FROM liquorstore_backendipaddress;
 ERROR:  failed to prepare the MySQL query:
 Unknown column 'backend_network_id' in 'field list'


 So is 'backend_network_id' in the MySQL table?


 I still cannot figure out what the problem might be so any help would be
 GREATLY appreciated.

 Thanks,

 Deven

 On Fri, Mar 27, 2015 at 4:15 PM, Deven Phillips
 deven.phill...@gmail.com mailto:deven.phill...@gmail.com wrote:

 I am trying out some ideas using FDW, and I have added some FDW
 tables which access a backend MySQL DB... Unfortunately, I am
 getting some errors because of fields names with reserved words. I
 was wondering if there is a way to alias a field name when
 creating the foreign table?

 Thanks in advance!!!

 Deven




 --
 Adrian Klaver
 adrian.kla...@aklaver.com



Re: [GENERAL] Alias field names in foreign data wrapper?

2015-03-27 Thread Deven Phillips
Better example of the problem... My FDW table schema is:

CREATE FOREIGN TABLE liquorstore_backendipaddress (
id bigint NOT NULL,
backend_network_id bigint,
backend_virtual_interface_id bigint,
address character varying(15) NOT NULL,
is_gateway boolean NOT NULL,
is_reserved boolean NOT NULL
)
SERVER edison
OPTIONS (
dbname 'edison',
table_name 'liquorstore_backendvirtualinterface'
);


But when I run the following query:

SELECT
*
FROM liquorstore_backendipaddress

I get an error:

ccedison=# SELECT * FROM liquorstore_backendipaddress;
ERROR:  failed to prepare the MySQL query:
Unknown column 'backend_network_id' in 'field list'

I still cannot figure out what the problem might be so any help would be
GREATLY appreciated.

Thanks,

Deven

On Fri, Mar 27, 2015 at 4:15 PM, Deven Phillips deven.phill...@gmail.com
wrote:

 I am trying out some ideas using FDW, and I have added some FDW tables
 which access a backend MySQL DB... Unfortunately, I am getting some errors
 because of fields names with reserved words. I was wondering if there is a
 way to alias a field name when creating the foreign table?

 Thanks in advance!!!

 Deven



Re: [GENERAL] Alias field names in foreign data wrapper?

2015-03-27 Thread Deven Phillips
OK, one of the devs on the GitHub page for that FDW helped me realize that
I had accidentally mapped the wrong table! Doh!

Deven

On Fri, Mar 27, 2015 at 5:31 PM, Adrian Klaver adrian.kla...@aklaver.com
wrote:

 On 03/27/2015 02:20 PM, Deven Phillips wrote:

 Yes. Here's the MySQL schema:

 CREATE TABLE `liquorstore_backendipaddress` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`backend_network_id` int(11) DEFAULT NULL,
`backend_virtual_interface_id` int(11) DEFAULT NULL,
`address` varchar(15) NOT NULL,
`is_gateway` tinyint(1) NOT NULL,
`is_reserved` tinyint(1) NOT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `backend_network_id` (`backend_network_id`,`address`),
KEY `liquorstore_backendipaddress_4a648124` (`backend_network_id`),
KEY `liquorstore_backendipaddress_27235e4d`
 (`backend_virtual_interface_id`),
CONSTRAINT `backend_network_id_refs_id_1d869576` FOREIGN KEY
 (`backend_network_id`) REFERENCES `liquorstore_backendnetwork` (`id`),
CONSTRAINT `backend_virtual_interface_id_refs_id_b058eaeb` FOREIGN
 KEY (`backend_virtual_interface_id`) REFERENCES
 `liquorstore_backendvirtualinterface` (`id`)
 ) ENGINE=InnoDB AUTO_INCREMENT=72184 DEFAULT CHARSET=latin1


 I refer you to here:

 http://dev.mysql.com/doc/refman/5.0/en/identifier-case-sensitivity.html

 When I have had problems with MySQL not seeing names I knew to be there it
 usually had something to do with the settings above.

 --
 Adrian Klaver
 adrian.kla...@aklaver.com