Re: [GENERAL] Multi-table join and roll-up aggregate data into nested JSON?

2015-03-30 Thread Deven Phillips
I have already attempted a similar approach and I could not find a way to
pass the outer value of the VM ID to the inner SELECT. For example:

SELECT
row.snt_code AS "snt_code",
row.vdc AS "vdc",
row.uuid AS "uuid",
row_to_json(row, true)::json AS "json"
FROM (
SELECT
vm.*,
CONCAT('https://int.cloudcontrol.sgns.net/v3/customer/',
vm.snt_code, '/vdc/', vm.vdc, '/vm/', vm.uuid) AS "self",
'cc.v3.sungardas.vm' AS "type",
(SELECT DISTINCT * FROM virtual_interfaces WHERE vmid=vm.id) as interfaces
FROM virtual_machines vm
) row;

Placing the vm.id value there for the WHERE clause gives the error:

SQL Error [42703]: ERROR: column vm.id does not exist
  Position: 351

Is there some way to make that value available to the inner select?

Thanks in advance!

Deven
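
For reference, a LATERAL subquery (available since 9.3) can see columns of
FROM items to its left, which is one way to pass the outer VM id into the
inner SELECT. A minimal sketch, assuming the vmid column used elsewhere in
this thread:

SELECT vm.*, i.interfaces
FROM virtual_machines vm
LEFT JOIN LATERAL (
    -- vm.vmid is visible here because of LATERAL
    SELECT array_to_json(array_agg(row_to_json(x, true)), true) AS interfaces
    FROM (SELECT DISTINCT * FROM virtual_interfaces WHERE vmid = vm.vmid) x
) i ON true;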

On Mon, Mar 30, 2015 at 5:46 PM, Merlin Moncure  wrote:

> On Mon, Mar 30, 2015 at 4:22 PM, Deven Phillips
>  wrote:
> > I'm using PostgreSQL 9.4.1 on Ubuntu 14.10.
> >
> > The function does the following:
> >
> > DROP FUNCTION get_vm_with_interfaces(vm_id BIGINT);
> >
> > CREATE OR REPLACE FUNCTION get_virtual_interfaces(vm_id BIGINT) RETURNS
> > jsonb AS $$
> > DECLARE
> > res jsonb;
> > BEGIN
> > SELECT array_to_json(array_agg(row_to_json(i, true)), true)
> > FROM (
> > SELECT DISTINCT * FROM virtual_interfaces WHERE vmid=vm_id) i
> > INTO res;
> > RETURN res;
> > END;
> > $$ LANGUAGE PLPGSQL;
>
> Please try to refrain from top posting, particularly with emails
> like this where the context of the question is important. Anyway,
> your inner function could be trivially inlined like so:
>
> SELECT row_to_json(row) AS json
> FROM (
> SELECT
> c.snt_code AS "snt_code",
> vdc.id AS "vdc",
> vm.uuid AS "uuid",
> vm.name AS "name",
> vm.os AS "os",
> vm.service_type AS "service_type",
> vm.template_name AS "template_name",
> ( -- get_vm_with_interfaces(vm.id)
>   SELECT array_to_json(array_agg(row_to_json(i, true)), true)
>   FROM (
> SELECT DISTINCT * FROM virtual_interfaces vi WHERE vmid=vm_id
> ) i
>   ) as interfaces
> FROM liquorstore_customer c
> LEFT JOIN liquorstore_virtualdatacenter vdc ON c.id=vdc.customer_id
> LEFT JOIN liquorstore_virtualmachine vm ON vm.virtual_data_center_id=
> vdc.id
> WHERE c.snt_code='abcd' AND vdc.id=111 AND
> vm.uuid='422a141f-5e46-b0f2-53b8-e31070c883ed'
> ) row
>
> I would personally simplify the subquery portion to:
> ( -- get_vm_with_interfaces(vm.id)
>   SELECT array_agg(i)
>   FROM (
> SELECT DISTINCT * FROM virtual_interfaces vi WHERE vmid=vm_id
> ) i
>   ) as interfaces
>
> , allowing for the outer 'to_json' to  handle the final
> transformation.  I'm not going to do it for you, but you could
> probably simplify the query even further by moving the aggregation out
> of a correlated subquery and into the basic field list, which would be
> faster for certain distributions of data.
>
> Also, a note about jsonb, which you used inside the inner function.
> jsonb is much better than type 'json' for any case involving
> manipulation of the json, searching, or repeated sub-document
> extraction.  However, for serialization to an application, it is
> basically pessimal as it involves building up internal structures that
> the vanilla json type does not involve. The basic rule of thumb is: for
> serialization, json; for everything else, jsonb.
>
> merlin
>


Re: [GENERAL] unrecognized configuration parameter "bdr.connections"

2015-03-30 Thread Ian Barwick
Hi

On 15/03/31 4:59, negrus wrote:
> Hi all, I did the installation of BDR according to the documentation:
> 
> https://wiki.postgresql.org/wiki/BDR_Packages
> 
> I have set the following parameters:
> 
>   max_replication_slots = 3
>   max_wal_senders = 4
>   wal_level = 'logical'
>   track_commit_timestamp = on
>   shared_preload_libraries = 'bdr'
>   max_worker_processes = 10
>   bdr.connections = 'master2'
>   bdr.master2_dsn = 'dbname = master user = postgres port = 5432'
> 
> with this package :
> 
> postgresql-bdr94-server-9.4.1_bdr2-1_2ndQuadrant.el7.centos.x86_64
> postgresql-bdr94-2ndquadrant-redhat-1.0-2.noarch
> postgresql-bdr94-9.4.1_bdr2-1_2ndQuadrant.el7.centos.x86_64
> postgresql-bdr94-contrib-9.4.1_bdr2-1_2ndQuadrant.el7.centos.x86_64
> postgresql-bdr94-libs-9.4.1_bdr2-1_2ndQuadrant.el7.centos.x86_64
> postgresql-bdr94-bdr-0.9.0-1_2ndQuadrant.el7.centos.x86_64
>
> 
> But when I try to start it up, BDR is not working:
> 
> < 2015-03-30 15:57:53.957 PYT >LOG:  registering background worker "bdr
> supervisor"
> < 2015-03-30 15:57:53.957 PYT >WARNING:  unrecognized configuration
> parameter "bdr.connections"
> < 2015-03-30 15:57:53.957 PYT >WARNING:  unrecognized configuration
> parameter "bdr.master2_dsn"
> < 2015-03-30 15:57:53.978 PYT >LOG:  redirecting log output to logging
> collector process

As of release 0.9.0 the connection parameters are no longer defined in
postgresql.conf; if present the parameters will be ignored and a warning
issued, like you saw here.
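
In 0.9.0 the node connections are instead configured at the SQL level,
roughly as follows (a sketch only; the authoritative call signatures are in
the quickstart pages below):

  CREATE EXTENSION btree_gist;
  CREATE EXTENSION bdr;

  -- on the first node
  SELECT bdr.bdr_group_create(
      local_node_name := 'master1',
      node_external_dsn := 'host=master1 dbname=master');

  -- on each node joining the group
  SELECT bdr.bdr_group_join(
      local_node_name := 'master2',
      node_external_dsn := 'host=master2 dbname=master',
      join_using_dsn := 'host=master1 dbname=master');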

See the documentation for further details, particularly:

  http://bdr-project.org/docs/0.9/release-0.9.0.html
  http://bdr-project.org/docs/0.9.0/quickstart-editing.html
  http://bdr-project.org/docs/0.9.0/quickstart-enabling.html

Regards

Ian Barwick

-- 
 Ian Barwick   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, RemoteDBA, Training & Services




Re: [GENERAL] Link Office Word form document with data from PostgreSQL

2015-03-30 Thread Stephen Davies
OpenOffice etc. can populate fields via JDBC from any database, including 
PostgreSQL.


Cheers,
Stephen

On 30/03/15 19:52, Tim Clarke wrote:

Two options that I know of:
1) Use Java and the Apache POI project
2) Write a file containing the data in some other format like HTML then
open that with Word.

Tim Clarke

On 30/03/15 09:20, avpro avpro wrote:


Hi all,

Have you ever experienced how to populate fields from a MS Office Word
document with a PostgreSQL data available either in a table or view?

I haven’t seen anything on the web; only possible with MS products, VS
or Access.

Thank you for your input








--
=
Stephen Davies Consulting P/L Phone: 08-8177 1595
Adelaide, South Australia.    Mobile: 040 304 0583
Records & Collections Management.




Re: [SQL] Re: [GENERAL] Link Office Word form document with data from PostgreSQL

2015-03-30 Thread Ken Tanzer
On Mon, Mar 30, 2015 at 1:51 PM, avpro avpro  wrote:

> thank you all for your responses.
>
> I think the easiest option for me would be to use the ODBC connection. Now
> my problem is where to start with the VBA code; but that's for another list.
> cheers
>
> On 30 March 2015 at 19:51, Vincent Veyron  wrote:
>
>> On Mon, 30 Mar 2015 17:37:28 +0200
>> Vincenzo Campanella  wrote:
>> Stirling Newberry  wrote:
>>
>> >post the code
>>
>> Here you go :
>>
>> http://pastebin.com/eVXZiFws
>>
>>
>> This is a mod_perl module used in the HeaderParser phase of the Apache
>> request cycle; it first grabs the sql associated with the model, and passes
>> parameters to it; this sql is used to retrieve the required values for the
>> model, and the placeholders in the document are replaced with those values.
>> The user is then redirected to the newly created document
>>
>> You can see it in action in the demo account of the site referenced in my
>> signature : open any file (those are legal files/insurance claims), click
>> on the 'Contributors' tab (third tab from left; add a contributor if none
>> is present), and click on 'Templates'. Click on the template to generate a
>> document
>>
>>
>> --
>> Salutations, Vincent Veyron
>>
>> https://legalcase.libremen.com/
>> Legal case, contract and insurance claim management software
>>
>
>
If PHP is an option, you might want to look at the Tiny But Strong template
system (http://www.tinybutstrong.com/), along with the OpenTBS plugin (
http://www.tinybutstrong.com/plugins/opentbs/demo/demo.html).

We used it to replace some homegrown code similar to what Vincent described
that directly manipulated the document.  There are enough idiosyncrasies
among the file formats that I've been much happier using the TBS code.  I
think this is especially true if you are merging multiple records into a
single, repeating template document as opposed to doing a single set of
replacements.

Cheers,
Ken

-- 
AGENCY Software
A Free Software data system
By and for non-profits
http://agency-software.org/
https://agency-software.org/demo/client
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list to learn more about AGENCY or follow the
discussion.


Re: [GENERAL] JSON "pretty" and selecting nested JSON fields

2015-03-30 Thread Merlin Moncure
On Mon, Mar 30, 2015 at 3:30 PM, Adrian Klaver
 wrote:
> On 03/30/2015 10:54 AM, Deven Phillips wrote:
>>
>> Hi all,
>>
>>  I have a query which selects several rows of data, and contained in
>> one of those rows is some aggregated JSON data. I am using row_to_json()
>> to make the whole output JSON and I am providing "true" for pretty
>> formatting of the JSON. The problem that I am seeing is that the nested
>> JSON block is not being prettified along with the outer JSON.
>>
>> Example:
>>
>> I have a function which takes a single key param and returns a JSON array:
>>
>> CREATE OR REPLACE FUNCTION get_virtual_interfaces(vm_id BIGINT) RETURNS
>> jsonb AS $$
>> DECLARE
>>  res jsonb;
>> BEGIN
>> SELECT array_to_json(array_agg(row_to_json(i, true)), true)
>>  FROM (
>>  SELECT DISTINCT * FROM virtual_interfaces WHERE vmid=vm_id)
>> i INTO res;
>>  RETURN res;
>> END;
>> $$ LANGUAGE PLPGSQL;
>>
>> That function is then used in another query to provide a nested JSON
>> containing the array:
>>
>> SELECT
>>  row.snt_code AS "snt_code",
>>  row.vdc AS "vdc",
>>  row.uuid AS "uuid",
>>  row_to_json(row, true) AS "json"
>> FROM (
>>  SELECT
>>  vm.*,
>>  CONCAT('https://int.cloudcontrol.sgns.net/v3/customer/',
>> vm.snt_code, '/vdc/', vm.vdc, '/vm/', vm.uuid) AS "self",
>>  'cc.v3.sungardas.vm' AS "type",
>>  (get_virtual_interfaces(vm.vmid)) as interfaces
>>  FROM virtual_machines vm
>> ) row;
>>
>> The outer level of JSON is "pretty printed", but the content of the
>> array from the function is NOT, even though I have specified that it
>> should be. Any suggestions of how to address this?
>
>
> Well it is documented:
>
> http://www.postgresql.org/docs/9.4/interactive/functions-json.html
>
> row_to_json(record [, pretty_bool]) Returns the row as a JSON object.
> Line feeds will be added between *level-1* elements if pretty_bool is true.
>
> I would say post a feature request on --hackers, or ask if work is being
> done on this.


Yeah, also, the OP's problem was made worse by using 'jsonb' inside
the function; jsonb ignores any whitespace formatting (as opposed to
json).

merlin




Re: [GENERAL] Multi-table join and roll-up aggregate data into nested JSON?

2015-03-30 Thread Merlin Moncure
On Mon, Mar 30, 2015 at 4:22 PM, Deven Phillips
 wrote:
> I'm using PostgreSQL 9.4.1 on Ubuntu 14.10.
>
> The function does the following:
>
> DROP FUNCTION get_vm_with_interfaces(vm_id BIGINT);
>
> CREATE OR REPLACE FUNCTION get_virtual_interfaces(vm_id BIGINT) RETURNS
> jsonb AS $$
> DECLARE
> res jsonb;
> BEGIN
> SELECT array_to_json(array_agg(row_to_json(i, true)), true)
> FROM (
> SELECT DISTINCT * FROM virtual_interfaces WHERE vmid=vm_id) i
> INTO res;
> RETURN res;
> END;
> $$ LANGUAGE PLPGSQL;

Please try to refrain from top posting, particularly with emails
like this where the context of the question is important. Anyway,
your inner function could be trivially inlined like so:

SELECT row_to_json(row) AS json
FROM (
SELECT
c.snt_code AS "snt_code",
vdc.id AS "vdc",
vm.uuid AS "uuid",
vm.name AS "name",
vm.os AS "os",
vm.service_type AS "service_type",
vm.template_name AS "template_name",
( -- get_vm_with_interfaces(vm.id)
  SELECT array_to_json(array_agg(row_to_json(i, true)), true)
  FROM (
SELECT DISTINCT * FROM virtual_interfaces vi WHERE vmid=vm_id
) i
  ) as interfaces
FROM liquorstore_customer c
LEFT JOIN liquorstore_virtualdatacenter vdc ON c.id=vdc.customer_id
LEFT JOIN liquorstore_virtualmachine vm ON vm.virtual_data_center_id=vdc.id
WHERE c.snt_code='abcd' AND vdc.id=111 AND
vm.uuid='422a141f-5e46-b0f2-53b8-e31070c883ed'
) row

I would personally simplify the subquery portion to:
( -- get_vm_with_interfaces(vm.id)
  SELECT array_agg(i)
  FROM (
SELECT DISTINCT * FROM virtual_interfaces vi WHERE vmid=vm_id
) i
  ) as interfaces

, allowing for the outer 'to_json' to  handle the final
transformation.  I'm not going to do it for you, but you could
probably simplify the query even further by moving the aggregation out
of a correlated subquery and into the basic field list, which would be
faster for certain distributions of data.
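
A sketch of that de-correlated form, reusing the join and column names from
the query above (the vi join key is an assumption; note that array_agg over
a LEFT JOIN yields a single NULL element for VMs with no interfaces):

SELECT row_to_json(row) AS json
FROM (
    SELECT
        c.snt_code AS "snt_code",
        vdc.id AS "vdc",
        vm.uuid AS "uuid",
        array_agg(vi) AS interfaces   -- whole rows, rendered as JSON objects
    FROM liquorstore_customer c
    LEFT JOIN liquorstore_virtualdatacenter vdc ON c.id=vdc.customer_id
    LEFT JOIN liquorstore_virtualmachine vm ON vm.virtual_data_center_id=vdc.id
    LEFT JOIN virtual_interfaces vi ON vi.vmid=vm.id
    WHERE c.snt_code='abcd' AND vdc.id=111
    GROUP BY c.snt_code, vdc.id, vm.uuid
) row;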

Also, a note about jsonb, which you used inside the inner function.
jsonb is much better than type 'json' for any case involving
manipulation of the json, searching, or repeated sub-document
extraction.  However, for serialization to an application, it is
basically pessimal as it involves building up internal structures that
the vanilla json type does not involve. The basic rule of thumb is: for
serialization, json; for everything else, jsonb.
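
A quick illustration of the difference: json keeps the input text verbatim,
while jsonb re-serializes from a parsed form and normalizes whitespace and
key order along the way:

SELECT '{"b": 1,  "a": 2}'::json;   -- {"b": 1,  "a": 2}
SELECT '{"b": 1,  "a": 2}'::jsonb;  -- {"a": 2, "b": 1}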

merlin




Re: [GENERAL] Multi-table join and roll-up aggregate data into nested JSON?

2015-03-30 Thread Deven Phillips
I'm using PostgreSQL 9.4.1 on Ubuntu 14.10.

The function does the following:

DROP FUNCTION get_vm_with_interfaces(vm_id BIGINT);

CREATE OR REPLACE FUNCTION get_virtual_interfaces(vm_id BIGINT) RETURNS
jsonb AS $$
DECLARE
res jsonb;
BEGIN
SELECT array_to_json(array_agg(row_to_json(i, true)), true)
FROM (
SELECT DISTINCT * FROM virtual_interfaces WHERE vmid=vm_id) i
INTO res;
RETURN res;
END;
$$ LANGUAGE PLPGSQL;

Thanks for any help!

Deven

On Mon, Mar 30, 2015 at 3:25 PM, Merlin Moncure  wrote:

> On Mon, Mar 30, 2015 at 6:27 AM, Deven Phillips
>  wrote:
> > I have a multi-table join which will return 1 row per "interface" and I
> > would like to aggregate the interfaces as a nested section on the
> resulting
> > JSON output. Could someone help me to figure out how to accomplish that?
> >
> > Example:
> >
> >> SELECT row_to_json(row) AS json
> >> FROM (
> >> SELECT
> >> c.snt_code AS "snt_code",
> >> vdc.id AS "vdc",
> >> vm.uuid AS "uuid",
> >> vm.name AS "name",
> >> vm.os AS "os",
> >> vm.service_type AS "service_type",
> >> vm.template_name AS "template_name",
> >> vi.virtual_machine_id AS "vmid",
> >> vi.mac_address AS "mac_address",
> >> vi."order" AS "sort",
> >> ip.address AS "ip_address",
> >> ip.is_reserved AS "is_reserved",
> >> ip.is_gateway AS "is_gateway",
> >> vlan.vlan_id AS "vlan",
> >> false AS "is_backend"
> >> FROM customer c
> >> LEFT JOIN virtualdatacenter vdc ON c.id=vdc.customer_id
> >> LEFT JOIN virtualmachine vm ON vm.virtual_data_center_id=vdc.id
> >> LEFT JOIN virtualinterface vi ON vm.id=vi.virtual_machine_id
> >> INNER JOIN ipaddress ip ON vi.id=ip.virtual_interface_id
> >> INNER JOIN virtuallan vlan ON ip.network_id=vlan.id
> >> WHERE c.snt_code='abcd' AND vdc.id=111 AND
> >> vm.uuid='422a141f-5e46-b0f2-53b8-e31070c883ed'
> >> ) row
> >
> >
> > The output is 2 rows of JSON data, but I would like to roll up those 2
> rows
> > so that the 2 "virtualinterfaces" are in a nested JSON field called
> > "interfaces"... The only way I have found to accomplish this so far is to
> > use a function to grab the joined interface data like:
> >
> > SELECT row_to_json(row) AS json
> > FROM (
> > SELECT
> > c.snt_code AS "snt_code",
> > vdc.id AS "vdc",
> > vm.uuid AS "uuid",
> > vm.name AS "name",
> > vm.os AS "os",
> > vm.service_type AS "service_type",
> > vm.template_name AS "template_name",
> > (get_vm_with_interfaces(vm.id)) as interfaces
> > FROM liquorstore_customer c
> > LEFT JOIN liquorstore_virtualdatacenter vdc ON c.id=vdc.customer_id
> > LEFT JOIN liquorstore_virtualmachine vm ON
> > vm.virtual_data_center_id=vdc.id
> > WHERE c.snt_code='abcd' AND vdc.id=111 AND
> > vm.uuid='422a141f-5e46-b0f2-53b8-e31070c883ed'
> > ) row
> >
> >
> > Is there a way to do this in a single join?
>
> sure! what is "get_vm_with_interfaces" doing?
>
> also, postgres version is extremely relevant here. It's possible to do
> it in 9.2+, but the solution in 9.4 is very different due to the
> presence of json_build_object() and json_build_array().
>
> At a very high level, you can aggregate arbitrary records into arrays
> and those arrays will automatically be converted into json arrays by
> row_to_json.   The three basic mechanisms of making arrays are
> array_agg(), array(), and array[] -- array[] however is pretty much
> only useful when dealing with a fixed set of values.
>
> For example, here is a query that makes an internal nested array:
>
> select row_to_json(q)
> from
> (
>   select v % 3 as k, array_agg(v)
>   from (select generate_series(1,10) v) q group by 1
> ) q;
>
>
> merlin
>


Re: [GENERAL] Multi-table join and roll-up aggregate data into nested JSON?

2015-03-30 Thread Deven Phillips
Sorry, typo!!

CREATE OR REPLACE FUNCTION get_vm_with_interfaces(vm_id BIGINT) RETURNS
jsonb AS $$
DECLARE
res jsonb;
BEGIN
SELECT array_to_json(array_agg(row_to_json(i, true)), true)
FROM (
SELECT DISTINCT * FROM virtual_interfaces WHERE vmid=vm_id) i
INTO res;
RETURN res;
END;
$$ LANGUAGE PLPGSQL;

On Mon, Mar 30, 2015 at 5:22 PM, Deven Phillips 
wrote:

> I'm using PostgreSQL 9.4.1 on Ubuntu 14.10.
>
> The function does the following:
>
> DROP FUNCTION get_vm_with_interfaces(vm_id BIGINT);
>
> CREATE OR REPLACE FUNCTION get_virtual_interfaces(vm_id BIGINT) RETURNS
> jsonb AS $$
> DECLARE
> res jsonb;
> BEGIN
> SELECT array_to_json(array_agg(row_to_json(i, true)), true)
> FROM (
> SELECT DISTINCT * FROM virtual_interfaces WHERE vmid=vm_id) i
> INTO res;
> RETURN res;
> END;
> $$ LANGUAGE PLPGSQL;
>
> Thanks for any help!
>
> Deven
>
> On Mon, Mar 30, 2015 at 3:25 PM, Merlin Moncure 
> wrote:
>
>> On Mon, Mar 30, 2015 at 6:27 AM, Deven Phillips
>>  wrote:
>> > I have a multi-table join which will return 1 row per "interface" and I
>> > would like to aggregate the interfaces as a nested section on the
>> resulting
>> > JSON output. Could someone help me to figure out how to accomplish that?
>> >
>> > Example:
>> >
>> >> SELECT row_to_json(row) AS json
>> >> FROM (
>> >> SELECT
>> >> c.snt_code AS "snt_code",
>> >> vdc.id AS "vdc",
>> >> vm.uuid AS "uuid",
>> >> vm.name AS "name",
>> >> vm.os AS "os",
>> >> vm.service_type AS "service_type",
>> >> vm.template_name AS "template_name",
>> >> vi.virtual_machine_id AS "vmid",
>> >> vi.mac_address AS "mac_address",
>> >> vi."order" AS "sort",
>> >> ip.address AS "ip_address",
>> >> ip.is_reserved AS "is_reserved",
>> >> ip.is_gateway AS "is_gateway",
>> >> vlan.vlan_id AS "vlan",
>> >> false AS "is_backend"
>> >> FROM customer c
>> >> LEFT JOIN virtualdatacenter vdc ON c.id=vdc.customer_id
>> >> LEFT JOIN virtualmachine vm ON vm.virtual_data_center_id=vdc.id
>> >> LEFT JOIN virtualinterface vi ON vm.id=vi.virtual_machine_id
>> >> INNER JOIN ipaddress ip ON vi.id=ip.virtual_interface_id
>> >> INNER JOIN virtuallan vlan ON ip.network_id=vlan.id
>> >> WHERE c.snt_code='abcd' AND vdc.id=111 AND
>> >> vm.uuid='422a141f-5e46-b0f2-53b8-e31070c883ed'
>> >> ) row
>> >
>> >
>> > The output is 2 rows of JSON data, but I would like to roll up those 2
>> rows
>> > so that the 2 "virtualinterfaces" are in a nested JSON field called
>> > "interfaces"... The only way I have found to accomplish this so far is
>> to
>> > use a function to grab the joined interface data like:
>> >
>> > SELECT row_to_json(row) AS json
>> > FROM (
>> > SELECT
>> > c.snt_code AS "snt_code",
>> > vdc.id AS "vdc",
>> > vm.uuid AS "uuid",
>> > vm.name AS "name",
>> > vm.os AS "os",
>> > vm.service_type AS "service_type",
>> > vm.template_name AS "template_name",
>> > (get_vm_with_interfaces(vm.id)) as interfaces
>> > FROM liquorstore_customer c
>> > LEFT JOIN liquorstore_virtualdatacenter vdc ON c.id=vdc.customer_id
>> > LEFT JOIN liquorstore_virtualmachine vm ON
>> > vm.virtual_data_center_id=vdc.id
>> > WHERE c.snt_code='abcd' AND vdc.id=111 AND
>> > vm.uuid='422a141f-5e46-b0f2-53b8-e31070c883ed'
>> > ) row
>> >
>> >
>> > Is there a way to do this in a single join?
>>
>> sure! what is "get_vm_with_interfaces" doing?
>>
>> also, postgres version is extremely relevant here. It's possible to do
>> it in 9.2+, but the solution in 9.4 is very different due to the
>> presence of json_build_object() and json_build_array().
>>
>> At a very high level, you can aggregate arbitrary records into arrays
>> and those arrays will automatically be converted into json arrays by
>> row_to_json.   The three basic mechanisms of making arrays are
>> array_agg(), array(), and array[] -- array[] however is pretty much
>> only useful when dealing with a fixed set of values.
>>
>> For example, here is a query that makes an internal nested array:
>>
>> select row_to_json(q)
>> from
>> (
>>   select v % 3 as k, array_agg(v)
>>   from (select generate_series(1,10) v) q group by 1
>> ) q;
>>
>>
>> merlin
>>
>
>


Re: [GENERAL] Slow trigger on insert: alternative solutions?

2015-03-30 Thread Adrian Klaver

On 03/30/2015 01:43 PM, Guillaume Drolet wrote:

Thanks Adrian,

In option 2, when you say "this is automated in an external Python
script", do you mean that you use something like psycopg2 to perform the
queries the database (e.g. for comparing data in the holding table with
the older table)?


Yes. Basically I use the dbf module I mentioned previously to read the 
DBF files and output the data I need, writing it into an in-memory CSV 
file which I then dump into the Postgres database using the psycopg2 
COPY functions. The files I am working with are not as large as the ones 
you mention, so doing all this in memory is workable. The script then 
calls a series of user functions in Postgres to do the comparing and 
manipulating.  At the time I did this plpythonu was less featured than 
it is now, so doing what I wanted made more sense in an external script. 
Also the script pulls the DBF files from elsewhere, and I felt more 
comfortable doing that outside the database than in.


Though more and more I seem to be using pandas (pandas.pydata.org) to do 
data conversions; it saves a lot of the steps above. In this case you 
would still need to get the data out of the DBF files.




Thanks.

2015-03-30 9:53 GMT-04:00 Adrian Klaver <adrian.kla...@aklaver.com>:

On 03/30/2015 06:04 AM, Guillaume Drolet wrote:

Hello,

I need your help speeding up the procedure I will explain below. I am
looking for improvements to my method or different approaches/ideas that
would help in this matter.

I have a set of DBF files that I load into my database using a plpython
function and a call to ogr2ogr (http://www.gdal.org/drv_pg.html). Once in
a while, I'll have to load updated versions of these tables to get the
latest additions and possible corrections to older versions.

In my plpython script, if a table is loaded for the first time, I first
load it empty, then I create a trigger function on insert (execute on
row) that will check for duplicates on each insert. Depending on the
type of data I load, my trigger first checks for equality in a subset of
columns (between 1 and 3 columns that would be like my primary key(s))
and if true, I check if all columns are equal between NEW and the
matching row from my table. When this condition is true, I return null,
else I store rows (i.e. NEW.* and matching row(s)) in a new table called
"duplicate" for further manual investigation. Here's an example for one
table:

CREATE OR REPLACE FUNCTION check_naipf_insert()
  RETURNS trigger AS
' BEGIN
    IF EXISTS (SELECT 1
               FROM public.naipf
               WHERE id_pet_mes IS NOT DISTINCT FROM NEW.id_pet_mes
               AND etage IS NOT DISTINCT FROM NEW.etage) THEN
      IF EXISTS (SELECT 1
                 FROM public.naipf
                 WHERE id_pet_mes IS NOT DISTINCT FROM NEW.id_pet_mes
                 AND etage IS NOT DISTINCT FROM NEW.etage
                 AND type_couv IS NOT DISTINCT FROM NEW.type_couv
                 AND densite IS NOT DISTINCT FROM NEW.densite
                 AND hauteur IS NOT DISTINCT FROM NEW.hauteur
                 AND cl_age IS NOT DISTINCT FROM NEW.cl_age) THEN
        RETURN NULL;
      ELSE
        INSERT INTO public.duplic_naipf SELECT NEW.*;
        INSERT INTO public.duplic_naipf
          (SELECT *
           FROM public.naipf
           WHERE id_pet_mes IS NOT DISTINCT FROM NEW.id_pet_mes
           AND etage IS NOT DISTINCT FROM NEW.etage);
        RETURN NULL;
      END IF;
    END IF;
    RETURN NEW;
  END; '
  LANGUAGE plpgsql VOLATILE COST 100;

CREATE TRIGGER check_insert_naipf
  BEFORE INSERT
  ON public.pet4_naipf
  FOR EACH ROW
  EXECUTE PROCEDURE check_naipf_insert();

(in this case, duplicate rows that need investigation are rows that may
have changed relative to an older version of the DBF file, but that have
no change in what I call their primary keys, although they are not really
PKs since I don't want to raise errors at loading)

Once this is done, ogr2ogr is called a second time to load the data. It
is quite fast for small tables (tens of thousands of rows, tens of
columns) but for large tables it takes forever.

Re: [SQL] Re: [GENERAL] Link Office Word form document with data from PostgreSQL

2015-03-30 Thread avpro avpro
thank you all for your responses.

I think the easiest option for me would be to use the ODBC connection. Now
my problem is where to start with the VBA code; but that's for another list.
cheers

On 30 March 2015 at 19:51, Vincent Veyron  wrote:

> On Mon, 30 Mar 2015 17:37:28 +0200
> Vincenzo Campanella  wrote:
> Stirling Newberry  wrote:
>
> >post the code
>
> Here you go :
>
> http://pastebin.com/eVXZiFws
>
>
> This is a mod_perl module used in the HeaderParser phase of the Apache
> request cycle; it first grabs the sql associated with the model, and passes
> parameters to it; this sql is used to retrieve the required values for the
> model, and the placeholders in the document are replaced with those values.
> The user is then redirected to the newly created document
>
> You can see it in action in the demo account of the site referenced in my
> signature : open any file (those are legal files/insurance claims), click
> on the 'Contributors' tab (third tab from left; add a contributor if none
> is present), and click on 'Templates'. Click on the template to generate a
> document
>
>
> --
> Salutations, Vincent Veyron
>
> https://legalcase.libremen.com/
> Legal case, contract and insurance claim management software
>


Re: [GENERAL] Slow trigger on insert: alternative solutions?

2015-03-30 Thread Guillaume Drolet
Thanks Adrian,

In option 2, when you say "this is automated in an external Python script",
do you mean that you use something like psycopg2 to run the queries against
the database (e.g. for comparing data in the holding table with the older
table)?

Thanks.

2015-03-30 9:53 GMT-04:00 Adrian Klaver :

> On 03/30/2015 06:04 AM, Guillaume Drolet wrote:
>
>> Hello,
>>
>> I need your help speeding up the procedure I will explain below. I am
>> looking for improvements to my method or different approaches/ideas to
>> would help in this matter.
>>
>> I have a set of DBF files that I load into my database using a plpython
>> function and a call to ogr2ogr (http://www.gdal.org/drv_pg.html). Once
>> in a while, I'll have to load updated versions of these tables to get
>> the latest additions and possible corrections to older versions.
>>
>> In my plpython script, if a table is loaded for the first time, I first
>> load it empty, then I create a trigger function on insert (execute on
>> row) that will check for duplicates on each insert. Depending on the
>> type of data I load, my trigger first checks for equality in a subset of
>> columns (between 1 and 3 columns that would be like my primary key(s))
>> and if true, I check if all columns are equal between NEW and the
>> matching row from my table. When this condition is true, I return null,
>> else I store rows (i.e. NEW.* and matching row(s)) in a new table called
>> "duplicate" for further manual investigation. Here's an example for one
>> table:
>>
>> CREATE OR REPLACE FUNCTION check_naipf_insert()
>>   RETURNS trigger AS
>> ' BEGIN
>>     IF EXISTS (SELECT 1
>>                FROM public.naipf
>>                WHERE id_pet_mes IS NOT DISTINCT FROM NEW.id_pet_mes
>>                AND etage IS NOT DISTINCT FROM NEW.etage) THEN
>>       IF EXISTS (SELECT 1
>>                  FROM public.naipf
>>                  WHERE id_pet_mes IS NOT DISTINCT FROM NEW.id_pet_mes
>>                  AND etage IS NOT DISTINCT FROM NEW.etage
>>                  AND type_couv IS NOT DISTINCT FROM NEW.type_couv
>>                  AND densite IS NOT DISTINCT FROM NEW.densite
>>                  AND hauteur IS NOT DISTINCT FROM NEW.hauteur
>>                  AND cl_age IS NOT DISTINCT FROM NEW.cl_age) THEN
>>         RETURN NULL;
>>       ELSE
>>         INSERT INTO public.duplic_naipf SELECT NEW.*;
>>         INSERT INTO public.duplic_naipf
>>           (SELECT *
>>            FROM public.naipf
>>            WHERE id_pet_mes IS NOT DISTINCT FROM NEW.id_pet_mes
>>            AND etage IS NOT DISTINCT FROM NEW.etage);
>>         RETURN NULL;
>>       END IF;
>>     END IF;
>>     RETURN NEW;
>>   END; '
>>   LANGUAGE plpgsql VOLATILE COST 100;
>>
>> CREATE TRIGGER check_insert_naipf
>>   BEFORE INSERT
>>   ON public.pet4_naipf
>>   FOR EACH ROW
>>   EXECUTE PROCEDURE check_naipf_insert();
>>
>> (in this case, duplicate rows that need investigation are rows that may
>> have changed relative to an older version of the DBF file, but that have no
>> change in what I call their primary keys, although they are not really
>> PKs since I don't want to raise errors at loading)
>>
>> Once this is done, ogr2ogr is called a second time to load the data. It
>> is quite fast for small tables (tens of thousands of rows, tens of
>> columns) but for large tables it takes forever. For example, I started
>> loading a table with 3.5 million rows/33 columns last Friday at 3PM and
>> now, Monday morning at 9 AM, some 3 million rows have been loaded.
>>
>> My question is: what are the other approaches that would make this
>> procedure faster? How is this kind of task usually implemented in
>> postgresql? Would it be better to load everything with no check and then
>> apply some functions to find duplicate rows (although this would involve
>> more manual work)?
>>
>
> I guess it depends on what the end purpose of the above is? If you are just
> trying to keep relatively up-to-date information from the DBF sources,
> would it not be easier just to load them into a new table?
>
> So, where existing table is some_dbf_data:
>
> 1) CREATE TABLE new_some_dbf_data(...)
> 2) Dump DBF file into new_some_dbf_data
> 3)In transaction rename/drop some_dbf_data, rename new_some_dbf_data to
> some_dbf_data
>
>
> Option 2 is what I do for a similar procedure:
>
> 1) Dump DBF data into holding table.
> 2) Use SQL in function(s) to compare old/new table and make appropriate
> adjustments. Doing SQL in bulk is a lot faster than checking each row, or at
> least that is what I found. In any case the way you are doing it looks to
> involve 3.5 million inserts with a trigger action on each, that is bound to
> be slow:)
> 3) This is automated in an external Python script.
>
> Option 3: Use dbf (https://pypi.python.org/pypi/dbf/0.88.16) and do the
> comparisons in the DBF files outside Postgres and only import what has
> changed.

Re: [GENERAL] JSON "pretty" and selecting nested JSON fields

2015-03-30 Thread Adrian Klaver

On 03/30/2015 10:54 AM, Deven Phillips wrote:

Hi all,

 I have a query which selects several rows of data, and contained in
one of those rows is some aggregated JSON data. I am using row_to_json()
to make the whole output JSON and I am providing "true" for pretty
formatting of the JSON. The problem that I am seeing is that the nested
JSON block is not being prettified along with the outer JSON.

Example:

I have a function which takes a single key param and returns a JSON array:

CREATE OR REPLACE FUNCTION get_virtual_interfaces(vm_id BIGINT) RETURNS
jsonb AS $$
DECLARE
 res jsonb;
BEGIN
SELECT array_to_json(array_agg(row_to_json(i, true)), true)
 FROM (
 SELECT DISTINCT * FROM virtual_interfaces WHERE vmid=vm_id)
i INTO res;
 RETURN res;
END;
$$ LANGUAGE PLPGSQL;

That function is then used in another query to provide a nested JSON
containing the array:

SELECT
 row.snt_code AS "snt_code",
 row.vdc AS "vdc",
 row.uuid AS "uuid",
 row_to_json(row, true) AS "json"
FROM (
 SELECT
 vm.*,
 CONCAT('https://int.cloudcontrol.sgns.net/v3/customer/',
vm.snt_code, '/vdc/', vm.vdc, '/vm/', vm.uuid) AS "self",
 'cc.v3.sungardas.vm' AS "type",
 (get_virtual_interfaces(vm.vmid)) as interfaces
 FROM virtual_machines vm
) row;

The outer level of JSON is "pretty printed", but the content of the
array from the function is NOT, even though I have specified that it
should be. Any suggestions of how to address this?


Well it is documented:

http://www.postgresql.org/docs/9.4/interactive/functions-json.html

row_to_json(record [, pretty_bool])  Returns the row as a JSON object.
Line feeds will be added between *level-1* elements if pretty_bool is true.

I would say post a feature request on --hackers, or ask if work is
being done on this.




Thanks in advance!

Deven



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] store and retrieve html in postgres from rails

2015-03-30 Thread David G. Johnston
On Mon, Mar 30, 2015 at 1:06 PM, john.tiger 
wrote:

> want to include html text from newsletters into postgres - then retrieve
>
> have seen posts saying this is really difficult (?) and others just
> suggesting pg_escape_string and pg entities -
> tried to put into text field but it seems stripped out
> does anyone have a good suggestion


​Forget structural format.  What you are doing is storing and retrieving
text.  Use the "text" data type.  If you are having issues, supply some
code/examples so others can point out what you are doing wrong.  Saying "it
seems stripped out" doesn't communicate anything substantially useful.

You can encode and decode at the application layer.

David J.
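
As an illustration of the "text" advice above: a plain text column stores
markup byte-for-byte, and dollar-quoting (or, better, a parameterized query
from the application) avoids any hand-escaping. A small sketch with an
illustrative table name:

CREATE TABLE newsletter (id serial PRIMARY KEY, body text);
INSERT INTO newsletter (body)
VALUES ($html$<p>Hello &amp; welcome!</p>$html$);  -- nothing to escape
SELECT body FROM newsletter;  -- comes back unchanged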
​


[GENERAL] store and retrieve html in postgres from rails

2015-03-30 Thread john.tiger

want to include html text from newsletters into postgres - then retrieve

have seen posts saying this is really difficult (?) and others just 
suggesting pg_escape_string and pg entities -

tried to put into text field but it seems stripped out
does anyone have a good suggestion




[GENERAL] unrecognized configuration parameter "bdr.connections"

2015-03-30 Thread negrus
Hi all, I did the installation of BDR according to the documentation:

https://wiki.postgresql.org/wiki/BDR_Packages

I have set the following parameters:

  max_replication_slots = 3
  max_wal_senders = 4
  wal_level = 'logical'
  track_commit_timestamp = on
  shared_preload_libraries = 'bdr'
  max_worker_processes = 10
  bdr.connections = 'master2'
  bdr.master2_dsn = 'dbname = master user = postgres port = 5432'

with this package :

postgresql-bdr94-server-9.4.1_bdr2-1_2ndQuadrant.el7.centos.x86_64
postgresql-bdr94-2ndquadrant-redhat-1.0-2.noarch
postgresql-bdr94-9.4.1_bdr2-1_2ndQuadrant.el7.centos.x86_64
postgresql-bdr94-contrib-9.4.1_bdr2-1_2ndQuadrant.el7.centos.x86_64
postgresql-bdr94-libs-9.4.1_bdr2-1_2ndQuadrant.el7.centos.x86_64
postgresql-bdr94-bdr-0.9.0-1_2ndQuadrant.el7.centos.x86_64



But when I try to start it up, BDR is not working:

< 2015-03-30 15:57:53.957 PYT >LOG:  registering background worker "bdr
supervisor"
< 2015-03-30 15:57:53.957 PYT >WARNING:  unrecognized configuration
parameter "bdr.connections"
< 2015-03-30 15:57:53.957 PYT >WARNING:  unrecognized configuration
parameter "bdr.master2_dsn"
< 2015-03-30 15:57:53.978 PYT >LOG:  redirecting log output to logging
collector process


What is the problem?

Regards !

Christian Vazquez SR DBA












Re: [GENERAL] JSON "pretty" and selecting nested JSON fields

2015-03-30 Thread Merlin Moncure
On Mon, Mar 30, 2015 at 12:54 PM, Deven Phillips
 wrote:
> Hi all,
>
> I have a query which selects several rows of data, and contained in one
> of those rows is some aggregated JSON data. I am using row_to_json() to make
> the whole output JSON and I am providing "true" for pretty formatting of the
> JSON. The problem that I am seeing is that the nested JSON block is not
> being prettified along with the outer JSON.

It looks like a bug. The 'outer' to_json's pretty print feature should
control the whole structure IMO.  Personally, I think you'll have
better luck rigging another function to do whitespace insertion
formatting.
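
One crude sketch of such a helper for level-1 object members only (the name
json_pretty_ish is made up for illustration; a robust version would have to
recurse into nested values):

CREATE OR REPLACE FUNCTION json_pretty_ish(j json) RETURNS text AS $$
    -- objects only; arrays and scalars would need their own branches
    SELECT '{' || E'\n  ' ||
           string_agg(to_json(key)::text || ': ' || value::text, E',\n  ') ||
           E'\n}'
    FROM json_each(j);
$$ LANGUAGE sql;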

merlin




Re: [GENERAL] Multi-table join and roll-up aggregate data into nested JSON?

2015-03-30 Thread Merlin Moncure
On Mon, Mar 30, 2015 at 6:27 AM, Deven Phillips
 wrote:
> I have a multi-table join which will return 1 row per "interface" and I
> would like to aggregate the interfaces as a nested section on the resulting
> JSON output. Could someone help me to figure out how to accomplish that?
>
> Example:
>
>> SELECT row_to_json(row) AS json
>> FROM (
>> SELECT
>> c.snt_code AS "snt_code",
>> vdc.id AS "vdc",
>> vm.uuid AS "uuid",
>> vm.name AS "name",
>> vm.os AS "os",
>> vm.service_type AS "service_type",
>> vm.template_name AS "template_name",
>> vi.virtual_machine_id AS "vmid",
>> vi.mac_address AS "mac_address",
>> vi."order" AS "sort",
>> ip.address AS "ip_address",
>> ip.is_reserved AS "is_reserved",
>> ip.is_gateway AS "is_gateway",
>> vlan.vlan_id AS "vlan",
>> false AS "is_backend"
>> FROM customer c
>> LEFT JOIN virtualdatacenter vdc ON c.id=vdc.customer_id
>> LEFT JOIN virtualmachine vm ON vm.virtual_data_center_id=vdc.id
>> LEFT JOIN virtualinterface vi ON vm.id=vi.virtual_machine_id
>> INNER JOIN ipaddress ip ON vi.id=ip.virtual_interface_id
>> INNER JOIN virtuallan vlan ON ip.network_id=vlan.id
>> WHERE c.snt_code='abcd' AND vdc.id=111 AND
>> vm.uuid='422a141f-5e46-b0f2-53b8-e31070c883ed'
>> ) row
>
>
> The output is 2 rows of JSON data, but I would like to roll up those 2 rows
> so that the 2 "virtualinterfaces" are in a nested JSON field called
> "interfaces"... The only way I have found to accomplish this so far is to
> use a function to grab the joined interface data like:
>
> SELECT row_to_json(row) AS json
> FROM (
> SELECT
> c.snt_code AS "snt_code",
> vdc.id AS "vdc",
> vm.uuid AS "uuid",
> vm.name AS "name",
> vm.os AS "os",
> vm.service_type AS "service_type",
> vm.template_name AS "template_name",
> (get_vm_with_interfaces(vm.id)) as interfaces
> FROM liquorstore_customer c
> LEFT JOIN liquorstore_virtualdatacenter vdc ON c.id=vdc.customer_id
> LEFT JOIN liquorstore_virtualmachine vm ON
> vm.virtual_data_center_id=vdc.id
> WHERE c.snt_code='abcd' AND vdc.id=111 AND
> vm.uuid='422a141f-5e46-b0f2-53b8-e31070c883ed'
> ) row
>
>
> Is there a way to do this in a single join?

sure! what is "get_vm_with_interfaces" doing?

also, postgres version is extremely relevant here. It's possible to do
it in 9.2+, but the solution in 9.4 is very different due to the
presence of json_build_object() and json_build_array().

At a very high level, you can aggregate arbitrary records into arrays
and those arrays will automatically be converted into json arrays by
row_to_json.   The three basic mechanisms of making arrays are
array_agg(), array(), and array[] -- array[] however is pretty much
only useful when dealing with a fixed set of values.

For example, here is a query that makes an internal nested array:

select row_to_json(q)
from
(
  select v % 3 as k, array_agg(v)
  from (select generate_series(1,10) v) q group by 1
) q;
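
which renders each group's array as a JSON array, one row per group (group
order may vary):

 {"k":1,"array_agg":[1,4,7,10]}
 {"k":2,"array_agg":[2,5,8]}
 {"k":0,"array_agg":[3,6,9]}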


merlin




Re: [GENERAL] Index corruption

2015-03-30 Thread Scott Marlowe
On Tue, Mar 24, 2015 at 12:49 PM, Bankim Bhavsar
 wrote:
> Hello postgres experts,
>
> We are running a test that periodically abruptly kills postgres
> process(equivalent to kill -9) and restarts it.
> After running this test for 24 hrs or so, we see duplicate primary key
> entries in postgres table.
>

Are you killing the POSTMASTER process, deleting the pid file, and
then restarting it without killing its children? That's a recipe for
corruption.




Re: [GENERAL] JSON "pretty" and selecting nested JSON fields

2015-03-30 Thread Deven Phillips
Here's an example of the JSON output I am getting:

{"customer_code":"abcd",
 "vdc":1241,
 "vmid":114778,
 "uuid":"421ea391-b292-ca2e-9a3a-6da3037748c8",
 "name":"vmname",
 "os":"Red Hat Enterprise Linux 6 (64-bit)",
 "service_type":"CU",
 "template_name":"",
 "self":"
https://mysite.mydomain.tld/v3/customer/3mcc/vdc/1241/vm/421ea391-b292-ca2e-9a3a-6da3037748c8
",
 "type":"cc.v3.sungardas.vm",
 "interfaces":[{"vlan": null, "vmid": 114778, "order": 1, "ip_address":
"10.129.114.45", "is_backend": true, "is_gateway": false, "is_reserved":
false, "mac_address": "00:50:56:9e:25:40"}, {"vlan": null, "vmid": 114778,
"order": 0, "ip_address": "10.137.154.212", "is_backend": true,
"is_gateway": false, "is_reserved": false, "mac_address":
"00:50:56:9e:25:3d"}]}

I would expect it to be:

{"customer_code":"abcd",
 "vdc":1241,
 "vmid":114778,
 "uuid":"421ea391-b292-ca2e-9a3a-6da3037748c8",
 "name":"vmname",
 "os":"Red Hat Enterprise Linux 6 (64-bit)",
 "service_type":"CU",
 "template_name":"",
 "self":"https://mysite.mydomain.tld
/v3/customer/3mcc/vdc/1241/vm/421ea391-b292-ca2e-9a3a-6da3037748c8",
 "type":"cc.v3.vm",
 "interfaces":[
  {"vlan": null,
   "vmid": 114778,
   "order": 1,
   "ip_address": "10.129.114.45",
   "is_backend": true,
   "is_gateway": false,
   "is_reserved": false,
   "mac_address": "00:50:56:9e:25:40"
  }, {"vlan": null,
   "vmid": 114778,
   "order": 0,
   "ip_address": "10.137.154.212",
   "is_backend": true,
   "is_gateway": false,
   "is_reserved": false,
   "mac_address": "00:50:56:9e:25:3d"}]}

On Mon, Mar 30, 2015 at 1:54 PM, Deven Phillips 
wrote:

> Hi all,
>
> I have a query which selects several rows of data, and contained in
> one of those rows is some aggregated JSON data. I am using row_to_json() to
> make the whole output JSON and I am providing "true" for pretty formatting
> of the JSON. The problem that I am seeing is that the nested JSON block is
> not being prettified along with the outer JSON.
>
> Example:
>
> I have a function which takes a single key param and returns a JSON array:
>
> CREATE OR REPLACE FUNCTION get_virtual_interfaces(vm_id BIGINT) RETURNS
> jsonb AS $$
> DECLARE
> res jsonb;
> BEGIN
> SELECT array_to_json(array_agg(row_to_json(i, true)), true)
> FROM (
> SELECT DISTINCT * FROM virtual_interfaces WHERE vmid=vm_id) i
> INTO res;
> RETURN res;
> END;
> $$ LANGUAGE PLPGSQL;
>
> That function is then used in another query to provide a nested JSON
> containing the array:
>
> SELECT
> row.snt_code AS "snt_code",
> row.vdc AS "vdc",
> row.uuid AS "uuid",
> row_to_json(row, true) AS "json"
> FROM (
> SELECT
> vm.*,
> CONCAT('https://int.cloudcontrol.sgns.net/v3/customer/',
> vm.snt_code, '/vdc/', vm.vdc, '/vm/', vm.uuid) AS "self",
> 'cc.v3.sungardas.vm' AS "type",
> (get_virtual_interfaces(vm.vmid)) as interfaces
> FROM virtual_machines vm
> ) row;
>
> The outer level of JSON is "pretty printed", but the content of the array
> from the function is NOT, even though I have specified that it should be.
> Any suggestions of how to address this?
>
> Thanks in advance!
>
> Deven
>


[GENERAL] JSON "pretty" and selecting nested JSON fields

2015-03-30 Thread Deven Phillips
Hi all,

I have a query which selects several rows of data, and contained in one
of those rows is some aggregated JSON data. I am using row_to_json() to
make the whole output JSON and I am providing "true" for pretty formatting
of the JSON. The problem that I am seeing is that the nested JSON block is
not being prettified along with the outer JSON.

Example:

I have a function which takes a single key param and returns a JSON array:

CREATE OR REPLACE FUNCTION get_virtual_interfaces(vm_id BIGINT) RETURNS
jsonb AS $$
DECLARE
res jsonb;
BEGIN
SELECT array_to_json(array_agg(row_to_json(i, true)), true)
FROM (
SELECT DISTINCT * FROM virtual_interfaces WHERE vmid=vm_id) i
INTO res;
RETURN res;
END;
$$ LANGUAGE PLPGSQL;

That function is then used in another query to provide a nested JSON
containing the array:

SELECT
row.snt_code AS "snt_code",
row.vdc AS "vdc",
row.uuid AS "uuid",
row_to_json(row, true) AS "json"
FROM (
SELECT
vm.*,
CONCAT('https://int.cloudcontrol.sgns.net/v3/customer/',
vm.snt_code, '/vdc/', vm.vdc, '/vm/', vm.uuid) AS "self",
'cc.v3.sungardas.vm' AS "type",
(get_virtual_interfaces(vm.vmid)) as interfaces
FROM virtual_machines vm
) row;

The outer level of JSON is "pretty printed", but the content of the array
from the function is NOT, even though I have specified that it should be.
Any suggestions of how to address this?

Thanks in advance!

Deven


Re: [SQL] Re: [GENERAL] Link Office Word form document with data from PostgreSQL

2015-03-30 Thread Vincent Veyron
On Mon, 30 Mar 2015 17:37:28 +0200
Vincenzo Campanella  wrote:
Stirling Newberry  wrote:

>post the code

Here you go :

http://pastebin.com/eVXZiFws


This is a mod_perl module used in the HeaderParser phase of the Apache request 
cycle; it first grabs the sql associated with the model, and passes parameters 
to it; this sql is used to retrieve the required values for the model, and the 
placeholders in the document are replaced with those values. The user is then 
redirected to the newly created document

You can see it in action in the demo account of the site referenced in my 
signature : open any file (those are legal files/insurance claims), click on 
the 'Contributors' tab (third tab from left; add a contributor if none is 
present), and click on 'Templates'. Click on the template to generate a document


-- 
Salutations, Vincent Veyron 

https://legalcase.libremen.com/ 
Legal case, contract and insurance claim management software




Re: [GENERAL] Link Office Word form document with data from PostgreSQL

2015-03-30 Thread Andy Colson

On 3/30/2015 10:18 AM, John R Pierce wrote:

On 3/30/2015 1:20 AM, avpro avpro wrote:


Have you ever experienced how to populate fields from a MS Office Word
document with a PostgreSQL data available either in a table or view?

I haven’t seen anything on the web; only possible with MS products, VS
or Access.

Thank you for your input



Microsoft Office tools like Word should be able to directly access
Postgres databases via ODBC or OLEDB with the suitable Postgres driver
(odbc or oledb) installed on the Windows system, and configured to
access the Postgres database.   I've done this in the past with Excel,
no reason it wouldn't work in Word.





Agreed.  I too have used PG ODBC to link with Excel.  If it works in 
Excel it should work in Word.


-Andy




Re: [SQL] Re: [GENERAL] Link Office Word form document with data from PostgreSQL

2015-03-30 Thread Vincenzo Campanella

On 30.03.2015 15:01, Vincent Veyron wrote:

Hi,

I use Perl for this. The process is :

-unzip the .docx document, and look for /word/document.xml in the resulting 
directory
-substitute field names with their values in this file (I don't use MS Word 
'fields', though, just placeholders that are unlikely to be found in documents, 
such as client_name, client_address, facturation_date, etc)
-zip the archive anew with a .docx extension

Voilà.

The code is about 40 lines long, I can post it if you care; the same process 
also works with Libre/OpenOffice documents


Hi Vincent, could you please post the code? That would be very nice.

Thank you
vince




[GENERAL] Re: [postgis-users] Memory management in postgres (with liblwgeom functions in particular)

2015-03-30 Thread Paul Ramsey
Igor,
Your supposition is all correct (you might want to ask your
questions on postgis-devel, where the developer density is higher).
lwalloc by default is just a facade over malloc, for standalone use.
But, when used in PostGIS, it is backed by palloc, which is in turn
the memory manager provided by PgSQL. The PgSQL memory manager is a
hierarchical memory manager that provides independent blocks of memory
for different levels of operation. So a SQL query will get a block,
and every palloc in that context will go into that block. Then when
the query is done, the whole block gets thrown away. The upshot is
that code that uses palloc in relatively short-lived contexts (like
queries) can be pretty lazy about memory management, since the whole
context is getting tossed at the end anyway.
P.

On Mon, Mar 30, 2015 at 8:11 AM, Igor Stassiy  wrote:
> Hello,
>
> I am developing a C++ extension (most of the code is C++) for postgres that
> links dynamically with liblwgeom, without linking to postgis. I call
> liblwgeom functions that serialize/deserialize LWGEOM* (and similar
> structures) that don't need a backend like GEOS.
>
> I wonder how the memory is freed when we call lwerror, as the transaction
> will be terminated (elog/ereport performs a long jump), so if I call
> liblwgeom's functions from within C++, the stack will not be unwound and even
> if I use smart pointers it wouldn't make a difference (right?).
>
> On the contrary, when Postgis module loads itself, in _PG_init it overrides
> memory allocation functions of liblwgeom with pg_alloc and pg_free. Which in
> turn call palloc and pfree. And in this case when we call lwerror, the
> memory that we allocated is freed automatically (right?).
>
> I guess (just a guess) it has something to do with the memory context and
> when a memory context is "closed" the entire memory allocated within would
> be freed. But lwalloc by default is malloc, so does Postgres do something
> extremely clever like overriding malloc with its palloc?
>
> Thank you,
> Igor
>
> ___
> postgis-users mailing list
> postgis-us...@lists.osgeo.org
> http://lists.osgeo.org/cgi-bin/mailman/listinfo/postgis-users




Re: [GENERAL] Link Office Word form document with data from PostgreSQL

2015-03-30 Thread John R Pierce

On 3/30/2015 1:20 AM, avpro avpro wrote:


Have you ever experienced how to populate fields from a MS Office Word 
document with a PostgreSQL data available either in a table or view?


I haven’t seen anything on the web; only possible with MS products, VS 
or Access.


Thank you for your input



Microsoft Office tools like Word should be able to directly access 
Postgres databases via ODBC or OLEDB with the suitable Postgres driver 
(odbc or oledb) installed on the Windows system, and configured to 
access the Postgres database.   I've done this in the past with Excel, 
no reason it wouldn't work in Word.




--
john r pierce, recycling bits in santa cruz





[GENERAL] Memory management in postgres (with liblwgeom functions in particular)

2015-03-30 Thread Igor Stassiy
Hello,

I am developing a C++ extension (most of the code is C++) for postgres that
links dynamically with liblwgeom, without linking to postgis. I call
liblwgeom functions that serialize/deserialize LWGEOM* (and similar
structures) that don't need a backend like GEOS.

I wonder how the memory is freed when we call lwerror, as the transaction
will be terminated (elog/ereport performs a long jump), so if I call
liblwgeom's functions from within C++, the stack will not be unwound and even
if I use smart pointers it wouldn't make a difference (right?).

On the contrary, when Postgis module loads itself, in _PG_init it overrides
memory allocation functions of liblwgeom with pg_alloc and pg_free. Which
in turn call palloc and pfree. And in this case when we call lwerror, the
memory that we allocated is freed automatically (right?).

I guess (just a guess) it has something to do with the memory context and
when a memory context is "closed" the entire memory allocated within would
be freed. But lwalloc by default is malloc, so does Postgres do something
extremely clever like overriding malloc with its palloc?

Thank you,
Igor


Re: [GENERAL] Foreign Data Wrapper for remote view?

2015-03-30 Thread Deven Phillips
Nevermind, it works exactly as I would have expected and it is the exact
speed improvement I needed!

Deven

On Mon, Mar 30, 2015 at 10:12 AM, Deven Phillips 
wrote:

> To prevent multiple foreign table scans during queries, I would like to be
> able to create an FDW against a VIEW in a foreign MySQL database. Is this
> possible?
>
> Thanks in advance!
>
> Deven
>


[GENERAL] Foreign Data Wrapper for remote view?

2015-03-30 Thread Deven Phillips
To prevent multiple foreign table scans during queries, I would like to be
able to create an FDW against a VIEW in a foreign MySQL database. Is this
possible?

Thanks in advance!

Deven
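
With mysql_fdw, for example, the foreign table's table_name option can just
as well name a view on the MySQL side. A sketch, with placeholder connection
values and an assumed column list:

CREATE EXTENSION mysql_fdw;

CREATE SERVER mysql_svr
    FOREIGN DATA WRAPPER mysql_fdw
    OPTIONS (host 'mysql.example.com', port '3306');

CREATE USER MAPPING FOR CURRENT_USER
    SERVER mysql_svr
    OPTIONS (username 'app', password 'secret');

-- table_name points at a MySQL view, not a base table
CREATE FOREIGN TABLE vm_summary (vmid bigint, uuid text)
    SERVER mysql_svr
    OPTIONS (dbname 'inventory', table_name 'vm_summary_view');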


Re: [GENERAL] Slow trigger on insert: alternative solutions?

2015-03-30 Thread Adrian Klaver

On 03/30/2015 06:04 AM, Guillaume Drolet wrote:

Hello,

I need your help speeding up the procedure I will explain below. I am
looking for improvements to my method or different approaches/ideas that
would help in this matter.

I have a set of DBF files that I load into my database using a plpython
function and a call to ogr2ogr (http://www.gdal.org/drv_pg.html). Once
in a while, I'll have to load updated versions of these tables to get
the latest additions and possible corrections to older versions.

In my plpython script, if a table is loaded for the first time, I first
load it empty, then I create a trigger function on insert (execute on
row) that will check for duplicates on each insert. Depending on the
type of data I load, my trigger first checks for equality in a subset of
columns (between 1 and 3 columns that would be like my primary key(s))
and if true, I check if all columns are equal between NEW and the
matching row from my table. When this condition is true, I return null,
else I store rows (i.e. NEW.* and matching row(s)) in a new table called
"duplicate" for further manual investigation. Here's an example for one
table:

CREATE OR REPLACE FUNCTION check_naipf_insert()
  RETURNS trigger AS
' BEGIN
    IF EXISTS (SELECT 1
               FROM public.naipf
               WHERE id_pet_mes IS NOT DISTINCT FROM NEW.id_pet_mes
               AND etage IS NOT DISTINCT FROM NEW.etage) THEN
      IF EXISTS (SELECT 1
                 FROM public.naipf
                 WHERE id_pet_mes IS NOT DISTINCT FROM NEW.id_pet_mes
                 AND etage IS NOT DISTINCT FROM NEW.etage
                 AND type_couv IS NOT DISTINCT FROM NEW.type_couv
                 AND densite IS NOT DISTINCT FROM NEW.densite
                 AND hauteur IS NOT DISTINCT FROM NEW.hauteur
                 AND cl_age IS NOT DISTINCT FROM NEW.cl_age) THEN
        RETURN NULL;
      ELSE
        INSERT INTO public.duplic_naipf SELECT NEW.*;
        INSERT INTO public.duplic_naipf
          (SELECT *
           FROM public.naipf
           WHERE id_pet_mes IS NOT DISTINCT FROM NEW.id_pet_mes
           AND etage IS NOT DISTINCT FROM NEW.etage);
        RETURN NULL;
      END IF;
    END IF;
    RETURN NEW;
  END; '
  LANGUAGE plpgsql VOLATILE COST 100;

CREATE TRIGGER check_insert_naipf
  BEFORE INSERT
  ON public.pet4_naipf
  FOR EACH ROW
  EXECUTE PROCEDURE check_naipf_insert();

(in this case, duplicate rows that need investigation are rows that may
have changed relative to an older version of the DBF file, but that show
no change in what I call their primary keys, although they are not really
PKs since I don't want to raise errors at load time)

Once this is done, ogr2ogr is called a second time to load the data. It
is quite fast for small tables (tens of thousands of rows, tens of
columns), but for large tables it takes forever. For example, I started
loading a table with 3.5 million rows/33 columns last Friday at 3PM and
now, Monday morning at 9AM, some 3 million rows have been loaded.

My question is: what are the other approaches that would make this
procedure faster? How is this kind of task usually implemented in
PostgreSQL? Would it be better to load everything with no check and then
apply some functions to find duplicate rows (although this would involve
more manual work)?


I guess it depends on what the end purpose of the above is. If you are
just trying to keep relatively up-to-date information from the DBF
sources, would it not be easier just to load them into a new table?


So, where the existing table is some_dbf_data:

1) CREATE TABLE new_some_dbf_data(...)
2) Dump the DBF file into new_some_dbf_data
3) In a transaction, rename/drop some_dbf_data and rename
new_some_dbf_data to some_dbf_data (a sketch of this swap follows below)
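
For step 3, the swap itself is a short transaction; a rough sketch (DDL
like this is transactional in Postgres, so readers block briefly and then
see the new table):

BEGIN;
DROP TABLE some_dbf_data;
ALTER TABLE new_some_dbf_data RENAME TO some_dbf_data;
COMMIT;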

Option 2 is what I do for a similar procedure:

1) Dump the DBF data into a holding table.
2) Use SQL in function(s) to compare the old/new tables and make the
appropriate adjustments; a sketch follows after this list. Doing SQL in
bulk is a lot faster than checking each row, or at least that is what I
found. In any case, the way you are doing it looks to involve 3.5 million
inserts with a trigger action on each; that is bound to be slow :)

3) This is automated in an external Python script.
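
As a rough, untested sketch of the bulk comparison in step 2 (assuming the
holding table is called naipf_new, with the key and payload columns taken
from your trigger):

-- divert rows whose "key" exists but whose payload differs
INSERT INTO duplic_naipf
SELECT n.*
FROM naipf_new n
JOIN naipf o
  ON o.id_pet_mes IS NOT DISTINCT FROM n.id_pet_mes
 AND o.etage IS NOT DISTINCT FROM n.etage
WHERE ROW(o.type_couv, o.densite, o.hauteur, o.cl_age)
   IS DISTINCT FROM ROW(n.type_couv, n.densite, n.hauteur, n.cl_age);

-- add rows whose "key" is not present at all
INSERT INTO naipf
SELECT n.*
FROM naipf_new n
WHERE NOT EXISTS (SELECT 1
                  FROM naipf o
                  WHERE o.id_pet_mes IS NOT DISTINCT FROM n.id_pet_mes
                    AND o.etage IS NOT DISTINCT FROM n.etage);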

Option 3

Use the dbf module (https://pypi.python.org/pypi/dbf/0.88.16), do the
comparisons on the DBF files outside Postgres, and only import what has
changed.

Thanks a lot for your help!

--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Extensibility features in PostgreSQL

2015-03-30 Thread Seref Arikan
The Postgres documentation is really excellent.
http://www.postgresql.org/docs/9.1/static/extend.html should get you
started.



On Mon, Mar 30, 2015 at 2:20 PM, Danushka Menikkumbura <
danushka.menikkumb...@gmail.com> wrote:

> Hi all,
>
> I am interested in understanding the extensibility features available in
> PostgreSQL; for instance, what it takes to define a new operator or a new
> index, and in particular what goes into the query optimizer in that case.
>
> Please kindly point me to some useful resources on this.
>
> Thanks and Regards,
> Danushka
>


Re: [GENERAL] Extensibility features in PostgreSQL

2015-03-30 Thread Adrian Klaver

On 03/30/2015 06:20 AM, Danushka Menikkumbura wrote:

Hi all,

I am interested in understanding the extensibility features available
in PostgreSQL; for instance, what it takes to define a new operator or a
new index, and in particular what goes into the query optimizer in that
case.

Please kindly point me to some useful resources on this.


I would start here:

http://www.postgresql.org/docs/9.4/interactive/sql-createoperator.html

http://www.postgresql.org/developer/coding/
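
To give a flavour of what the first link covers, a minimal sketch of a
user-defined operator (the function and operator names are purely
illustrative, not from any existing code):

CREATE FUNCTION add_abs(integer, integer) RETURNS integer AS
  'SELECT abs($1) + abs($2);'
LANGUAGE SQL IMMUTABLE;

CREATE OPERATOR +++ (
    LEFTARG = integer,
    RIGHTARG = integer,
    PROCEDURE = add_abs,
    COMMUTATOR = +++   -- abs(a)+abs(b) commutes, so it is its own commutator
);

-- SELECT (-3) +++ 4;  returns 7

New index types and planner integration are a much bigger job than a new
operator; the developer coding page above is the starting point for that.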


Thanks and Regards,
Danushka



--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to recover or reset the password for the user 'postgres'

2015-03-30 Thread Adrian Klaver

On 03/30/2015 01:09 AM, Arup Rakshit wrote:

Hi,

I am trying to follow what is described under **Setting Up Postgres** 
(https://www.digitalocean.com/community/tutorials/how-to-setup-ruby-on-rails-with-postgres).

But no luck!

[shreyas@myapp(add_postgres_addapter)]$ su - postgres
Password:
su: Sorry
[shreyas@myapp (add_postgres_addapter)]$

It is asking me for the password, but I don't know the password of the user 
`postgres`. How would I create a new role with a password in this case?


How did you install Postgres?



I am using OS X version 10.8.2
  Regards,
Arup Rakshit


--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Extensibility features in PostgreSQL

2015-03-30 Thread Danushka Menikkumbura
Hi all,

I am interested in understanding the extensibility features available in
PostgreSQL; for instance, what it takes to define a new operator or a new
index, and in particular what goes into the query optimizer in that case.

Please kindly point me to some useful resources on this.

Thanks and Regards,
Danushka


[GENERAL] Slow trigger on insert: alternative solutions?

2015-03-30 Thread Guillaume Drolet
Hello,

I need your help speeding up the procedure I will explain below. I am
looking for improvements to my method, or different approaches/ideas that
would help in this matter.

I have a set of DBF files that I load into my database using a plpython
function and a call to ogr2ogr (http://www.gdal.org/drv_pg.html). Once in a
while, I'll have to load updated versions of these tables to get the latest
additions and possible corrections to older versions.

In my plpython script, if a table is loaded for the first time, I first
load it empty, then I create a trigger function on insert (execute on row)
that will check for duplicates on each insert. Depending on the type of
data I load, my trigger first checks for equality in a subset of columns
(between 1 and 3 columns that act like my primary key(s)), and if true, I
check whether all columns are equal between NEW and the matching row from
my table. When this condition is true, I return null; else I store the
rows (i.e. NEW.* and the matching row(s)) in a new table called
"duplicate" for further manual investigation. Here's an example for one
table:

CREATE OR REPLACE FUNCTION check_naipf_insert()
  RETURNS trigger AS
' BEGIN
    IF EXISTS (SELECT 1
               FROM public.naipf
               WHERE id_pet_mes IS NOT DISTINCT FROM NEW.id_pet_mes
               AND etage IS NOT DISTINCT FROM NEW.etage) THEN
      IF EXISTS (SELECT 1
                 FROM public.naipf
                 WHERE id_pet_mes IS NOT DISTINCT FROM NEW.id_pet_mes
                 AND etage IS NOT DISTINCT FROM NEW.etage
                 AND type_couv IS NOT DISTINCT FROM NEW.type_couv
                 AND densite IS NOT DISTINCT FROM NEW.densite
                 AND hauteur IS NOT DISTINCT FROM NEW.hauteur
                 AND cl_age IS NOT DISTINCT FROM NEW.cl_age) THEN
        RETURN NULL;
      ELSE
        INSERT INTO public.duplic_naipf SELECT NEW.*;
        INSERT INTO public.duplic_naipf
          (SELECT *
           FROM public.naipf
           WHERE id_pet_mes IS NOT DISTINCT FROM NEW.id_pet_mes
           AND etage IS NOT DISTINCT FROM NEW.etage);
        RETURN NULL;
      END IF;
    END IF;
    RETURN NEW;
  END; '
  LANGUAGE plpgsql VOLATILE COST 100;

CREATE TRIGGER check_insert_naipf
  BEFORE INSERT
  ON public.pet4_naipf
  FOR EACH ROW
  EXECUTE PROCEDURE check_naipf_insert();

(in this case, duplicate rows that need investigation are rows that may
have changed relative to an older version of the DBF file, but that show
no change in what I call their primary keys, although they are not really
PKs since I don't want to raise errors at load time)

Once this is done, ogr2ogr is called a second time to load the data. It is
quite fast for small tables (tens of thousands of rows, tens of columns),
but for large tables it takes forever. For example, I started loading a
table with 3.5 million rows/33 columns last Friday at 3PM, and now, Monday
morning at 9AM, some 3 million rows have been loaded.

My question is: what are the other approaches that would make this
procedure faster? How is this kind of task usually implemented in
PostgreSQL? Would it be better to load everything with no check and then
apply some functions to find duplicate rows (although this would involve
more manual work)?

Thanks a lot for your help!


Re: [GENERAL] Link Office Word form document with data from PostgreSQL

2015-03-30 Thread Vincent Veyron
On Mon, 30 Mar 2015 10:20:15 +0200
avpro avpro  wrote:

> Have you ever experienced how to populate fields from a MS Office Word
> document with a PostgreSQL data available either in a table or view?
> 
> I haven’t seen anything on the web; only possible with MS products, VS or
> Access.

Hi,

I use Perl for this. The process is:

- unzip the .docx document and look for /word/document.xml in the
resulting directory
- substitute field names with their values in this file (I don't use MS
Word 'fields', though, just placeholders that are unlikely to be found in
documents, such as client_name, client_address, facturation_date, etc.)
- zip the archive anew with a .docx extension

Voilà.

The code is about 40 lines long; I can post it if you care. The same
process also works with Libre/OpenOffice documents.

-- 
Salutations, Vincent Veyron

https://marica.fr/
Gestion des contentieux, des dossiers de sinistres assurance et des contrats 
pour le service juridique


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to recover or reset the password for the user 'postgres'

2015-03-30 Thread John McKown
On Mon, Mar 30, 2015 at 3:09 AM, Arup Rakshit
 wrote:
> Hi,
>
> I am trying to follow what is described under **Setting Up Postgres** 
> (https://www.digitalocean.com/community/tutorials/how-to-setup-ruby-on-rails-with-postgres).
>
> But no luck!
>
> [shreyas@myapp(add_postgres_addapter)]$ su - postgres
> Password:
> su: Sorry
> [shreyas@myapp (add_postgres_addapter)]$
>
> It is asking me for the password, but I don't know the password of the user 
> `postgres`. How would I create a new role with a password in this case?

I'm not a Mac person. But, on Linux, what I do is:

 [shreyas@myapp(add_postgres_addapter)]$ sudo su - #change to root
 [root@myapp(add_postgres_addapter)]# su - postgres # change to postgres
 [postgres@myapp(add_postgres_addapter)]$ #do work as user postgres
 [postgres@myapp(add_postgres_addapter)]$ exit #return to root
 [root@myapp(add_postgres_addapter)]# exit #return to my normal user
 [shreyas@myapp(add_postgres_addapter)]$

Note that the above is just my editing of what I _think_ you _might_
see. The commands are correct. This assumes you have authority to do
the "sudo su -" to change to root. This is how I change to root
because it does not require that I use the root password. Once in
root, you can "su postgres" to change to the postgres user, without a
password.

Another possibility, again using sudo, would be to change the password
of the postgres user like this:

[user@system]$ sudo passwd postgres
Changing password for user postgres.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
[user@system]$



>
> I am using OS X version 10.8.2
>  Regards,
> Arup Rakshit



-- 
If you sent twitter messages while exploring, are you on a textpedition?

He's about as useful as a wax frying pan.

10 to the 12th power microphones = 1 Megaphone

Maranatha! <><
John McKown


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Muti-table join and roll-up aggregate data into nested JSON?

2015-03-30 Thread Deven Phillips
I have a multi-table join which will return 1 row per "interface" and I
would like to aggregate the interfaces as a nested section on the resulting
JSON output. Could someone help me to figure out how to accomplish that?

Example:

SELECT row_to_json(row) AS json
> FROM (
> SELECT
> c.snt_code AS "snt_code",
> vdc.id AS "vdc",
> vm.uuid AS "uuid",
> vm.name AS "name",
> vm.os AS "os",
> vm.service_type AS "service_type",
> vm.template_name AS "template_name",
> vi.virtual_machine_id AS "vmid",
> vi.mac_address AS "mac_address",
> vi."order" AS "sort",
> ip.address AS "ip_address",
> ip.is_reserved AS "is_reserved",
> ip.is_gateway AS "is_gateway",
> vlan.vlan_id AS "vlan",
> false AS "is_backend"
> FROM customer c
> LEFT JOIN virtualdatacenter vdc ON c.id=vdc.customer_id
> LEFT JOIN virtualmachine vm ON vm.virtual_data_center_id=vdc.id
> LEFT JOIN virtualinterface vi ON vm.id=vi.virtual_machine_id
> INNER JOIN ipaddress ip ON vi.id=ip.virtual_interface_id
> INNER JOIN virtuallan vlan ON ip.network_id=vlan.id
> WHERE c.snt_code='abcd' AND vdc.id=111 AND
> vm.uuid='422a141f-5e46-b0f2-53b8-e31070c883ed'
> ) row
>

The output is 2 rows of JSON data, but I would like to roll up those 2 rows
so that the 2 "virtualinterfaces" are in a nested JSON field called
"interfaces"... The only way I have found to accomplish this so far is to
use a function to grab the joined interface data like:

SELECT row_to_json(row) AS json
FROM (
SELECT
c.snt_code AS "snt_code",
vdc.id AS "vdc",
vm.uuid AS "uuid",
vm.name AS "name",
vm.os AS "os",
vm.service_type AS "service_type",
vm.template_name AS "template_name",
(get_vm_with_interfaces(vm.id)) as interfaces
FROM liquorstore_customer c
LEFT JOIN liquorstore_virtualdatacenter vdc ON c.id=vdc.customer_id
LEFT JOIN liquorstore_virtualmachine vm ON vm.virtual_data_center_id=
vdc.id
WHERE c.snt_code='abcd' AND vdc.id=111 AND
vm.uuid='422a141f-5e46-b0f2-53b8-e31070c883ed'
) row


Is there a way to do this in a single join?
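One idea I have been toying with is to move the aggregation into a LATERAL
subquery with json_agg (an untested sketch against the same tables as
above; the remaining vm columns would be listed as before):

SELECT row_to_json(row) AS json
FROM (
    SELECT
        c.snt_code AS "snt_code",
        vdc.id AS "vdc",
        vm.uuid AS "uuid",
        i.interfaces
    FROM customer c
    LEFT JOIN virtualdatacenter vdc ON c.id=vdc.customer_id
    LEFT JOIN virtualmachine vm ON vm.virtual_data_center_id=vdc.id
    LEFT JOIN LATERAL (
        -- json_agg over the whole row turns each interface into an object
        SELECT json_agg(vi) AS interfaces
        FROM virtualinterface vi
        WHERE vi.virtual_machine_id=vm.id
    ) i ON true
    WHERE c.snt_code='abcd' AND vdc.id=111
      AND vm.uuid='422a141f-5e46-b0f2-53b8-e31070c883ed'
) row;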

Thanks in advance!

Deven


Re: [NOVICE] [SQL] [GENERAL] JSON TO POSTGRE TABLE

2015-03-30 Thread Felipe Santos
Hi,
> I want to ask a question about JSON indexes.
> Can I make an index on a field of a JSON column?
> E.g., for the table 'json_tab' you gave, can I run something like "create
> index 'json_index' on json_tab::key"?
>


Yes.

Here is an example:
http://stackoverflow.com/questions/17807030/how-to-create-index-on-json-field-in-postgres-9-3

Also, in 9.4 we got the JSONB data type, which may improve the performance
of JSON fields.
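
For instance, against the json_tab example from earlier in this thread, an
expression index on one key could look like this (note the extra set of
parentheses that expression indexes require; the index name is arbitrary):

=# CREATE INDEX json_tab_key_idx ON json_tab ((data ->> 'key'));
CREATE INDEX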


Re: [GENERAL] [SQL] Link Office Word form document with data from PostgreSQL

2015-03-30 Thread Hector Vass
If you have Windows then yes, something like VBA can pull data into MS Office 
Word; search Google for 'ms word odbc connection'. A Postgres mailing list is 
not the right place for that search.


If you are on *nix and don't have/want Windows & Office products, but want to 
generate/write an MS Office Word document, then the PostgreSQL extension 
plperlu plus something from the CPAN library like Win32::Word::Writer will work.


Hector Vass



+44(0)7773 352 559
*  Metametrics, International House, 107 Gloucester Road,  Malmesbury, 
Wiltshire, SN16 0AJ
*   www.metametrics.co.uk


From: pgsql-sql-ow...@postgresql.org  on behalf 
of avpro avpro 
Sent: 30 March 2015 09:20
To: pgsql-nov...@postgresql.org; pgsql-...@postgresql.org; 
pgsql-general@postgresql.org
Subject: [SQL] Link Office Word form document with data from PostgreSQL


Hi all,

Have you ever worked out how to populate fields in an MS Office Word document 
with PostgreSQL data available either in a table or a view?

I haven't seen anything on the web; it only seems possible with MS products, 
VS or Access.

Thank you for your input


Re: [SQL] [GENERAL] JSON TO POSTGRE TABLE

2015-03-30 Thread Shujie Shang
Hi,
I want to ask a question about JSON indexes.
Can I make an index on a field of a JSON column?
E.g., for the table 'json_tab' you gave, can I run something like "create index
'json_index' on json_tab::key"?

On Mon, Mar 30, 2015 at 6:11 PM, Michael Paquier 
wrote:

>
>
> On Mon, Mar 30, 2015 at 6:54 PM, Kavi  wrote:
>
>>
>> I have sample data as below:
>>
>>
>> {
>> "glossary": {
>> "title": "example glossary",
>>  "GlossDiv": {
>> "title": "S",
>>  "GlossList": {
>> "GlossEntry": {
>> "ID": "SGML",
>>  "SortAs": "SGML",
>>  "GlossTerm": "Standard Generalized 
>> Markup Language",
>>  "Acronym": "SGML",
>>  "Abbrev": "ISO 8879:1986",
>>  "GlossDef": {
>> "para": "A meta-markup language, used to create 
>> markup languages such as DocBook.",
>>  "GlossSeeAlso": ["GML", "XML"]
>> },
>>  "GlossSee": "markup"
>> }
>> }
>> }
>> }
>> }
>>
>> How can I insert this JSON data into Postgres tables?
>>
>> I.e., I want to convert the JSON into SQL fields.
>>
>> This may have been better asked in pgsql-novice I guess...
>
> JSON is a data type:
> http://www.postgresql.org/docs/devel/static/datatype-json.html
> So what you need to do is at least to create a table with a column having
> as data type JSON, and then insert data into it:
> =# CREATE TABLE json_tab (data json);
> CREATE TABLE
> =# INSERT INTO json_tab VALUES ('{"key":"value","key2":"value2"}');
> INSERT 0 1
> =# select * from json_tab ;
>   data
> -
>  {"key":"value","key2":"value2"}
> (1 row)
> --
> Michael
>


Re: [GENERAL] JSON TO POSTGRE TABLE

2015-03-30 Thread Michael Paquier
On Mon, Mar 30, 2015 at 7:11 PM, Michael Paquier 
wrote:

>
>
>
> On Mon, Mar 30, 2015 at 6:54 PM, Kavi  wrote:
>
>>
>> I have sample data as below:
>>
>>
>> {
>> "glossary": {
>> "title": "example glossary",
>>  "GlossDiv": {
>> "title": "S",
>>  "GlossList": {
>> "GlossEntry": {
>> "ID": "SGML",
>>  "SortAs": "SGML",
>>  "GlossTerm": "Standard Generalized 
>> Markup Language",
>>  "Acronym": "SGML",
>>  "Abbrev": "ISO 8879:1986",
>>  "GlossDef": {
>> "para": "A meta-markup language, used to create 
>> markup languages such as DocBook.",
>>  "GlossSeeAlso": ["GML", "XML"]
>> },
>>  "GlossSee": "markup"
>> }
>> }
>> }
>> }
>> }
>>
>> How can I insert this JSON data into Postgres tables?
>>
>> I.e., I want to convert the JSON into SQL fields.
>>
>> This may have been better asked in pgsql-novice I guess...
>

Oh, so it was. You should avoid cross-posting to several lists at the same
time...
-- 
Michael


Re: [GENERAL] JSON TO POSTGRE TABLE

2015-03-30 Thread Michael Paquier
On Mon, Mar 30, 2015 at 6:54 PM, Kavi  wrote:

>
> I have sample data as below:
>
>
> {
> "glossary": {
> "title": "example glossary",
>   "GlossDiv": {
> "title": "S",
>   "GlossList": {
> "GlossEntry": {
> "ID": "SGML",
>   "SortAs": "SGML",
>   "GlossTerm": "Standard Generalized 
> Markup Language",
>   "Acronym": "SGML",
>   "Abbrev": "ISO 8879:1986",
>   "GlossDef": {
> "para": "A meta-markup language, used to create 
> markup languages such as DocBook.",
>   "GlossSeeAlso": ["GML", "XML"]
> },
>   "GlossSee": "markup"
> }
> }
> }
> }
> }
>
> How can I insert this JSON data into Postgres tables?
>
> I.e., I want to convert the JSON into SQL fields.
>
> This may have been better asked in pgsql-novice I guess...

JSON is a data type:
http://www.postgresql.org/docs/devel/static/datatype-json.html
So what you need to do is at least create a table with a column of type
JSON, and then insert data into it:
=# CREATE TABLE json_tab (data json);
CREATE TABLE
=# INSERT INTO json_tab VALUES ('{"key":"value","key2":"value2"}');
INSERT 0 1
=# select * from json_tab ;
              data
---------------------------------
 {"key":"value","key2":"value2"}
(1 row)
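
Individual fields can then be pulled out with the JSON operators from the
documentation page above. For example, assuming the glossary document from
your mail had been inserted into json_tab:

=# SELECT data #>> '{glossary,GlossDiv,title}' AS glossdiv_title FROM json_tab;
 glossdiv_title
----------------
 S
(1 row)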
-- 
Michael


Re: [GENERAL] pgadmin3 installation on Oracle Linux 6.6 64-bit ?

2015-03-30 Thread Alban Hertroys
On 28 March 2015 at 02:14, Yuri Budilov  wrote:

> I am new to PostgreSQL and Linux (coming across from Microsoft SQL Server).
>
> I installed PostgreSQL 9.4 on Oracle Linux 6.6 and its working ok (psql,
> etc).
> Now I try to install pgadmin3 on the same OS.
>
> I am having problems installing pgadmin3 rpms on Oracle Linux 6.6 64-bit.

I'm often somewhat surprised that people expect to have GUI tools on
database servers. Why would you really? You can connect from anywhere
using your OS of choice (or rather, the choice of your employer in
most cases) - it's perfectly fine to run pgadmin3 on Windows to
connect to a PostgreSQL database on Oracle Linux.

And why waste potential DB cycles on a graphical environment? You don't need it.
For most things it's plenty (and frequently a lot more powerful and
reliable) to have a textual user interface through a remote terminal
session over SSH or equivalent. Of course, there is a learning curve
to be able to use the UNIX (or Linux) shell and its tools - start with
'man man' if you're new ;)

That said, of course I agree that it should be possible to install
pgadmin3 on Oracle Linux if you'd want to.
-- 
If you can't see the forest for the trees,
Cut the trees and you'll see there is no forest.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] JSON TO POSTGRE TABLE

2015-03-30 Thread Kavi
I have sample data as below:

{
    "glossary": {
        "title": "example glossary",
        "GlossDiv": {
            "title": "S",
            "GlossList": {
                "GlossEntry": {
                    "ID": "SGML",
                    "SortAs": "SGML",
                    "GlossTerm": "Standard Generalized Markup Language",
                    "Acronym": "SGML",
                    "Abbrev": "ISO 8879:1986",
                    "GlossDef": {
                        "para": "A meta-markup language, used to create markup languages such as DocBook.",
                        "GlossSeeAlso": ["GML", "XML"]
                    },
                    "GlossSee": "markup"
                }
            }
        }
    }
}

How can I insert this JSON data into Postgres tables?

I.e., I want to convert the JSON into SQL fields.


-- 
Thanking You
Regards
Mr.Kavi R. Pachkawade


Re: [GENERAL] Link Office Word form document with data from PostgreSQL

2015-03-30 Thread Tim Clarke
Two options that I know of:
1) Use Java and the Apache POI project
2) Write a file containing the data in some other format, like HTML, then
open that with Word.

Tim Clarke

On 30/03/15 09:20, avpro avpro wrote:
>
> Hi all,
>
> Have you ever worked out how to populate fields in an MS Office Word
> document with PostgreSQL data available either in a table or a view?
>
> I haven’t seen anything on the web; it only seems possible with MS
> products, VS or Access.
>
> Thank you for your input
>



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Link Office Word form document with data from PostgreSQL

2015-03-30 Thread avpro avpro
Hi all,

Have you ever worked out how to populate fields in an MS Office Word
document with PostgreSQL data available either in a table or a view?

I haven’t seen anything on the web; it only seems possible with MS
products, VS or Access.

Thank you for your input


Re: [GENERAL] How to recover or reset the password for the user 'postgres'

2015-03-30 Thread Michael Paquier
On Mon, Mar 30, 2015 at 5:09 PM, Arup Rakshit 
wrote:

> Hi,
>
> I am trying to follow what is described under **Setting Up
> Postgres** (
> https://www.digitalocean.com/community/tutorials/how-to-setup-ruby-on-rails-with-postgres
> ).
>
> But no luck!
>
> [shreyas@myapp(add_postgres_addapter)]$ su - postgres
> Password:
> su: Sorry
> [shreyas@myapp (add_postgres_addapter)]$
>
> It is asking me for the password, but I don't know the password of the
> user `postgres`. How would I create a new role with a password in this
> case?
>

The user whose password has been lost is not the database user postgres,
but the OS user postgres. If you have root access on this server, simply
set its password to a new value.
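
Separately, if the database role's password also needs resetting once you
can connect as a superuser, that is a one-liner in psql:

ALTER ROLE postgres WITH PASSWORD 'new_password';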
-- 
Michael


[GENERAL] How to recover or reset the password for the user 'postgres'

2015-03-30 Thread Arup Rakshit
Hi,

I am trying to follow what is described under **Setting Up Postgres** 
(https://www.digitalocean.com/community/tutorials/how-to-setup-ruby-on-rails-with-postgres).

But no luck!

[shreyas@myapp(add_postgres_addapter)]$ su - postgres
Password:
su: Sorry
[shreyas@myapp (add_postgres_addapter)]$

It is asking me for the password, but I don't know the password of the user 
`postgres`. How would I create a new role with a password in this case?

I am using OS X version 10.8.2
 Regards,
Arup Rakshit


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general