Re: [GENERAL] PostgreSQL 9.5, mysql_fdw, and IMPORT FOREIGN SCHEMA question

2016-01-08 Thread Deven Phillips
OK, that did it! I submitted 2 PRs to the EnterpriseDB/mysql_fdw GitHub
project which should resolve all outstanding issues for me.

https://github.com/EnterpriseDB/mysql_fdw/pull/81

https://github.com/EnterpriseDB/mysql_fdw/pull/82

Isn't it great when Open Source works like it's supposed to!!!

Deven

On Sat, Jan 9, 2016 at 12:06 AM, Deven Phillips 
wrote:

> I think that I may have found the problem. It looks like the mysql_fdw
> uses the following query to gather information about the foreign schema:
>
> SELECT
>  t.TABLE_NAME,
>  c.COLUMN_NAME,
>  CASE
>WHEN c.DATA_TYPE = 'enum' THEN LOWER(CONCAT(c.COLUMN_NAME, '_t'))
>WHEN c.DATA_TYPE = 'tinyint' THEN 'smallint'
>WHEN c.DATA_TYPE = 'mediumint' THEN 'integer'
>WHEN c.DATA_TYPE = 'tinyint unsigned' THEN 'smallint'
>WHEN c.DATA_TYPE = 'smallint unsigned' THEN 'integer'
>WHEN c.DATA_TYPE = 'mediumint unsigned' THEN 'integer'
>WHEN c.DATA_TYPE = 'int unsigned' THEN 'bigint'
>WHEN c.DATA_TYPE = 'bigint unsigned' THEN 'numeric(20)'
>WHEN c.DATA_TYPE = 'double' THEN 'double precision'
>WHEN c.DATA_TYPE = 'float' THEN 'real'
>WHEN c.DATA_TYPE = 'datetime' THEN 'timestamp'
>WHEN c.DATA_TYPE = 'longtext' THEN 'text'
>WHEN c.DATA_TYPE = 'mediumtext' THEN 'text'
>WHEN c.DATA_TYPE = 'blob' THEN 'bytea'
>ELSE c.DATA_TYPE
>  END,
>  c.COLUMN_TYPE,
>  IF(c.IS_NULLABLE = 'NO', 't', 'f'),
>  c.COLUMN_DEFAULT
> FROM
>  information_schema.TABLES AS t
> JOIN
>  information_schema.COLUMNS AS c
> ON
>  t.TABLE_CATALOG = c.TABLE_CATALOG AND t.TABLE_SCHEMA = c.TABLE_SCHEMA AND
> t.TABLE_NAME = c.TABLE_NAME
> WHERE
>  t.TABLE_SCHEMA = '%s'
>
> When I poked around inside of MySQL, I found that the t.TABLE_CATALOG and
> c.TABLE_CATALOG values are NULL. Comparing NULLs in MySQL with an equals
> sign (=) yields NULL, which behaves as FALSE, so the JOIN does not produce
> an actual match. As a result, the query returns 0 tables and 0 columns to be
> imported.
>
> Deven
>
> On Fri, Jan 8, 2016 at 11:50 PM, Deven Phillips 
> wrote:
>
>> Additional details. The MySQL server I am targeting is running
>> version 5.1.73. Perhaps it's too old of a version to support foreign schema
>> import?
>>
>> Deven
>>
>> On Fri, Jan 8, 2016 at 11:45 PM, Deven Phillips wrote:
>>
>>> I DID get a foreign table to work using the following:
>>>
>>> CREATE FOREIGN TABLE customer (
>>> id BIGINT,
>>> name VARCHAR(150),
>>> parent_id BIGINT,
>>> oracle_id BIGINT,
>>> last_updated_time TIMESTAMP,
>>> created_time TIMESTAMP) SERVER mysql OPTIONS (dbname 'mydb', table_name
>>> 'customer');
>>>
>>> And I was subsequently able to query that table from PostgreSQL.
>>>
>>> I tried to add the "OPTIONS" to the IMPORT FOREIGN SCHEMA and got an
>>> error that "dbname" is not a valid parameter.
>>>
>>> Thanks,
>>>
>>> Deven
>>>
>>> On Fri, Jan 8, 2016 at 11:38 PM, Deven Phillips <
>>> deven.phill...@gmail.com> wrote:
>>>
 Apparently not, though I have done so in the past on PostgreSQL 9.4. It
 appears to be related to the "schema" with which the foreign table is
 associated:

 mydb=# CREATE FOREIGN TABLE customer (
 id BIGINT,
 name VARCHAR(150),
 parent_id BIGINT,
 oracle_id BIGINT,
 last_updated_time TIMESTAMP,
 created_time TIMESTAMP) SERVER mysql;
 CREATE FOREIGN TABLE
 mydb=# SELECT * FROM customer;
 ERROR:  failed to prepare the MySQL query:
 Table 'public.customer' doesn't exist

 Any suggestions would be greatly appreciated!

 Deven


 On Fri, Jan 8, 2016 at 10:26 AM, Adrian Klaver <
 adrian.kla...@aklaver.com> wrote:

> On 01/08/2016 07:04 AM, Deven Phillips wrote:
>
>> Hi all,
>>
>>  I installed the newly released PostgreSQL 9.5 this morning and
>> compiled the latest mysql_fdw extension from EnterpriseDB. I was able
>> to
>> create the SERVER and USER MAPPING, but I cannot seem to get IMPORT
>> FOREIGN SCHEMA to do anything. The command executes without error, but
>> none of the table schemas are imported from the MySQL DB. Does anyone
>> have any advice, links, documentation which might be of help?
>>
>
> Can you CREATE FOREIGN TABLE and use it?
>
>
>> Thanks in advance!
>>
>> Deven
>>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


>>>
>>
>


Re: [GENERAL] PostgreSQL 9.5, mysql_fdw, and IMPORT FOREIGN SCHEMA question

2016-01-08 Thread Deven Phillips
I think that I may have found the problem. It looks like the mysql_fdw uses
the following query to gather information about the foreign schema:

SELECT
 t.TABLE_NAME,
 c.COLUMN_NAME,
 CASE
   WHEN c.DATA_TYPE = 'enum' THEN LOWER(CONCAT(c.COLUMN_NAME, '_t'))
   WHEN c.DATA_TYPE = 'tinyint' THEN 'smallint'
   WHEN c.DATA_TYPE = 'mediumint' THEN 'integer'
   WHEN c.DATA_TYPE = 'tinyint unsigned' THEN 'smallint'
   WHEN c.DATA_TYPE = 'smallint unsigned' THEN 'integer'
   WHEN c.DATA_TYPE = 'mediumint unsigned' THEN 'integer'
   WHEN c.DATA_TYPE = 'int unsigned' THEN 'bigint'
   WHEN c.DATA_TYPE = 'bigint unsigned' THEN 'numeric(20)'
   WHEN c.DATA_TYPE = 'double' THEN 'double precision'
   WHEN c.DATA_TYPE = 'float' THEN 'real'
   WHEN c.DATA_TYPE = 'datetime' THEN 'timestamp'
   WHEN c.DATA_TYPE = 'longtext' THEN 'text'
   WHEN c.DATA_TYPE = 'mediumtext' THEN 'text'
   WHEN c.DATA_TYPE = 'blob' THEN 'bytea'
   ELSE c.DATA_TYPE
 END,
 c.COLUMN_TYPE,
 IF(c.IS_NULLABLE = 'NO', 't', 'f'),
 c.COLUMN_DEFAULT
FROM
 information_schema.TABLES AS t
JOIN
 information_schema.COLUMNS AS c
ON
 t.TABLE_CATALOG = c.TABLE_CATALOG AND t.TABLE_SCHEMA = c.TABLE_SCHEMA AND
t.TABLE_NAME = c.TABLE_NAME
WHERE
 t.TABLE_SCHEMA = '%s'

When I poked around inside of MySQL, I found that the t.TABLE_CATALOG and
c.TABLE_CATALOG values are NULL. Comparing NULLs in MySQL with an equals
sign (=) yields NULL, which behaves as FALSE, so the JOIN does not produce
an actual match. As a result, the query returns 0 tables and 0 columns to be
imported.
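
For illustration only (a minimal sketch, not necessarily the exact change made
in the submitted pull requests): plain equality in MySQL never matches NULLs,
whereas the NULL-safe operator <=> does, so a catalog comparison that tolerates
NULLs restores the linkage. The 'mydb' schema name below is illustrative.

SELECT NULL = NULL;    -- yields NULL, which the JOIN treats as "no match"
SELECT NULL <=> NULL;  -- yields 1 (TRUE): the NULL-safe comparison

-- the same information_schema join with a NULL-safe catalog comparison:
SELECT t.TABLE_NAME, c.COLUMN_NAME
FROM information_schema.TABLES AS t
JOIN information_schema.COLUMNS AS c
  ON  t.TABLE_CATALOG <=> c.TABLE_CATALOG
  AND t.TABLE_SCHEMA = c.TABLE_SCHEMA
  AND t.TABLE_NAME = c.TABLE_NAME
WHERE t.TABLE_SCHEMA = 'mydb';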

Deven

On Fri, Jan 8, 2016 at 11:50 PM, Deven Phillips 
wrote:

> Additional details. The MySQL server I am targeting is running
> version 5.1.73. Perhaps it's too old of a version to support foreign schema
> import?
>
> Deven
>
> On Fri, Jan 8, 2016 at 11:45 PM, Deven Phillips 
> wrote:
>
>> I DID get a foreign table to work using the following:
>>
>> CREATE FOREIGN TABLE customer (
>> id BIGINT,
>> name VARCHAR(150),
>> parent_id BIGINT,
>> oracle_id BIGINT,
>> last_updated_time TIMESTAMP,
>> created_time TIMESTAMP) SERVER mysql OPTIONS (dbname 'mydb', table_name
>> 'customer');
>>
>> And I was subsequently able to query that table from PostgreSQL.
>>
>> I tried to add the "OPTIONS" to the IMPORT FOREIGN SCHEMA and got an
>> error that "dbname" is not a valid parameter.
>>
>> Thanks,
>>
>> Deven
>>
>> On Fri, Jan 8, 2016 at 11:38 PM, Deven Phillips wrote:
>>
>>> Apparently not, though I have done so in the past on PostgreSQL 9.4. It
>>> appears to be related to the "schema" with which the foreign table is
>>> associated:
>>>
>>> mydb=# CREATE FOREIGN TABLE customer (
>>> id BIGINT,
>>> name VARCHAR(150),
>>> parent_id BIGINT,
>>> oracle_id BIGINT,
>>> last_updated_time TIMESTAMP,
>>> created_time TIMESTAMP) SERVER mysql;
>>> CREATE FOREIGN TABLE
>>> mydb=# SELECT * FROM customer;
>>> ERROR:  failed to prepare the MySQL query:
>>> Table 'public.customer' doesn't exist
>>>
>>> Any suggestions would be greatly appreciated!
>>>
>>> Deven
>>>
>>>
>>> On Fri, Jan 8, 2016 at 10:26 AM, Adrian Klaver <
>>> adrian.kla...@aklaver.com> wrote:
>>>
 On 01/08/2016 07:04 AM, Deven Phillips wrote:

> Hi all,
>
>  I installed the newly released PostgreSQL 9.5 this morning and
> compiled the latest mysql_fdw extension from EnterpriseDB. I was able
> to
> create the SERVER and USER MAPPING, but I cannot seem to get IMPORT
> FOREIGN SCHEMA to do anything. The command executes without error, but
> none of the table schemas are imported from the MySQL DB. Does anyone
> have any advice, links, documentation which might be of help?
>

 Can you CREATE FOREIGN TABLE and use it?


> Thanks in advance!
>
> Deven
>


 --
 Adrian Klaver
 adrian.kla...@aklaver.com

>>>
>>>
>>
>


Re: [GENERAL] PostgreSQL 9.5, mysql_fdw, and IMPORT FOREIGN SCHEMA question

2016-01-08 Thread Deven Phillips
Additional details. The MySQL server I am targeting is running
version 5.1.73. Perhaps it's too old of a version to support foreign schema
import?

Deven

On Fri, Jan 8, 2016 at 11:45 PM, Deven Phillips 
wrote:

> I DID get a foreign table to work using the following:
>
> CREATE FOREIGN TABLE customer (
> id BIGINT,
> name VARCHAR(150),
> parent_id BIGINT,
> oracle_id BIGINT,
> last_updated_time TIMESTAMP,
> created_time TIMESTAMP) SERVER mysql OPTIONS (dbname 'mydb', table_name
> 'customer');
>
> And I was subsequently able to query that table from PostgreSQL.
>
> I tried to add the "OPTIONS" to the IMPORT FOREIGN SCHEMA and got an error
> that "dbname" is not a valid parameter.
>
> Thanks,
>
> Deven
>
> On Fri, Jan 8, 2016 at 11:38 PM, Deven Phillips 
> wrote:
>
>> Apparently not, though I have done so in the past on PostgreSQL 9.4. It
>> appears to be related to the "schema" with which the foreign table is
>> associated:
>>
>> mydb=# CREATE FOREIGN TABLE customer (
>> id BIGINT,
>> name VARCHAR(150),
>> parent_id BIGINT,
>> oracle_id BIGINT,
>> last_updated_time TIMESTAMP,
>> created_time TIMESTAMP) SERVER mysql;
>> CREATE FOREIGN TABLE
>> mydb=# SELECT * FROM customer;
>> ERROR:  failed to prepare the MySQL query:
>> Table 'public.customer' doesn't exist
>>
>> Any suggestions would be greatly appreciated!
>>
>> Deven
>>
>>
>> On Fri, Jan 8, 2016 at 10:26 AM, Adrian Klaver wrote:
>>
>>> On 01/08/2016 07:04 AM, Deven Phillips wrote:
>>>
 Hi all,

  I installed the newly released PostgreSQL 9.5 this morning and
 compiled the latest mysql_fdw extension from EnterpriseDB. I was able to
 create the SERVER and USER MAPPING, but I cannot seem to get IMPORT
 FOREIGN SCHEMA to do anything. The command executes without error, but
 none of the table schemas are imported from the MySQL DB. Does anyone
 have any advice, links, documentation which might be of help?

>>>
>>> Can you CREATE FOREIGN TABLE and use it?
>>>
>>>
 Thanks in advance!

 Deven

>>>
>>>
>>> --
>>> Adrian Klaver
>>> adrian.kla...@aklaver.com
>>>
>>
>>
>


Re: [GENERAL] PostgreSQL 9.5, mysql_fdw, and IMPORT FOREIGN SCHEMA question

2016-01-08 Thread Deven Phillips
I DID get a foreign table to work using the following:

CREATE FOREIGN TABLE customer (
id BIGINT,
name VARCHAR(150),
parent_id BIGINT,
oracle_id BIGINT,
last_updated_time TIMESTAMP,
created_time TIMESTAMP) SERVER mysql OPTIONS (dbname 'mydb', table_name
'customer');

And I was subsequently able to query that table from PostgreSQL.

I tried to add the "OPTIONS" to the IMPORT FOREIGN SCHEMA and got an error
that "dbname" is not a valid parameter.

Thanks,

Deven

On Fri, Jan 8, 2016 at 11:38 PM, Deven Phillips 
wrote:

> Apparently not, though I have done so in the past on PostgreSQL 9.4. It
> appears to be related to the "schema" with which the foreign table is
> associated:
>
> mydb=# CREATE FOREIGN TABLE customer (
> id BIGINT,
> name VARCHAR(150),
> parent_id BIGINT,
> oracle_id BIGINT,
> last_updated_time TIMESTAMP,
> created_time TIMESTAMP) SERVER mysql;
> CREATE FOREIGN TABLE
> mydb=# SELECT * FROM customer;
> ERROR:  failed to prepare the MySQL query:
> Table 'public.customer' doesn't exist
>
> Any suggestions would be greatly appreciated!
>
> Deven
>
>
> On Fri, Jan 8, 2016 at 10:26 AM, Adrian Klaver 
> wrote:
>
>> On 01/08/2016 07:04 AM, Deven Phillips wrote:
>>
>>> Hi all,
>>>
>>>  I installed the newly released PostgreSQL 9.5 this morning and
>>> compiled the latest mysql_fdw extension from EnterpriseDB. I was able to
>>> create the SERVER and USER MAPPING, but I cannot seem to get IMPORT
>>> FOREIGN SCHEMA to do anything. The command executes without error, but
>>> none of the table schemas are imported from the MySQL DB. Does anyone
>>> have any advice, links, documentation which might be of help?
>>>
>>
>> Can you CREATE FOREIGN TABLE and use it?
>>
>>
>>> Thanks in advance!
>>>
>>> Deven
>>>
>>
>>
>> --
>> Adrian Klaver
>> adrian.kla...@aklaver.com
>>
>
>


[GENERAL] New Slave - timeline ERROR

2016-01-08 Thread drum.lu...@gmail.com
I've started a new SLAVE PostgreSQL server set up.

**NOTE: I ran the pg_basebackup from another STANDBY SERVER, not from the
MASTER.**

1 - screen -t basebackup

2 - su - postgres

3 - cd ~/9.2/data/

4 - ssh postgres@slave01 'pg_basebackup --pgdata=- --format=tar
--label=bb_master --progress --host=localhost --port=5432
--username=replicator --xlog | pv --quiet --rate-limit 100M' | tar -x
--no-same-owner

5 - I've commented the "primary_conninfo =" and "standby_mode=" so the
slave can get the files from WAL_ARCHIVE

6 - After that, I got the following log messages:

postgres(iostreams)[10037]:   2016-01-09 00:07:26.604
UTC|10085|LOG:  database system is ready to accept read only
connections

7 - After the server finished replaying the WAL archive, I turned on replication
from the MASTER in recovery.conf:

*recovery.conf on the New Slave:*

restore_command = 'exec nice -n 19 ionice -c 2 -n 7
../../bin/restore_wal_segment.bash "../wal_archive/%f" "%p"'
archive_cleanup_command = 'exec nice -n 19 ionice -c 2 -n 7
../../bin/pg_archivecleaup_mv.bash -d "../wal_archive" "%r"'
recovery_target_timeline = 'latest'
standby_mode = on
primary_conninfo = 'host=192.168.100.XX port=5432 user=replicator
application_name=replication_slave02'

But, once I've restarted the POSTGRESQL I got this error:

WAL segment `../wal_archive/0005.history` not found
2016-01-09 01:13:39.183 UTC|774|FATAL:  timeline 2 of the primary does not match
recovery target timeline 4

What can I do to solve the problem?

It's really important as it's a production New Slave. Thank you!


Re: [GENERAL] PostgreSQL 9.5, mysql_fdw, and IMPORT FOREIGN SCHEMA question

2016-01-08 Thread Deven Phillips
Apparently not, though I have done so in the past on PostgreSQL 9.4. It
appears to be related to the "schema" with which the foreign table is
associated:

mydb=# CREATE FOREIGN TABLE customer (
id BIGINT,
name VARCHAR(150),
parent_id BIGINT,
oracle_id BIGINT,
last_updated_time TIMESTAMP,
created_time TIMESTAMP) SERVER mysql;
CREATE FOREIGN TABLE
mydb=# SELECT * FROM customer;
ERROR:  failed to prepare the MySQL query:
Table 'public.customer' doesn't exist
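
For context: without per-table options, mysql_fdw apparently looks for a MySQL
table matching the PostgreSQL schema-qualified name ('public.customer' here).
A hedged sketch of attaching the options to an already-created foreign table,
using the same option names as the working CREATE FOREIGN TABLE shown earlier
in this thread:

ALTER FOREIGN TABLE customer
    OPTIONS (ADD dbname 'mydb', ADD table_name 'customer');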

Any suggestions would be greatly appreciated!

Deven


On Fri, Jan 8, 2016 at 10:26 AM, Adrian Klaver 
wrote:

> On 01/08/2016 07:04 AM, Deven Phillips wrote:
>
>> Hi all,
>>
>>  I installed the newly released PostgreSQL 9.5 this morning and
>> compiled the latest mysql_fdw extension from EnterpriseDB. I was able to
>> create the SERVER and USER MAPPING, but I cannot seem to get IMPORT
>> FOREIGN SCHEMA to do anything. The command executes without error, but
>> none of the table schemas are imported from the MySQL DB. Does anyone
>> have any advice, links, documentation which might be of help?
>>
>
> Can you CREATE FOREIGN TABLE and use it?
>
>
>> Thanks in advance!
>>
>> Deven
>>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: [GENERAL] pg_upgrade 9.4 -> 9.5 with pg_trgm fails for me

2016-01-08 Thread Tom Lane
Karsten Hilbert  writes:
> Just a crazy thought:
> If I create a foreign key from *.*.modified_by towards
> dem.staff.db_user but then DISABLE that FK -- would that still
> cue in pg_dump to order the tables appropriately ?

Hmm, probably.  Sounds like a kluge but ...

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_upgrade 9.4 -> 9.5 with pg_trgm fails for me

2016-01-08 Thread Tom Lane
Adrian Klaver  writes:
> On 01/08/2016 01:26 PM, Tom Lane wrote:
>> No, it's just a bug.  Although apparently not many people do that, or
>> we'd have heard complaints before.

> That dredged up a memory from way back:
> http://www.postgresql.org/message-id/200411251906.43881.akla...@comcast.net
> in particular:
> http://www.postgresql.org/message-id/20077.1101510...@sss.pgh.pa.us

Well, that was a long time ago.  Now that we have extensions, it should
be possible for pg_dump to do the right thing with an extension's members
whether they're in pg_catalog or not.

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_upgrade 9.4 -> 9.5 with pg_trgm fails for me

2016-01-08 Thread Karsten Hilbert
On Fri, Jan 08, 2016 at 12:53:24PM -0500, Tom Lane wrote:

> Hm.  So, by having installed this function as a check constraint, you have
> created a data dependency that pg_dump has no way to know about.  It's
> going to load the tables in some order that's chosen without regard to the
> need for dem.staff to be populated first.  This is not a pg_dump bug.
> 
> In general, embedding lookups of other tables into CHECK constraints
> is going to cause you all kinds of grief quite aside from pg_dump
> not understanding it, because the backend doesn't really understand it
> either.  If the other table changes, causing the CHECK expression to
> fail, that will *not* cause anything to happen to the table with the
> CHECK constraint.  It could well be that pg_dump is loading the tables
> in the right order by chance, and the reason you're seeing a failure
> is that one or more rows have modified_by values corresponding to
> people who no longer are in the staff table.
> 
> Can you get rid of dem.staff in favor of something like creating a
> "staff" role and GRANT'ing that to appropriate users?
> 
> Alternatively, maybe you can make the modified_by column be a foreign
> key referencing a table of users (it probably couldn't be defined
> quite like "staff", but you get the idea).  The presence of the foreign
> key would be enough to cue pg_dump about load order.

Just a crazy thought:

If I create a foreign key from *.*.modified_by towards
dem.staff.db_user but then DISABLE that FK -- would that still
cue in pg_dump to order the tables appropriately ?
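
For what it's worth, PostgreSQL has no DISABLE clause for foreign keys; the
closest built-in is NOT VALID, which skips validating existing rows while still
enforcing the constraint for new writes, so it may not behave like a truly
disabled FK. A purely hypothetical sketch (the referencing table name is
illustrative; dem.staff.db_user does carry a UNIQUE constraint, so it can be an
FK target):

ALTER TABLE clin.some_audited_table
    ADD CONSTRAINT some_audited_table_modified_by_fkey
    FOREIGN KEY (modified_by) REFERENCES dem.staff(db_user)
    NOT VALID;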

Thanks,
Karsten
-- 
GPG key ID E4071346 @ eu.pool.sks-keyservers.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] No postgresql-9.5-prefix ?

2016-01-08 Thread Adrian Klaver

On 01/08/2016 03:00 PM, Tim Smith wrote:



On Friday, 8 January 2016, Adrian Klaver  wrote:

On 01/08/2016 07:43 AM, Tim Smith wrote:

Hi,

Many apologies if I missed some announcement anywhere, but there
appears to be no postgresql-9.5-prefix in the Postgres repository ?


Which repo are you talking about?


The one that's referred to on the Postgres website, the apt one, i.e.
http://www.postgresql.org/download/linux/ubuntu/


Well that page just shows examples and from that page:

"The repository contains many different packages including third party 
addons. The most common and important packages are (substitute the 
version number as required): "


I would say the version example has not been updated to 9.5 yet.

Still if you go to the actual repo, say for Trusty:

http://apt.postgresql.org/pub/repos/apt/dists/trusty-pgdg/

You see:

9.5/






Is this a deliberate omission or is it "coming real soon now" ?

Thanks !

Tim




--
Adrian Klaver
adrian.kla...@aklaver.com




--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


SOLVED: Re: [GENERAL] pg_upgrade 9.4 -> 9.5 with pg_trgm fails for me

2016-01-08 Thread Karsten Hilbert
On Fri, Jan 08, 2016 at 04:26:25PM -0500, Tom Lane wrote:

> Karsten Hilbert  writes:
> > On Fri, Jan 08, 2016 at 04:03:51PM -0500, Tom Lane wrote:
> >> BTW, the one-liner fix that I'd had in mind when I wrote that does indeed
> >> fix this particular problem, but after studying the code I realized that
> >> there's a whole bunch of related problems; for instance I believe
> >> pg_upgrade would lose domain constraints on a domain type that's in an
> >> extension installed into pg_catalog :-(.
> 
> > Does this warrant adding a few words to the documentation
> > warning against installing extensions into pg_catalog. ?
> 
> No, it's just a bug.  Although apparently not many people do that, or
> we'd have heard complaints before.

For the record, apart from the aforementioned bug, I can
confirm that pg_upgrade will work fine when pg_trgm is
relocated to another schema (I chose "pgtrgm").

Caveats:

1)
One can't use "pg_trgm" as the schema name - PG will tell us
that the pg_ prefix is reserved for system schemata.

2)
One can't (easily ?) use

alter extension ... set schema ...

to relocate pg_trgm from pg_catalog to some other schema
because PG will inform us that pg_catalog is a system catalog:

gnumed_v21=# alter extension pg_trgm set schema pg_catalog;
ALTER EXTENSION
gnumed_v21=# alter extension pg_trgm set schema pgtrgm;
ERROR:  cannot remove dependency on schema pg_catalog because it is a 
system object
gnumed_v21=#

Relocating from pg_catalog requires a

drop extension ... cascade
create extension ... with schema

cycle, followed by recreating GIN indexes as needed (in my case).
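
A minimal sketch of that cycle, assuming the "pgtrgm" schema chosen above; the
table, column, and index names in the recreation step are purely illustrative:

DROP EXTENSION pg_trgm CASCADE;              -- also drops dependent GIN indexes
CREATE SCHEMA IF NOT EXISTS pgtrgm;
CREATE EXTENSION pg_trgm WITH SCHEMA pgtrgm;

-- recreate whatever the CASCADE dropped, e.g.:
CREATE INDEX idx_some_table_some_col_trgm
    ON some_table
    USING gin (some_col pgtrgm.gin_trgm_ops);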

Karsten
-- 
GPG key ID E4071346 @ eu.pool.sks-keyservers.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] No postgresql-9.5-prefix ?

2016-01-08 Thread Tim Smith
On Friday, 8 January 2016, Adrian Klaver  wrote:

> On 01/08/2016 07:43 AM, Tim Smith wrote:
>
>> Hi,
>>
>> Many apologies if I missed some announcement anywhere, but there
>> appears to be no postgresql-9.5-prefix in the Postgres repository ?
>>
>
> Which repo are you talking about?


The one that's referred to on the Postgres website, the apt one, i.e.
http://www.postgresql.org/download/linux/ubuntu/



>
>
>> Is this a deliberate omission or is it "coming real soon now" ?
>>
>> Thanks !
>>
>> Tim
>>
>>
>>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: [GENERAL] Question -- Session Operations - Feasibility Of Proposed Synchronization Method?

2016-01-08 Thread Steve Petrie, P.Eng.

Andy,

Thanks very much for your response.

No worries about raining on my parade. Your feedback is exactly what I'm 
looking for -- praise is nice, but I really do prefer to have the 
experts throwing rocks at my naive ideas :)


Please see my comments embedded below.

Steve

- Original Message - 
From: "Andy Colson" 
To: "Steve Petrie, P.Eng." ; 


Sent: Thursday, January 07, 2016 10:17 PM
Subject: Re: [GENERAL] Question -- Session Operations - Feasibility Of 
Proposed Synchronization Method?




On 01/07/2016 06:30 PM, Steve Petrie, P.Eng. wrote:
Thanks to forum members for the four helpful replies, to my earlier 
message that initiated this thread.


The replies expressed concerns, with the feasibility of my proposal 
to use postgres tables to store short-lived context data, for dialog 
continuity during website app transient sessions, with visitor 
browsers over modeless HTTP connections.


Hope the four emails I sent in response (5 January 2016), went some 
way to satisfying the concerns expressed.


Here is a list of the issues discussed, in the dialog mentioned 
above:


1. "Session" defined;
2. Avoid row DELETEs;
3. Periodically TRUNCATE each table in a pool of session context 
tables;
4. Embed a session ID key parameter in an HTML "hidden" field 
(optional);
5. Use sequence generators as rapid global iterators controlling 
access to session context tables;






Thanks to forum members for taking the time to read my email.



This feels hugely overcomplicated.


I agree. It is complicated.

But I believe it's the minimum functionality required to both: 1. avoid 
using the  approach for recycling "dead" session context row image storage 
space back to the filesystem, and 2. enable use of the much faster TRUNCATE 
command on an entire "dead" session context table.


I also didn't read most of the last thread, so forgive me if you've 
answered this already:  How many website requests a second (that 
actually need to touch session data) are you expecting?  How much 
space is the session data going to take?  (like, 5 Gig a day?)




Every incoming request to the website for non-static content needs to 
touch (INSERT or SELECT + UPDATE) the corresponding session context row. 
That row is where the transient continuity context for the app session 
dialog gets stored between one request and the next coming 
in from the browser driving that app session.


So session data will be touched by every request that launches an app 
php function, to service the next step in the session dialog with that 
visitor.


But an individual session isn't going to live all that long, from the 
time that its context row gets INSERTed until the time that the session 
"dies" and its context row gets UPDATEd as "dead" in its "status" 
column (the row is never explicitly DELETEd; the entire table in which 
it resides gets TRUNCATEd).


If the website manages to register e.g. 100,000 subscribers in its first 
year, it will be a runaway success. I'm not expecting more than a few 
percent of subscribers to visit on any given day. So if the website 
proves to be a massive winner, there will be maybe 1000 to 5000 
subscriber sessions / day, each session being initiated, conducted and 
then terminated over the time span of a few minutes (rarely more than 30 
minutes).


But I do fear "success disaster" if suddenly, the website (which will 
promote a politically controversial technology concept for preventing 
freeway traffic congestion) gets media coverage in its initial market 
area (the greater Toronto area in the province of Ontario, Canada), and 
among a million+ people watching the 6-o'clock Toronto news, a few 
thousand jump on their smart-phone browsers to hit the website, looking 
to subscribe or send a contact message via web page form.


So I'm looking to build in capacity to handle brief intense bursts of 
session traffic workload. Not anywhere near Google-scale, obviously. But 
maybe to handle a 10-minute burst driving a maximum rate of e.g. 1000 
requests / second to the database server (being mostly a combination of 
an INSERT for each new session row, followed by a few UPDATEs 
to that row, as the session proceeds through its brief "life" towards 
its inevitable "death").


Actual access activity to longer-lived data tables (1. subscriber 
membership table, 2. contact message table) will be orders of magnitude 
lower than activity in the session context tables.


Each visitor session is allowed a "quota" of requests (e.g. 25) so the 
visitor gets 25 chances to e.g. click a "submit" button. There is also a 
session timeout "quota" (e.g. 10 minutes) that will kill the session if 
the visitor waits too long between requests.


So the session context tables in aggregate, do not keep growing and 
growing. And session context data is short-term expendable data. No need 
to log it for recovery. No need for checkpoints, or any other backup 
provisions.


If all active session context data gets lost in a crash, n

"partial" data constraint - trigger or CONSTRAINT ? was: [GENERAL] pg_upgrade 9.4 -> 9.5 with pg_trgm fails for me

2016-01-08 Thread Karsten Hilbert
On Fri, Jan 08, 2016 at 12:53:24PM -0500, Tom Lane wrote:

> In general, embedding lookups of other tables into CHECK constraints
> is going to cause you all kinds of grief quite aside from pg_dump
> not understanding it, because the backend doesn't really understand it
> either.  If the other table changes, causing the CHECK expression to
> fail, that will *not* cause anything to happen to the table with the
> CHECK constraint.  It could well be that pg_dump is loading the tables
> in the right order by chance, and the reason you're seeing a failure
> is that one or more rows have modified_by values corresponding to
> people who no longer are in the staff table.

That has pretty much been the very intent of the constraint function:

Allowing only "postgres", the owner of the database, or
people _currently_ on staff to insert/update table data.

There may well be database accounts which used to be
associated with staff rows but are no longer listed as staff
(because they aren't). There will still be table data
associated with those accounts - their former staff entries
can be gotten from the audit system (that's why dem.staff
itself is being audited).

I realize that being able to foreign key into system tables
would not have helped with the part where only _current_
staff is to insert into/update data tables. That's why I
haven't moaned about it but rather written my own
(misguided?) attempt at enforcing such a constraint.

Would I be better off rewriting the constraint as an ON INSERT
OR UPDATE trigger ?
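
A minimal sketch of what such a trigger might look like, reusing the existing
gm.account_is_dbowner_or_staff() function; the trigger and trigger-function
names are illustrative, and this is not an endorsement of the approach over
the alternatives discussed above:

create or replace function audit.trg_check_modified_by()
returns trigger
language plpgsql
as $$
BEGIN
    -- gm.account_is_dbowner_or_staff() raises on failure, so this check is belt-and-braces
    IF gm.account_is_dbowner_or_staff(NEW.modified_by) IS NOT TRUE THEN
        RAISE EXCEPTION
            'modified_by <%> is neither database owner nor on staff', NEW.modified_by
            USING ERRCODE = 'integrity_constraint_violation';
    END IF;
    RETURN NEW;
END;
$$;

create trigger audit_fields_check_modified_by
    before insert or update on audit.audit_fields
    for each row
    execute procedure audit.trg_check_modified_by();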

Karsten
-- 
GPG key ID E4071346 @ eu.pool.sks-keyservers.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_upgrade 9.4 -> 9.5 with pg_trgm fails for me

2016-01-08 Thread Adrian Klaver
On 01/08/2016 01:26 PM, Tom Lane wrote:
> Karsten Hilbert  writes:
>> On Fri, Jan 08, 2016 at 04:03:51PM -0500, Tom Lane wrote:
>>> BTW, the one-liner fix that I'd had in mind when I wrote that does indeed
>>> fix this particular problem, but after studying the code I realized that
>>> there's a whole bunch of related problems; for instance I believe
>>> pg_upgrade would lose domain constraints on a domain type that's in an
>>> extension installed into pg_catalog :-(.
> 
>> Does this warrant adding a few words to the documentation
>> warning against installing extensions into pg_catalog. ?
> 
> No, it's just a bug.  Although apparently not many people do that, or
> we'd have heard complaints before.

That dredged up a memory from way back:

http://www.postgresql.org/message-id/200411251906.43881.akla...@comcast.net

in particular:

http://www.postgresql.org/message-id/20077.1101510...@sss.pgh.pa.us

> 
>   regards, tom lane
> 
> 


-- 
Adrian Klaver
adrian.kla...@aklaver.com


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_upgrade 9.4 -> 9.5 with pg_trgm fails for me

2016-01-08 Thread Tom Lane
Karsten Hilbert  writes:
> On Fri, Jan 08, 2016 at 04:03:51PM -0500, Tom Lane wrote:
>> BTW, the one-liner fix that I'd had in mind when I wrote that does indeed
>> fix this particular problem, but after studying the code I realized that
>> there's a whole bunch of related problems; for instance I believe
>> pg_upgrade would lose domain constraints on a domain type that's in an
>> extension installed into pg_catalog :-(.

> Does this warrant adding a few words to the documentation
> warning against installing extensions into pg_catalog. ?

No, it's just a bug.  Although apparently not many people do that, or
we'd have heard complaints before.

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_upgrade 9.4 -> 9.5 with pg_trgm fails for me

2016-01-08 Thread Karsten Hilbert
On Fri, Jan 08, 2016 at 04:03:51PM -0500, Tom Lane wrote:

> BTW, the one-liner fix that I'd had in mind when I wrote that does indeed
> fix this particular problem, but after studying the code I realized that
> there's a whole bunch of related problems; for instance I believe
> pg_upgrade would lose domain constraints on a domain type that's in an
> extension installed into pg_catalog :-(.

Does this warrant adding a few words to the documentation
warning against installing extensions into pg_catalog. ?

Karsten
-- 
GPG key ID E4071346 @ eu.pool.sks-keyservers.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_upgrade 9.4 -> 9.5 with pg_trgm fails for me

2016-01-08 Thread Tom Lane
Karsten Hilbert  writes:
> On Fri, Jan 08, 2016 at 12:38:47PM -0500, Tom Lane wrote:
>> After digging through this, I figured out the problem: you'd installed
>> pg_trgm into the pg_catalog schema, whereas when I was testing I'd just
>> dropped it into the public schema.  That confuses pg_dump into not
>> emitting the shell type that it should emit.  It's an easy fix now
>> that I see the problem.
>> This bug does *not*, AFAICT, explain any problem you might have with
>> "dump" transfers, only with pg_upgrade.

> I realize that. Thank you for looking into this issue.

BTW, the one-liner fix that I'd had in mind when I wrote that does indeed
fix this particular problem, but after studying the code I realized that
there's a whole bunch of related problems; for instance I believe
pg_upgrade would lose domain constraints on a domain type that's in an
extension installed into pg_catalog :-(.  See
http://www.postgresql.org/message-id/19767.1452279...@sss.pgh.pa.us

So a fix might take a bit more time than I thought, but hopefully we'll
have something in time for next month's update releases.

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_upgrade 9.4 -> 9.5 with pg_trgm fails for me

2016-01-08 Thread Karsten Hilbert
On Fri, Jan 08, 2016 at 12:38:47PM -0500, Tom Lane wrote:

> After digging through this, I figured out the problem: you'd installed
> pg_trgm into the pg_catalog schema, whereas when I was testing I'd just
> dropped it into the public schema.  That confuses pg_dump into not
> emitting the shell type that it should emit.  It's an easy fix now
> that I see the problem.
> 
> This bug does *not*, AFAICT, explain any problem you might have with
> "dump" transfers, only with pg_upgrade.

I realize that. Thank you for looking into this issue.

I'll rethink the foreign key / staff / check constraint issue meanwhile.

Thanks,
Karsten
-- 
GPG key ID E4071346 @ eu.pool.sks-keyservers.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_upgrade 9.4 -> 9.5 with pg_trgm fails for me

2016-01-08 Thread Karsten Hilbert
On Fri, Jan 08, 2016 at 12:38:47PM -0500, Tom Lane wrote:

>>> dumps of your 9.4 installation's databases.  If you'd be willing to send
>>> those to me off-list, maybe I could figure out what's happening.
> 
>> The list stalled the attachment so here as PM.
> 
> Well, you shouldn't have tried to send it to the list; there's no need
> to memorialize half a megabyte of transient data in the archives.

I hadn't realized that

>>> If you'd be willing to send those to me off-list

was to be understood as "if so you *should* send them *off*-list". Sorry.

Karsten
-- 
GPG key ID E4071346 @ eu.pool.sks-keyservers.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_upgrade 9.4 -> 9.5 with pg_trgm fails for me

2016-01-08 Thread Karsten Hilbert
On Fri, Jan 08, 2016 at 12:53:24PM -0500, Tom Lane wrote:

> >> and here is the function that leads to the schema having a
> >> dependency on table data:
> 
> Hm.  So, by having installed this function as a check constraint, you have
> created a data dependency that pg_dump has no way to know about.  It's
> going to load the tables in some order that's chosen without regard to the
> need for dem.staff to be populated first.  This is not a pg_dump bug.

Yes, I agree.

> In general, embedding lookups of other tables into CHECK constraints
> is going to cause you all kinds of grief quite aside from pg_dump
> not understanding it, because the backend doesn't really understand it
> either.  If the other table changes, causing the CHECK expression to
> fail, that will *not* cause anything to happen to the table with the
> CHECK constraint.  It could well be that pg_dump is loading the tables
> in the right order by chance, and the reason you're seeing a failure
> is that one or more rows have modified_by values corresponding to
> people who no longer are in the staff table.

Not really, but for that I need to provide more information.
The audit.audit_fields table is part of GNUmed's homegrown,
trigger based audit solution:

- tables have audit tables w/o constraints in the audit. schema
- triggers on tables log UPDATEs/DELETEs into the audit tables
- tables being audited (such as dem.staff) INHERIT from audit.audit_fields
- audit.audit_fields is never inserted into directly (only into child tables)
- audit.audit_fields carries the constraint based on gm.is_dbowner_or_staff()

(so, yes, it is even worse: since dem.staff is audited, and
 therefore inherits the check constraint, it depends on itself :-o)


   Table "dem.staff"
Column |   Type   |   
Modifiers   | Storage  | Stats target | 
   Description
---+--+---+--+--+---
 pk_audit  | integer  | not null default 
nextval('audit.audit_fields_pk_audit_seq'::regclass) | plain|  
| 
 row_version   | integer  | not null default 0  
  | plain|  | 
 modified_when | timestamp with time zone | not null default now()  
  | plain|  | 
 modified_by   | name | not null default "current_user"()   
  | plain|  | 
 pk| integer  | not null default 
nextval('dem.staff_pk_seq'::regclass)| plain|  
| 
 fk_identity   | integer  | not null
  | plain|  | 
 db_user   | name | not null default "current_user"()   
  | plain|  | 
 short_alias   | text | not null
  | extended |  | a short signature 
unique to this staff member+
   |  | 
  |  |  |  to be 
used in the GUI, actually this is somewhat+
   |  | 
  |  |  |  
redundant with ext_person_id...
 comment   | text | 
  | extended |  | 
 is_active | boolean  | not null default true   
  | plain|  | 
Indexes:
"staff_pkey" PRIMARY KEY, btree (pk)
"staff_db_user_key" UNIQUE CONSTRAINT, btree (db_user)
"staff_short_alias_key" UNIQUE CONSTRAINT, btree (short_alias)
Foreign-key constraints:
"staff_fk_identity_fkey" FOREIGN KEY (fk_identity) REFERENCES 
dem.identity(pk) ON UPDATE CASCADE ON DELETE CASCADE
Referenced by:
TABLE "bill.bill_item" CONSTRAINT "bill_item_fk_provider_fkey" FOREIGN KEY 
(fk_provider) REFERENCES dem.staff(pk) ON UPDATE CASCADE ON DELETE RESTRICT
TABLE "blobs.doc_obj" CONSTRAINT "doc_obj_fk_intended_reviewer_fkey" 
FOREIGN KEY (fk_intended_reviewer) REFERENCES dem.staff(pk) ON UPDATE CASCADE 
ON DELETE RESTRICT
TABLE "dem.identity" CONSTRAINT "identity_fk_primary_provider_fkey" FOREIGN 
KEY (fk_primary_provider) REFERENCES dem.staff(pk) ON UPDATE CASCADE ON DELETE 
RESTRICT
TABLE "clin.incoming_data_unmatche

Re: [GENERAL] Support for BDR in 9.5?

2016-01-08 Thread Andrew Biggs (adb)
On 1/8/16, 12:51 PM, "Simon Riggs"  wrote:

On 8 January 2016 at 18:56, Joshua D. Drake  wrote:
On 01/08/2016 10:42 AM, Andrew Biggs (adb) wrote:

Installed 9.5 to CentOS7 via yum, and tried going through the BDR
quick-start guide (minus sections 2.1):

http://bdr-project.org/docs/stable/quickstart.html

It was unhappy that BDR binaries were not on the path, and failed at
section 2.4.

Then went back and tried installing the BDR patch (section 2.1) which
finished with this unhappy message:

 configure: error: bdr is not compatible with postgres 9.5

I was just curious if others have had success with this.

Although many of the components of BDR are in 9.5, BDR itself is still 
currently a fork. It can be downloaded from 2ndQuadrant. In theory it will be 
available in 9.6.

BDR 9.4 is currently at v0.9.3. There isn't a BDR 9.5, since we concentrated on 
pglogical.

pglogical works with 9.4 and 9.5 and is currently at v1.0

Future detailed planning for BDR and pglogical is happening now; there 
definitely will be future versions with increasing PostgreSQL compatibility.

Got it.  Thanks guys, very much appreciated!




Re: [GENERAL] Support for BDR in 9.5?

2016-01-08 Thread Simon Riggs
On 8 January 2016 at 18:56, Joshua D. Drake  wrote:

> On 01/08/2016 10:42 AM, Andrew Biggs (adb) wrote:
>
> Installed 9.5 to CentOS7 via yum, and tried going through the BDR
>> quick-start guide (minus sections 2.1):
>>
>> http://bdr-project.org/docs/stable/quickstart.html
>>
>> It was unhappy that BDR binaries were not on the path, and failed at
>> section 2.4.
>>
>> Then went back and tried installing the BDR patch (section 2.1) which
>> finished with this unhappy message:
>>
>>  configure: error: bdr is not compatible with postgres 9.5
>>
>> I was just curious if others have had success with this.
>>
>
> Although many of the components of BDR are in 9.5, BDR itself is still
> currently a fork. It can be downloaded from 2ndQuadrant. In theory it will
> be available in 9.6.
>

BDR 9.4 is currently at v0.9.3. There isn't a BDR 9.5, since we
concentrated on pglogical.

pglogical works with 9.4 and 9.5 and is currently at v1.0

Future detailed planning for BDR and pglogical is happening now; there
definitely will be future versions with increasing PostgreSQL compatibility.

-- 
Simon Riggshttp://www.2ndQuadrant.com/

PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


Re: [GENERAL] Support for BDR in 9.5?

2016-01-08 Thread Andreas Kretschmer
Afaik no, you have to use 9.4.


On 8 January 2016, 18:39:07 CET, "Andrew Biggs (adb)" wrote:
>Can anyone tell me if PostgreSQL 9.5 supports (either natively or by
>extension) the BDR functionality?
>
>I tried it out and ran into issues, but it could well have been I was
>doing something wrong.
>
>Thanks!
>Andrew

-- 
This message was sent from my Android mobile phone with K-9 Mail.

Re: [GENERAL] pg_upgrade 9.4 -> 9.5 with pg_trgm fails for me

2016-01-08 Thread Karsten Hilbert
On Fri, Jan 08, 2016 at 10:45:27AM -0500, Tom Lane wrote:

> > (For what it's worth, I have also tried the --method=dump way 
> > of using Debian's pg_upgradecluster which internally uses a
> > dump/restore cycle rather than calling pg_upgrade. That
> > failed due to ordering problems with table data vs table
> > constraints.)
> 
> That seems like an independent bug.  Can you provide specifics?

Attached the log of

pg_upgradecluster -v 9.5 9.4 main &> pg-upgrade-9_4-9_5-dump_restore.log

and here is the function that leads to the schema having a
dependency on table data:

create or replace function gm.account_is_dbowner_or_staff(_account name)
returns boolean
language plpgsql
as '
DECLARE
_is_owner boolean;
BEGIN
-- is _account member of current db group ?
--  PERFORM 1 FROM pg_auth_members
--  WHERE
--  roleid = (SELECT oid FROM pg_roles WHERE rolname = 
current_database())
--  AND
--  member = (SELECT oid FROM pg_roles WHERE rolname = 
_account)
--  ;
--  IF FOUND THEN
--  -- should catch people on staff, gm-dbo, and postgres
--  RETURN TRUE;
--  END IF;

-- postgres
IF _account = ''postgres'' THEN
RETURN TRUE;
END IF;

-- on staff list
PERFORM 1 FROM dem.staff WHERE db_user = _account;
IF FOUND THEN
RETURN TRUE;
END IF;

-- owner
SELECT pg_catalog.pg_get_userbyid(datdba) = _account INTO 
STRICT _is_owner FROM pg_catalog.pg_database WHERE datname = current_database();
IF _is_owner IS TRUE THEN
RETURN TRUE;
END IF;

-- neither
RAISE EXCEPTION
''gm.account_is_dbowner_or_staff(NAME): <%> is neither 
database owner, nor , nor on staff'', _account
USING ERRCODE = ''integrity_constraint_violation''
;
RETURN FALSE;
END;';

The function is used on audit tables:

alter table audit.audit_fields
drop constraint if exists
audit_audit_fields_sane_modified_by cascade;

alter table audit.audit_fields
add constraint audit_audit_fields_sane_modified_by check
(gm.account_is_dbowner_or_staff(modified_by) IS TRUE)
;

Karsten
-- 
GPG key ID E4071346 @ eu.pool.sks-keyservers.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346
Stopping old cluster...
Disabling connections to the old cluster during upgrade...
Restarting old cluster with restricted connections...
Creating new cluster 9.5/main ...
  config /etc/postgresql/9.5/main
  data   /var/lib/postgresql/9.5/main
  locale de_DE.UTF-8
  port   5433
Disabling connections to the new cluster during upgrade...
Roles, databases, schemas, ACLs...
WARNING:  column "brand" has type "unknown"
DETAIL:  Proceeding with relation creation anyway.
WARNING:  column "atc_brand" has type "unknown"
DETAIL:  Proceeding with relation creation anyway.
WARNING:  column "external_code_brand" has type "unknown"
DETAIL:  Proceeding with relation creation anyway.
WARNING:  column "external_code_type_brand" has type "unknown"
DETAIL:  Proceeding with relation creation anyway.
WARNING:  column "fake_brand" has type "unknown"
DETAIL:  Proceeding with relation creation anyway.
WARNING:  column "pk_brand" has type "unknown"
DETAIL:  Proceeding with relation creation anyway.
WARNING:  column "pk_data_source" has type "unknown"
DETAIL:  Proceeding with relation creation anyway.
WARNING:  column "pk_drug_component" has type "unknown"
DETAIL:  Proceeding with relation creation anyway.
WARNING:  column "src_table" has type "unknown"
DETAIL:  Proceeding with relation creation anyway.
WARNING:  column "brand" has type "unknown"
DETAIL:  Proceeding with relation creation anyway.
WARNING:  column "atc_brand" has type "unknown"
DETAIL:  Proceeding with relation creation anyway.
WARNING:  column "external_code_brand" has type "unknown"
DETAIL:  Proceeding with relation creation anyway.
WARNING:  column "external_code_type_brand" has type "unknown"
DETAIL:  Proceeding with relation creation anyway.
WARNING:  column "fake_brand" has type "unknown"
DETAIL:  Proceeding with relation creation anyway.
WARNING:  column "pk_brand" has type "unknown"
DETAIL:  Proceeding with relation creation anyway.
WARNING:  column "pk_data_source" has type "unknown"
DETAIL:  Proceeding with relation creation anyway.
WARNING:  column "pk_drug_component" has type "unknown"
DETAIL:  Proceeding with relation creation anyway.
Fixing hardcoded library paths for stored procedures...
Upgrading database ort

Re: [GENERAL] Support for BDR in 9.5?

2016-01-08 Thread Roland van Laar


On January 8, 2016 7:42:06 PM GMT+01:00, "Andrew Biggs (adb)"  
wrote:
>On 1/8/16, 10:53 AM, Rob Sargent wrote:
>
>On 01/08/2016 10:39 AM, Andrew Biggs (adb) wrote:
>Can anyone tell me if PostgreSQL 9.5 supports (either natively or by
>extension) the BDR functionality?
>
>I tried it out and ran into issues, but it could well have been I was
>doing something wrong.
>
>Thanks!
>Andrew
>
>I'm sure those who might be able to help you would be interested in
>what, exactly, it was you tried.
>
>Installed 9.5 to CentOS7 via yum, and tried going through the BDR
>quick-start guide (minus sections 2.1):
>
>http://bdr-project.org/docs/stable/quickstart.html
>
>It was unhappy that BDR binaries were not on the path, and failed at
>section 2.4.
>
>Then went back and tried installing the BDR patch (section 2.1) which
>finished with this unhappy message:
>
>configure: error: bdr is not compatible with postgres 9.5
>
>I was just curious if others have had success with this.

BDR is only compatible with 9.4 and needs the BDR-specific build of Postgres.

Roland
>
>Andrew



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Support for BDR in 9.5?

2016-01-08 Thread Joshua D. Drake

On 01/08/2016 10:42 AM, Andrew Biggs (adb) wrote:


Installed 9.5 to CentOS7 via yum, and tried going through the BDR
quick-start guide (minus sections 2.1):

http://bdr-project.org/docs/stable/quickstart.html

It was unhappy that BDR binaries were not on the path, and failed at
section 2.4.

Then went back and tried installing the BDR patch (section 2.1) which
finished with this unhappy message:

 configure: error: bdr is not compatible with postgres 9.5

I was just curious if others have had success with this.


Although many of the components of BDR are in 9.5, BDR itself is still 
currently a fork. It can be downloaded from 2ndQuadrant. In theory it 
will be available in 9.6.


JD




--
Command Prompt, Inc. - http://www.commandprompt.com/  503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Announcing "I'm offended" is basically telling the world you can't
control your own emotions, so everyone else should do it for you.


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Support for BDR in 9.5?

2016-01-08 Thread Andrew Biggs (adb)
On 1/8/16, 10:53 AM, Rob Sargent wrote:

On 01/08/2016 10:39 AM, Andrew Biggs (adb) wrote:
Can anyone tell me if PostgreSQL 9.5 supports (either natively or by extension) 
the BDR functionality?

I tried it out and ran into issues, but it could well have been I was doing 
something wrong.

Thanks!
Andrew

I'm sure those who might be able to help you would be interested in what, 
exactly, it was you tried.

Installed 9.5 to CentOS7 via yum, and tried going through the BDR quick-start 
guide (minus sections 2.1):

http://bdr-project.org/docs/stable/quickstart.html

It was unhappy that BDR binaries were not on the path, and failed at section 
2.4.

Then went back and tried installing the BDR patch (section 2.1) which finished 
with this unhappy message:

configure: error: bdr is not compatible with postgres 9.5

I was just curious if others have had success with this.

Andrew



Re: [GENERAL] Support for BDR in 9.5?

2016-01-08 Thread Rob Sargent



On 01/08/2016 10:39 AM, Andrew Biggs (adb) wrote:
Can anyone tell me if PostgreSQL 9.5 supports (either natively or by 
extension) the BDR functionality?


I tried it out and ran into issues, but it could well have been I was 
doing something wrong.


Thanks!
Andrew


I'm sure those who might be able to help you would be interested in 
what, exactly, it was you tried.





Re: [GENERAL] pg_upgrade 9.4 -> 9.5 with pg_trgm fails for me

2016-01-08 Thread Tom Lane
Karsten Hilbert  writes:
>> and here is the function that leads to the schema having a
>> dependency on table data:

Hm.  So, by having installed this function as a check constraint, you have
created a data dependency that pg_dump has no way to know about.  It's
going to load the tables in some order that's chosen without regard to the
need for dem.staff to be populated first.  This is not a pg_dump bug.

In general, embedding lookups of other tables into CHECK constraints
is going to cause you all kinds of grief quite aside from pg_dump
not understanding it, because the backend doesn't really understand it
either.  If the other table changes, causing the CHECK expression to
fail, that will *not* cause anything to happen to the table with the
CHECK constraint.  It could well be that pg_dump is loading the tables
in the right order by chance, and the reason you're seeing a failure
is that one or more rows have modified_by values corresponding to
people who no longer are in the staff table.

Can you get rid of dem.staff in favor of something like creating a
"staff" role and GRANT'ing that to appropriate users?

Alternatively, maybe you can make the modified_by column be a foreign
key referencing a table of users (it probably couldn't be defined
quite like "staff", but you get the idea).  The presence of the foreign
key would be enough to cue pg_dump about load order.

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Support for BDR in 9.5?

2016-01-08 Thread Andrew Biggs (adb)
Can anyone tell me if PostgreSQL 9.5 supports (either natively or by extension) 
the BDR functionality?

I tried it out and ran into issues, but it could well have been I was doing 
something wrong.

Thanks!
Andrew




Re: [GENERAL] pg_upgrade 9.4 -> 9.5 with pg_trgm fails for me

2016-01-08 Thread Tom Lane
Karsten Hilbert  writes:
> On Fri, Jan 08, 2016 at 11:23:21AM -0500, Tom Lane wrote:
>> A suggestion is to run the pg_upgrade with -r switch, which will leave a
>> litter of files in your working directory.  Some of them will be named
>> like pg_upgrade_dump_NNN.custom and should be custom-format, schema-only
>> dumps of your 9.4 installation's databases.  If you'd be willing to send
>> those to me off-list, maybe I could figure out what's happening.

> The list stalled the attachment so here as PM.

Well, you shouldn't have tried to send it to the list; there's no need
to memorialize half a megabyte of transient data in the archives.

After digging through this, I figured out the problem: you'd installed
pg_trgm into the pg_catalog schema, whereas when I was testing I'd just
dropped it into the public schema.  That confuses pg_dump into not
emitting the shell type that it should emit.  It's an easy fix now
that I see the problem.

This bug does *not*, AFAICT, explain any problem you might have with
"dump" transfers, only with pg_upgrade.

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_upgrade 9.4 -> 9.5 with pg_trgm fails for me

2016-01-08 Thread Karsten Hilbert
Again, as the list software doesn't like "config" at the
start of a line.

Karsten

On Fri, Jan 08, 2016 at 05:34:00PM +0100, Karsten Hilbert wrote:

> > > (For what it's worth, I have also tried the --method=dump way 
> > > of using Debian's pg_upgradecluster which internally uses a
> > > dump/restore cycle rather than calling pg_upgrade. That
> > > failed due to ordering problems with table data vs table
> > > constraints.)
> > 
> > That seems like an independent bug.  Can you provide specifics?
> 
> Attached the log of
> 
>   pg_upgradecluster -v 9.5 9.4 main &> pg-upgrade-9_4-9_5-dump_restore.log
> 
> and here is the function that leads to the schema having a
> dependency on table data:
> 
>   create or replace function gm.account_is_dbowner_or_staff(_account name)
>   returns boolean
>   language plpgsql
>   as '
>   DECLARE
>   _is_owner boolean;
>   BEGIN
>   -- is _account member of current db group ?
>   --  PERFORM 1 FROM pg_auth_members
>   --  WHERE
>   --  roleid = (SELECT oid FROM pg_roles WHERE rolname = 
> current_database())
>   --  AND
>   --  member = (SELECT oid FROM pg_roles WHERE rolname = 
> _account)
>   --  ;
>   --  IF FOUND THEN
>   --  -- should catch people on staff, gm-dbo, and postgres
>   --  RETURN TRUE;
>   --  END IF;
> 
>   -- postgres
>   IF _account = ''postgres'' THEN
>   RETURN TRUE;
>   END IF;
> 
>   -- on staff list
>   PERFORM 1 FROM dem.staff WHERE db_user = _account;
>   IF FOUND THEN
>   RETURN TRUE;
>   END IF;
> 
>   -- owner
>   SELECT pg_catalog.pg_get_userbyid(datdba) = _account INTO 
> STRICT _is_owner FROM pg_catalog.pg_database WHERE datname = 
> current_database();
>   IF _is_owner IS TRUE THEN
>   RETURN TRUE;
>   END IF;
> 
>   -- neither
>   RAISE EXCEPTION
>   ''gm.account_is_dbowner_or_staff(NAME): <%> is neither 
> database owner, nor , nor on staff'', _account
>   USING ERRCODE = ''integrity_constraint_violation''
>   ;
>   RETURN FALSE;
>   END;';
> 
> The function is used on audit tables:
> 
>   alter table audit.audit_fields
>   drop constraint if exists
>   audit_audit_fields_sane_modified_by cascade;
> 
>   alter table audit.audit_fields
>   add constraint audit_audit_fields_sane_modified_by check
>   (gm.account_is_dbowner_or_staff(modified_by) IS TRUE)
>   ;
> 
> Karsten
> -- 
> GPG key ID E4071346 @ eu.pool.sks-keyservers.net
> E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346

-- 
GPG key ID E4071346 @ eu.pool.sks-keyservers.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346
Stopping old cluster...
Disabling connections to the old cluster during upgrade...
Restarting old cluster with restricted connections...
Creating new cluster 9.5/main ...
  _DEFANG_ config /etc/postgresql/9.5/main
  data   /var/lib/postgresql/9.5/main
  locale de_DE.UTF-8
  port   5433
Disabling connections to the new cluster during upgrade...
Roles, databases, schemas, ACLs...
WARNING:  column "brand" has type "unknown"
DETAIL:  Proceeding with relation creation anyway.
WARNING:  column "atc_brand" has type "unknown"
DETAIL:  Proceeding with relation creation anyway.
WARNING:  column "external_code_brand" has type "unknown"
DETAIL:  Proceeding with relation creation anyway.
WARNING:  column "external_code_type_brand" has type "unknown"
DETAIL:  Proceeding with relation creation anyway.
WARNING:  column "fake_brand" has type "unknown"
DETAIL:  Proceeding with relation creation anyway.
WARNING:  column "pk_brand" has type "unknown"
DETAIL:  Proceeding with relation creation anyway.
WARNING:  column "pk_data_source" has type "unknown"
DETAIL:  Proceeding with relation creation anyway.
WARNING:  column "pk_drug_component" has type "unknown"
DETAIL:  Proceeding with relation creation anyway.
WARNING:  column "src_table" has type "unknown"
DETAIL:  Proceeding with relation creation anyway.
WARNING:  column "brand" has type "unknown"
DETAIL:  Proceeding with relation creation anyway.
WARNING:  column "atc_brand" has type "unknown"
DETAIL:  Proceeding with relation creation anyway.
WARNING:  column "external_code_brand" has type "unknown"
DETAIL:  Proceeding with relation creation anyway.
WARNING:  column "external_code_type_brand" has type "unknown"
DETAIL:  Proceeding with relation creation anyway.
WARNING:  column "fake_brand" has type "unknown"
DETAIL:  Proceeding with relation creation anyway.
WARNING:  column "pk_brand" has type "unknown"
DETAIL:  Proceeding with relation creation anyway.
WARNING:  column "p

Re: [GENERAL] COPY FROM STDIN

2016-01-08 Thread Luke Coldiron
> On 1/6/16 9:45 PM, Luke Coldiron wrote:
> > In the example above I'm not sure if I can use some sub struct of the 
> > SPIPlanPtr and hand it off to the DoCopy function as the CopyStmt or 
> > if I need to go about this entirely different. Any advice on the 
> > matter would be much appreciated.
>
> I don't know off-hand. I suggest you look at what psql does to implement
\copy (note the \).
> --
> Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX Experts in
Analytics, Data Architecture and PostgreSQL Data in Trouble? Get it in
Treble! http://BlueTreble.com

I took a look at \copy, but as far as I can tell it uses the frontend (client
side) of PostgreSQL, and I need to be able to do this on the backend (server
side). I don't see a way with it, either, to attach to the connection that
called the C function and execute the COPY statement. DoCopy in
commands/copy.h appears to me to be the server-side COPY command, and I think
I can get it to work if I can figure out how to attach to the connection of
the calling function and pass in the CopyStmt variable.




Re: [GENERAL] pg_upgrade 9.4 -> 9.5 with pg_trgm fails for me

2016-01-08 Thread Tom Lane
I wrote:
> A suggestion is to run the pg_upgrade with -r switch, which will leave a
> litter of files in your working directory.  Some of them will be named
> like pg_upgrade_dump_NNN.custom and should be custom-format, schema-only
> dumps of your 9.4 installation's databases.  If you'd be willing to send
> those to me off-list, maybe I could figure out what's happening.

> It occurs to me that this might actually be related to the issue you
> saw in "dump" mode --- if there's some unresolved circular dependency,
> it could cause pg_dump to dump things in an unexpected order, which
> could possibly explain the message we're seeing.  But that's just a
> guess.

BTW, there will also be .log files, which might contain useful information
as well, especially if any of it is bleats from pg_dump about being unable
to break a circular dependency.

regards, tom lane




Re: [GENERAL] pg_upgrade 9.4 -> 9.5 with pg_trgm fails for me

2016-01-08 Thread Tom Lane
Karsten Hilbert  writes:
> On Fri, Jan 08, 2016 at 11:12:09AM -0500, Tom Lane wrote:
>> Hm.  I just tried running a pg_upgrade here on a 9.4 database containing
>> pg_trgm 1.1, and didn't see any particular problem, so there's some
>> additional factor needed to cause your result.  Hard to tell what.
>> Can you think of anything unusual about the history of your installation?

> No, other than that that cluster has been upgraded all the
> way from, I think, 8.4 over several Debian releases ;)

A suggestion is to run the pg_upgrade with -r switch, which will leave a
litter of files in your working directory.  Some of them will be named
like pg_upgrade_dump_NNN.custom and should be custom-format, schema-only
dumps of your 9.4 installation's databases.  If you'd be willing to send
those to me off-list, maybe I could figure out what's happening.
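
If you want to poke at one of those files yourself first, pg_restore can list
the TOC of a custom-format dump; for example, with the file name from your log:

    pg_restore --list pg_upgrade_dump_512600.custom | less
    pg_restore --list pg_upgrade_dump_512600.custom | grep -i 'gtrgm'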

It occurs to me that this might actually be related to the issue you
saw in "dump" mode --- if there's some unresolved circular dependency,
it could cause pg_dump to dump things in an unexpected order, which
could possibly explain the message we're seeing.  But that's just a
guess.

regards, tom lane




Re: [GENERAL] pg_upgrade 9.4 -> 9.5 with pg_trgm fails for me

2016-01-08 Thread Karsten Hilbert
On Fri, Jan 08, 2016 at 11:12:09AM -0500, Tom Lane wrote:

> > Sure:
> > (pg_trgm,1.1,"text similarity measurement and index searching based on 
> > trigrams")
> 
> Hm.  I just tried running a pg_upgrade here on a 9.4 database containing
> pg_trgm 1.1, and didn't see any particular problem, so there's some
> additional factor needed to cause your result.  Hard to tell what.
> Can you think of anything unusual about the history of your installation?

No, other than that that cluster has been upgraded all the
way from, I think, 8.4 over several Debian releases ;)

Karsten
-- 
GPG key ID E4071346 @ eu.pool.sks-keyservers.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346




Re: [GENERAL] Packages for Ubuntu Wily

2016-01-08 Thread Adrian Klaver

On 01/08/2016 08:02 AM, Henning Hoefer wrote:

2016-01-08 16:35 GMT+01:00 Adrian Klaver :


Well the mistake I often make is to do apt-get upgrade before doing apt-get 
update. I would suggest doing an update just to make sure.



2016-01-08 16:50 GMT+01:00 Joshua D. Drake :

Looks like you may have missed this step:

sudo apt-get install wget ca-certificates
wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo
apt-key add -
sudo apt-get update



Thanks to both of you, but I already did that -- and like I said: For
some packages, the updates do in fact show up correctly.
It's just missing a few more packages.

If you compare the package list for trusty-pgdg[1] with the one
for wily-pgdg[2], you'll find that the former is five times the size
of the latter -- and that the latter is missing pgdg-keyring and
pgadmin3 (among others).


What is not obvious to me is whether you actually tried an upgrade or 
just saw the package list and stopped at that point?


Re.pgAdmin3:

http://pgadmin.org/

2016-01-08 - pgAdmin v1.22.0 available

pgAdmin 1.22 has been released, offering new features, bug fixes and 
support for PostgreSQL 9.5. ...


So the 9.5-compatible version may not have made its way into the repos yet.



Kind Regards,

Henning.


[1]: 
http://apt.postgresql.org/pub/repos/apt/dists/trusty-pgdg/main/binary-amd64/Packages
[2]: 
http://apt.postgresql.org/pub/repos/apt/dists/wily-pgdg/main/binary-amd64/Packages


--
Henning Hoefer
Software Engineer

Device Insight GmbH, Willy-Brandt-Platz 6, D-81829 München
http://www.device-insight.com

Sitz der Gesellschaft: München
Registergericht: Amtsgericht München HRB 149018
Geschäftsführer: Reinhold Stammeier





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] pg_upgrade 9.4 -> 9.5 with pg_trgm fails for me

2016-01-08 Thread Tom Lane
Karsten Hilbert  writes:
> On Fri, Jan 08, 2016 at 10:45:27AM -0500, Tom Lane wrote:
>> Just for completeness, can you tell us which pg_trgm version (1.0
>> or 1.1) is installed in the 9.4 database?

> Sure:
>   (pg_trgm,1.1,"text similarity measurement and index searching based on 
> trigrams")

Hm.  I just tried running a pg_upgrade here on a 9.4 database containing
pg_trgm 1.1, and didn't see any particular problem, so there's some
additional factor needed to cause your result.  Hard to tell what.
Can you think of anything unusual about the history of your installation?

regards, tom lane




Re: [GENERAL] Packages for Ubuntu Wily

2016-01-08 Thread Henning Hoefer
2016-01-08 16:35 GMT+01:00 Adrian Klaver :
>
> Well the mistake I often make is to do apt-get upgrade before doing apt-get 
> update. I would suggest doing an update just to make sure.


2016-01-08 16:50 GMT+01:00 Joshua D. Drake :
> Looks like you may have missed this step:
>
> sudo apt-get install wget ca-certificates
> wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo
> apt-key add -
> sudo apt-get update


Thanks to both of you, but I already did that -- and like I said: For
some packages, the updates do in fact show up correctly.
It's just missing a few more packages.

If you compare the package list for trusty-pgdg[1] with the one
for wily-pgdg[2], you'll find that the former is five times the size
of the latter -- and that the latter is missing pgdg-keyring and
pgadmin3 (among others).

Kind Regards,

Henning.


[1]: 
http://apt.postgresql.org/pub/repos/apt/dists/trusty-pgdg/main/binary-amd64/Packages
[2]: 
http://apt.postgresql.org/pub/repos/apt/dists/wily-pgdg/main/binary-amd64/Packages


--
Henning Hoefer
Software Engineer

Device Insight GmbH, Willy-Brandt-Platz 6, D-81829 München
http://www.device-insight.com

Sitz der Gesellschaft: München
Registergericht: Amtsgericht München HRB 149018
Geschäftsführer: Reinhold Stammeier




Re: [GENERAL] pg_upgrade 9.4 -> 9.5 with pg_trgm fails for me

2016-01-08 Thread Karsten Hilbert
On Fri, Jan 08, 2016 at 10:45:27AM -0500, Tom Lane wrote:

> Karsten Hilbert  writes:
> > [ pg_upgrade failed on pg_trgm ]
> 
> Just for completeness, can you tell us which pg_trgm version (1.0
> or 1.1) is installed in the 9.4 database?

Sure:

(pg_trgm,1.1,"text similarity measurement and index searching based on 
trigrams")

taken from:

                        pg_available_extensions
 ----------------------------------------------------------------------
 (moddatetime,1.0,"functions for tracking last modification time")
 (pg_freespacemap,1.0,"examine the free space map (FSM)")
 (earthdistance,1.0,"calculate great-circle distances on the surface of 
the Earth")
 (test_shm_mq,1.0,"Test code for shared memory message queues")
 (tablefunc,1.0,"functions that manipulate whole tables, including 
crosstab")
 (uuid-ossp,1.0,"generate universally unique identifiers (UUIDs)")
 (pageinspect,1.2,"inspect the contents of database pages at a low 
level")
 (isn,1.0,"data types for international product numbering standards")
 (pgrowlocks,1.1,"show row-level locking information")
 (pgagent,3.4,"A PostgreSQL job scheduler")
 (tcn,1.0,"Triggered change notifications")
 (unaccent,1.0,"text search dictionary that removes accents")
 (pg_stat_statements,1.2,"track execution statistics of all SQL 
statements executed")
 (dblink,1.1,"connect to other PostgreSQL databases from within a 
database")
 (insert_username,1.0,"functions for tracking who changed a table")
 (fuzzystrmatch,1.0,"determine similarities and distance between 
strings")
 (pg_buffercache,1.0,"examine the shared buffer cache")
 (timetravel,1.0,"functions for implementing time travel")
 (cube,1.0,"data type for multidimensional cubes")
 (pg_trgm,1.1,"text similarity measurement and index searching based on 
trigrams")
 (dict_int,1.0,"text search dictionary template for integers")
 (xml2,1.0,"XPath querying and XSLT")
 (sslinfo,1.0,"information about SSL certificates")
 (btree_gin,1.0,"support for indexing common datatypes in GIN")
 (btree_gist,1.0,"support for indexing common datatypes in GiST")
 (tsearch2,1.0,"compatibility package for pre-8.3 text search 
functions")
 (test_parser,1.0,"example of a custom parser for full-text search")
 (seg,1.0,"data type for representing line segments or floating-point 
intervals")
 (citext,1.0,"data type for case-insensitive character strings")
 (intarray,1.0,"functions, operators, and index support for 1-D arrays 
of integers")
 (worker_spi,1.0,"Sample background worker")
 (file_fdw,1.0,"foreign-data wrapper for flat file access")
 (dict_xsyn,1.0,"text search dictionary template for extended synonym 
processing")
 (intagg,1.0,"integer aggregator and enumerator (obsolete)")
 (pgstattuple,1.2,"show tuple-level statistics")
 (autoinc,1.0,"functions for autoincrementing fields")
 (pg_prewarm,1.0,"prewarm relation data")
 (chkpass,1.0,"data type for auto-encrypted passwords")
 (pgcrypto,1.1,"cryptographic functions")
 (plpgsql,1.0,"PL/pgSQL procedural language")
 (postgres_fdw,1.0,"foreign-data wrapper for remote PostgreSQL servers")
 (adminpack,1.0,"administrative functions for PostgreSQL")
 (hstore,1.3,"data type for storing sets of (key, value) pairs")
 (ltree,1.0,"data type for hierarchical tree-like structures")
 (lo,1.0,"Large Object maintenance")
 (refint,1.0,"functions for implementing referential integrity 
(obsolete)")
(46 rows)

Diving into the postgresql-contrib-9.5 package shows that it
seems to install the same version (1.1, that is).

4c5dc5fb5743dd4534cc0ad082c075d8  
usr/share/postgresql/9.5/extension/pg_trgm--1.0--1.1.sql
5222fd4cbbc5049b8e1bc64817443d7b  
usr/share/postgresql/9.5/extension/pg_trgm--1.1.sql
aceed02fc9730e6d34000869e6dfa308  
usr/share/postgresql/9.5/extension/pg_trgm--unpackaged--1.0.sql
f81af8d3825cb3a1762b9a27d0899b38  
usr/share/postgresql/9.5/extension/pg_trgm.control

Karsten
-- 
GPG key ID E4071346 @ eu.pool.sks-keyservers.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346




Re: [GENERAL] pg_upgrade 9.4 -> 9.5 with pg_trgm fails for me

2016-01-08 Thread Karsten Hilbert
On Fri, Jan 08, 2016 at 10:45:27AM -0500, Tom Lane wrote:

> Karsten Hilbert  writes:

> > (For what it's worth, I have also tried the --method=dump way 
> > of using Debian's pg_upgradecluster which internally uses a
> > dump/restore cycle rather than calling pg_upgrade. That
> > failed due to ordering problems with table data vs table
> > constraints.)
> 
> That seems like an independent bug.  Can you provide specifics?

I will, please bear with me as I'll have to rerun the upgrade
to get logs.

Thanks,
Karsten
-- 
GPG key ID E4071346 @ eu.pool.sks-keyservers.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346




Re: [GENERAL] Packages for Ubuntu Wily

2016-01-08 Thread Joshua D. Drake

On 01/08/2016 07:35 AM, Adrian Klaver wrote:


depend on a package pgdg-keyring, and that package doesn't seem to be
available in the "wily-pgdg" distribution in the apt repo.
Also, pgadmin3 seems to be missing in that distribution.

Am I doing something wrong? Or is this a bug in the packaging?




Looks like you may have missed this step:

sudo apt-get install wget ca-certificates
wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | 
sudo apt-key add -

sudo apt-get update






--
Command Prompt, Inc. - http://www.commandprompt.com/  503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Announcing "I'm offended" is basically telling the world you can't
control your own emotions, so everyone else should do it for you.




Re: [GENERAL] pg_upgrade 9.4 -> 9.5 with pg_trgm fails for me

2016-01-08 Thread Adrian Klaver

On 01/08/2016 07:41 AM, Karsten Hilbert wrote:

On Fri, Jan 08, 2016 at 07:18:24AM -0800, Adrian Klaver wrote:


http://www.postgresql.org/docs/9.5/interactive/pgupgrade.html

"If an error occurs while restoring the database schema, pg_upgrade will
exit and you will have to revert to the old cluster as outlined in step 16
below.


Thanks for pointing this out. Debian made it fairly easy to
pick up the old cluster (they provide quite nice wrappers). I
am firmly seated on 9.4 again, which in itself has been doing
excellent duty.


To try pg_upgrade again, you will need to modify the old cluster so
the pg_upgrade schema restore succeeds. If the problem is a contrib module,
you might need to uninstall the contrib module from the old cluster and
install it in the new cluster after the upgrade, assuming the module is not
being used to store user data."


I am, indeed, using pg_trgm for an index on patients' names
so I will likely have to DROP / CREATE EXTENSION for
upgrading the cluster, and re-create the index after the
upgrade :-(


I do not use pg_trgm, so I have not had occasion to upgrade it. Maybe 
someone who has can provide a better method.




No problem for me but will need meticulous documentation and
instructions to end users (GPs, physical therapists ... ;-)

Thanks for answering,
Karsten




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] No postgresql-9.5-prefix ?

2016-01-08 Thread Adrian Klaver

On 01/08/2016 07:43 AM, Tim Smith wrote:

Hi,

Many apologies if I missed some announcement anywhere, but there
appears to be no postgresql-9.5-prefix in the Postgres repository ?


Which repo are you talking about?



Is this a deliberate omission or is it "coming real soon now" ?

Thanks !

Tim





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] pg_upgrade 9.4 -> 9.5 with pg_trgm fails for me

2016-01-08 Thread Tom Lane
Karsten Hilbert  writes:
> [ pg_upgrade failed on pg_trgm ]

Just for completeness, can you tell us which pg_trgm version (1.0
or 1.1) is installed in the 9.4 database?
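
For instance, something along these lines reports the installed (as opposed
to merely available) version:

    SELECT extname, extversion
    FROM pg_extension
    WHERE extname = 'pg_trgm';
    -- or, in psql:  \dx pg_trgm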

> (For what it's worth, I have also tried the --method=dump way 
> of using Debian's pg_upgradecluster which internally uses a
> dump/restore cycle rather than calling pg_upgrade. That
> failed due to ordering problems with table data vs table
> constraints.)

That seems like an independent bug.  Can you provide specifics?

regards, tom lane




Re: [GENERAL] pg_upgrade 9.4 -> 9.5 with pg_trgm fails for me

2016-01-08 Thread Karsten Hilbert
On Fri, Jan 08, 2016 at 07:41:09AM -0800, Adrian Klaver wrote:

> >>I thought --method=dump was the default, so this:
> >>
> >>pg_upgradecluster -v 9.5 9.4 main
> >>
> >>was using that?
> >
> >True enough. I did specify the "-m upgrade" though, as
> >witnessed by the log snippet.
> 
> Alright then. Just trying to match the output with the command and quiet the
> nagging voice in the head:)

For completeness, here's the actual command run

pg_upgradecluster -m upgrade -v 9.5 9.4 main &> pg-upgrade-9_4-9_5.log

(not that that would help along any, I suppose)

Karsten
-- 
GPG key ID E4071346 @ eu.pool.sks-keyservers.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346




Re: [GENERAL] pg_upgrade 9.4 -> 9.5 with pg_trgm fails for me

2016-01-08 Thread Karsten Hilbert
On Fri, Jan 08, 2016 at 07:41:09AM -0800, Adrian Klaver wrote:

> >On Fri, Jan 08, 2016 at 07:18:24AM -0800, Adrian Klaver wrote:
> >
> >>I thought --method=dump was the default, so this:
> >>
> >>pg_upgradecluster -v 9.5 9.4 main
> >>
> >>was using that?
> >
> >True enough. I did specify the "-m upgrade" though, as
> >witnessed by the log snippet.
> 
> Alright then. Just trying to match the output with the command and quiet the
> nagging voice in the head:)

Absolutely. I should have paid that extra second of
*re*-checking before I sent the initial question. Sorry.

Karsten
-- 
GPG key ID E4071346 @ eu.pool.sks-keyservers.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346




[GENERAL] No postgresql-9.5-prefix ?

2016-01-08 Thread Tim Smith
Hi,

Many apologies if I missed some announcement anywhere, but there
appears to be no postgresql-9.5-prefix in the Postgres repository ?

Is this a deliberate omission or is it "coming real soon now" ?

Thanks !

Tim




Re: [GENERAL] pg_upgrade 9.4 -> 9.5 with pg_trgm fails for me

2016-01-08 Thread Adrian Klaver

On 01/08/2016 07:28 AM, Karsten Hilbert wrote:
Ccing list

On Fri, Jan 08, 2016 at 07:18:24AM -0800, Adrian Klaver wrote:


I thought --method=dump was the default, so this:

pg_upgradecluster -v 9.5 9.4 main

was using that?


True enough. I did specify the "-m upgrade" though, as
witnessed by the log snippet.


Alright then. Just trying to match the output with the command and quiet 
the nagging voice in the head:)




Karsten




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] pg_upgrade 9.4 -> 9.5 with pg_trgm fails for me

2016-01-08 Thread Karsten Hilbert
On Fri, Jan 08, 2016 at 07:18:24AM -0800, Adrian Klaver wrote:

> http://www.postgresql.org/docs/9.5/interactive/pgupgrade.html
> 
> "If an error occurs while restoring the database schema, pg_upgrade will
> exit and you will have to revert to the old cluster as outlined in step 16
> below.

Thanks for pointing this out. Debian made it fairly easy to
pick up the old cluster (they provide quite nice wrappers). I
am firmly seated on 9.4 again, which in itself has been doing
excellent duty.

> To try pg_upgrade again, you will need to modify the old cluster so
> the pg_upgrade schema restore succeeds. If the problem is a contrib module,
> you might need to uninstall the contrib module from the old cluster and
> install it in the new cluster after the upgrade, assuming the module is not
> being used to store user data."

I am, indeed, using pg_trgm for an index on patients' names
so I will likely have to DROP / CREATE EXTENSION for
upgrading the cluster, and re-create the index after the
upgrade :-(
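
Roughly along these lines, I suppose (the index definition below is made up;
the real one lives in the GNUmed schema):

    -- on the old cluster, before upgrading:
    DROP EXTENSION pg_trgm CASCADE;   -- CASCADE also drops the dependent trigram index

    -- on the new cluster, afterwards:
    CREATE EXTENSION pg_trgm;
    CREATE INDEX idx_names_trgm ON dem.names        -- hypothetical table/column names
        USING gin (lastnames gin_trgm_ops);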

No problem for me but will need meticulous documentation and
instructions to end users (GPs, physical therapists ... ;-)

Thanks for answering,
Karsten
-- 
GPG key ID E4071346 @ eu.pool.sks-keyservers.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346




Re: [GENERAL] Packages for Ubuntu Wily

2016-01-08 Thread Adrian Klaver

On 01/08/2016 06:12 AM, Henning Hoefer wrote:

Hi,

I'm trying to install 9.5 on Ubuntu Wily.

I've added a line "deb http://apt.postgresql.org/pub/repos/apt/
wily-pgdg main
​"​ to my
sources.list
and upgrades to various
postgresql-*
packages show up as available – however, I can't actually install all of
them:

postgresql-common
and
postgresql-client-common
depend on a package
pgdg-keyring
, and that package doesn't seem to be available in the "
wily-pgdg
" distribution in the apt repo.
Also, pgadmin3 seems to be missing in that distribution.

Am I doing something wrong? Or is this a bug in the packaging?


Well the mistake I often make is to do apt-get upgrade before doing 
apt-get update. I would suggest doing an update just to make sure.
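
Something like this would also show which repository, if any, currently
offers the packages in question (package names taken from this thread):

    sudo apt-get update
    apt-cache policy pgdg-keyring pgadmin3 postgresql-9.5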





Kind regards,

Henning.



--
Henning Hoefer
Software Engineer

Device Insight GmbH, Willy-Brandt-Platz 6, D-81829 München
http://www.device-insight.com

Sitz der Gesellschaft: München
Registergericht: Amtsgericht München HRB 149018
Geschäftsführer: Reinhold Stammeier



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] PostgreSQL 9.5, mysql_fdw, and IMPORT FOREIGN SCHEMA question

2016-01-08 Thread Adrian Klaver

On 01/08/2016 07:04 AM, Deven Phillips wrote:

Hi all,

 I installed the newly released PostgreSQL 9.5 this morning and
compiled the latest mysql_fdw extension from EnterpriseDB. I was able to
create the SERVER and USER MAPPING, but I cannot seem to get IMPORT
FOREIGN SCHEMA to do anything. The command executes without error, but
none of the table schemas are imported from the MySQL DB. Does anyone
have any advice, links, documentation which might be of help?


Can you CREATE FOREIGN TABLE and use it?



Thanks in advance!

Deven



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] pg_upgrade 9.4 -> 9.5 with pg_trgm fails for me

2016-01-08 Thread Adrian Klaver

On 01/08/2016 03:45 AM, Karsten Hilbert wrote:

Hi,

I have attempted a pg_upgrade on Debian using the Debian
wrapper scripts like so:

pg_upgradecluster -v 9.5 9.4 main

(meaning to upgrade a cluster named "main" from 9.4 to 9.5)

which resulted in this:

-
  pg_upgrade run on Fri Jan  8 11:47:32 2016
-

Performing Consistency Checks
-
Checking cluster versions   ok
Checking database user is the install user  ok
Checking database connection settings   ok
Checking for prepared transactions  ok
Checking for reg* system OID user data typesok
Checking for contrib/isn with bigint-passing mismatch   ok
Creating dump of global objects ok
Creating dump of database schemas
ok
Checking for presence of required libraries ok
Checking database user is the install user  ok
Checking for prepared transactions  ok

If pg_upgrade fails after this point, you must re-initdb the
new cluster before continuing.

Performing Upgrade
--
Analyzing all rows in the new cluster   ok
Freezing all rows on the new clusterok
Deleting files from new pg_clog ok
Copying old pg_clog to new server   ok
Setting next transaction ID and epoch for new cluster   ok
Deleting files from new pg_multixact/offsetsok
Copying old pg_multixact/offsets to new server  ok
Deleting files from new pg_multixact/membersok
Copying old pg_multixact/members to new server  ok
Setting next multixact ID and offset for new clusterok
Resetting WAL archives  ok
Setting frozenxid and minmxid counters in new cluster   ok
Restoring global objects in the new cluster ok
Restoring database schemas in the new cluster

*failure*
Consult the last few lines of "pg_upgrade_dump_512600.log" for
the probable cause of the failure.

-

The pg_upgrade_dump_512600.log shows:

command: "/usr/lib/postgresql/9.5/bin/pg_dump" --host "/var/log/postgresql/pg_upgradecluster-9.4-9.5-main.nb6n" --port 5432 
--username "postgres" --schema-only --quote-all-identifiers --binary-upgrade --format=custom  --file="pg_upgrade_dump_512600.custom" 
"gnumed_v20" >> "pg_upgrade_dump_512600.log" 2>&1

command: "/usr/lib/postgresql/9.5/bin/pg_restore" --host "/var/log/postgresql/pg_upgradecluster-9.4-9.5-main.nb6n" --port 
5433 --username "postgres" --exit-on-error --verbose --dbname "gnumed_v20" "pg_upgrade_dump_512600.custom" >> 
"pg_upgrade_dump_512600.log" 2>&1

pg_restore: connecting to database for restore
pg_restore: creating pg_largeobject "pg_largeobject"
pg_restore: creating pg_largeobject_metadata "pg_largeobject_metadata"
pg_restore: creating SCHEMA "au"
pg_restore: creating SCHEMA "audit"
pg_restore: creating SCHEMA "bill"
pg_restore: creating COMMENT "SCHEMA "bill""
pg_restore: creating SCHEMA "blobs"
pg_restore: creating SCHEMA "cfg"
pg_restore: creating COMMENT "SCHEMA "cfg""
pg_restore: creating SCHEMA "clin"
pg_restore: creating SCHEMA "de_de"
pg_restore: creating SCHEMA "dem"
pg_restore: creating SCHEMA "gm"
pg_restore: creating SCHEMA "i18n"
pg_restore: creating SCHEMA "public"
pg_restore: creating COMMENT "SCHEMA "public""
pg_restore: creating SCHEMA "ref"
pg_restore: creating COMMENT "SCHEMA "ref""
pg_restore: creating SCHEMA "staging"
pg_restore: creating COMMENT "SCHEMA "staging""
pg_restore: creating EXTENSION "pg_trgm"
pg_restore: creating COMMENT "EXTENSION "pg_trgm""
pg_restore: creating FUNCTION "pg_catalog.gtrgm_in("cstring")"
pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 893; 1255 511230 FUNCTION gtrgm_in("cstring") postgres
pg_restore: [archiver (db)] could not execute query: ERROR:  pg_type OID value not set when in binary upgrade mode
Command was: CREATE FUNCTION "gtrgm_in"("cstring") RETURNS "gtrgm"
LANGU

[GENERAL] PostgreSQL 9.5, mysql_fdw, and IMPORT FOREIGN SCHEMA question

2016-01-08 Thread Deven Phillips
Hi all,

I installed the newly released PostgreSQL 9.5 this morning and compiled
the latest mysql_fdw extension from EnterpriseDB. I was able to create the
SERVER and USER MAPPING, but I cannot seem to get IMPORT FOREIGN SCHEMA to
do anything. The command executes without error, but none of the table
schemas are imported from the MySQL DB. Does anyone have any advice, links,
documentation which might be of help?
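
For reference, an IMPORT FOREIGN SCHEMA invocation looks roughly like this
(server and remote schema/database names below are placeholders, not the real
ones):

    IMPORT FOREIGN SCHEMA remote_db
        FROM SERVER mysql_server
        INTO public;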

Thanks in advance!

Deven


Re: [GENERAL] Using xmax to detect deleted rows

2016-01-08 Thread Alvaro Herrera
Meel Velliste wrote:
> I would like to use the "xmax" column to detect rows that have been
> recently deleted. Is it possible to get the deleted row versions with
> non-zero xmax to remain visible long enough that I could periodically
> check, say once an hour, and still be able to see rows that were deleted
> since I last checked?

No.  Maybe you want a trigger that saves the deleted row somewhere (a
separate table perhaps) which you can later inspect and delete again?
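
A minimal sketch of that approach (all names made up; to_jsonb() needs 9.5 or
later):

    CREATE TABLE deleted_rows (
        tbl        text        NOT NULL,
        deleted_at timestamptz NOT NULL DEFAULT now(),
        row_data   jsonb       NOT NULL
    );

    CREATE FUNCTION log_deleted_row() RETURNS trigger
    LANGUAGE plpgsql AS $$
    BEGIN
        -- keep a copy of the vanishing row, plus the table it came from
        INSERT INTO deleted_rows (tbl, row_data)
        VALUES (TG_TABLE_NAME, to_jsonb(OLD));
        RETURN OLD;
    END;
    $$;

    CREATE TRIGGER my_table_log_delete
        AFTER DELETE ON my_table
        FOR EACH ROW EXECUTE PROCEDURE log_deleted_row();

Your hourly job can then read (and purge) deleted_rows instead of relying on
xmax.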

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




[GENERAL] Using xmax to detect deleted rows

2016-01-08 Thread Meel Velliste
I would like to use the "xmax" column to detect rows that have been
recently deleted. Is it possible to get the deleted row versions with
non-zero xmax to remain visible long enough that I could periodically
check, say once an hour, and still be able to see rows that were deleted
since I last checked?


[GENERAL] Packages for Ubuntu Wily

2016-01-08 Thread Henning Hoefer
Hi,

I'm trying to install 9.5 on Ubuntu Wily.

I've added a line "deb http://apt.postgresql.org/pub/repos/apt/ wily-pgdg
main
​"​ to my
sources.list
and upgrades to various
postgresql-*
packages show up as available – however, I can't actually install all of
them:

postgresql-common
and
postgresql-client-common
depend on a package
pgdg-keyring
, and that package doesn't seem to be available in the "
wily-pgdg
" distribution in the apt repo.
Also, pgadmin3 seems to be missing in that distribution.

Am I doing something wrong? Or is this a bug in the packaging?


Kind regards
,

Henning
​.​



--
Henning Hoefer
Software Engineer

Device Insight GmbH, Willy-Brandt-Platz 6, D-81829 München
http://www.device-insight.com

Sitz der Gesellschaft: München
Registergericht: Amtsgericht München HRB 149018
Geschäftsführer: Reinhold Stammeier


Re: [GENERAL] BDR and TX obeyance

2016-01-08 Thread Simon Riggs
On 4 January 2016 at 20:09, Riley Berton  wrote:


> The conflict on the "thingy" table has resulted in node2 winning based
> on last_update wins default resolution.  However, both inserts have
> applied.  My expectation is that the entire TX applies or does not
> apply.  This expectation is clearly wrong.
>

I'm sorry to say: Yes, I think so.

If you try to update the same data at the same time in multiple locations,
your application has a significant problem, period. That's just physics.

How that problem manifests itself is really based upon your choice of
technology. Choosing Postgres, Oracle or ProblemoDB won't change that.

If you choose single master, then you get an error because one of the nodes
can't be updated at all. If you have multiple masters, then you get to
choose between an early abort because of serialization errors (which causes
a huge performance overhead), or a later difficulty when conflict
resolution kicks in (which is why BDR supports post-commit conflict
resolution semantics). Or it you use a shared cache system like RAC then
you get significant performance degradation as the data blocks ping around
the cluster.

I'm personally in favour of giving people choice about how they configure
their databases. So you will see me acting to extend the range of options
available to users, allowing them to make informed choices.


> Question is: is there a way (via a custom conflict handler) to have the
> TX obeyed?  I can't see a way to even implement a simple bank account
> database that changes multiple tables in a single transaction without
> having the data end up in an inconsistent state.  Am I missing something
> obvious here?
>

Don't use updates in that way. Banks never do, but financial institutions
and many others have been using replication technology that supports
post-commit conflict resolution for more than a decade in products from
SQLServer, Informix and others. BDR was specified by a customer/user with
expert knowledge of that area and who knew what he wanted and didn't want
to see in the final product. I think those choices were good ones.

Design your applications carefully, understanding the trade-offs between
availability, local access times, serializability and performance.
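
To illustrate the point about not updating in place: a bank-style ledger is
normally modelled append-only, so concurrent activity on different nodes never
has to touch the same row. A made-up sketch:

    CREATE TABLE account_entries (
        account_id bigint        NOT NULL,
        amount     numeric(12,2) NOT NULL,   -- credits positive, debits negative
        entered_at timestamptz   NOT NULL DEFAULT now()
    );

    -- the balance is derived rather than stored and updated in place
    SELECT account_id, sum(amount) AS balance
    FROM account_entries
    GROUP BY account_id;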

-- 
Simon Riggs                   http://www.2ndQuadrant.com/

PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


Re: [GENERAL] BDR and TX obeyance

2016-01-08 Thread Riley Berton
Craig Ringer  writes:

> On 5 January 2016 at 04:09, Riley Berton  wrote:
>
>>
>> The conflict on the "thingy" table has resulted in node2 winning based
>> on last_update wins default resolution.  However, both inserts have
>> applied.  My expectation is that the entire TX applies or does not
>> apply.  This expectation is clearly wrong.
>>
>
> Correct. Conflicts are resolved row-by-row. Their outcomes are determined
> (by default) by transaction commit timestamps, but the conflicts themselves
> are row-by-row.
>
> Because BDR:
>
> * applies changes to other nodes only AFTER commit on the origin node; and
> * does not take row and table locks across nodes
>
> it has no way to sensibly apply all or none of a transaction on downstream
> peers because the client has already committed and moved on to other
> things. If the xact doesn't apply, what do we do? Log output on the failing
> node(s) and throw it away?

Yes.  This is impossible.  I understand that clearly now.

>
> It's probably practical to have xacts abort on the first conflict, though
> some thought would be needed about making sure that doesn't break
> consistency requirements across nodes. It's not clear if doing so is useful
> though.
>
> For that you IMO want synchronous replication where the client doesn't get
> a local COMMIT until all nodes have confirmed they can commit the xact.
> That's something that could be added to BDR in future, but doing it well
> requires support for logical decoding of prepared transactions which is
> currently missing from PostgreSQL's logical decoding support. If it's
> something you think is important/useful you might want to explore what's
> involved in implementing that.

I have considered 2 paths here.

1. What you suggest above.
2. Write sharding across the masters with RLS to prevent writes to the
wrong master.  I have not fully thought through whether this will work
in practice, but as long as the constraints are identical on all the
masters and we never mutate the same row(s) on multiple masters we
should never get conflicts.  This requires application design that ties
all the data to some root node which can be used to shard on and is not
applicable generally.
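
For what it's worth, option 2 could look roughly like this on 9.5 (column and
setting names are invented; note that RLS is not applied to the table owner
or to superusers unless forced):

    ALTER TABLE accounts ENABLE ROW LEVEL SECURITY;

    -- reads stay unrestricted; writes must belong to the shard this node owns
    CREATE POLICY local_shard_writes ON accounts
        USING (true)
        WITH CHECK (home_node = current_setting('myapp.local_node'));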

>
> Question is: is there a way (via a custom conflict handler) to have the
>> TX obeyed?
>
>
> No.
>
> Even if you ERROR in your handler, BDR will just retry the xact. It has no
> concept of "throw this transaction away forever".
>
>
>> I can't see a way to even implement a simple bank account
>> database that changes multiple tables in a single transaction without
>> having the data end up in an inconsistent state.  Am I missing something
>> obvious here?
>>
>
> You're trying to use asynchronous multimaster replication as if it was an
> application-transparent synchronous cluster with a global transaction
> manager and global lock manager.
>
> BDR is not application-transparent. You need to understand replication
> conflicts and think about them. It does not preserve full READ COMMITTED
> semantics across nodes. This comes with big benefits in partition
> tolerance, performance and latency tolerance, but it means you can't point
> an existing app at more than one node and expect it to work properly.
>
> The documentation tries over and over to emphasise this. Can you suggest
> where it can be made clearer or more prominent?

I was not the only one to be confused by this.  I think the reputation
of PostgreSQL is for correct transactional semantics by default.  BDR
requires a different way of thinking about it.  You might prevent future
confusion by giving some example scenarios in the Overview (or Concepts)
where a traditional single master would result in X but BDR across 2
masters would result in Y.

Thanks so much for the detailed response.

riley

>
> -- 
>  Craig Ringer   http://www.2ndQuadrant.com/
>  PostgreSQL Development, 24x7 Support, Training & Services



Re: [GENERAL] Recovery regression tests

2016-01-08 Thread Michael Paquier
On Fri, Jan 8, 2016 at 6:55 PM, kharagesuraj  wrote:
> I have postgres-9.6 dev source code.
> While executing regression tests(make check) for recovery i got following
> message.
> "TAP tests not enabled"
>
> I have pgTap(0.95) installed.

The in-core TAP tests and pgTAP are two independent things; the latter is a
separate project, and the in-core TAP tests do not need it. If you want to
run them, simply install the Perl module IPC::Run and add this configure
switch: --enable-tap-tests.
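
That is, something along these lines (the IPC::Run package name varies by
platform):

    sudo apt-get install libipc-run-perl   # or: cpanm IPC::Run
    ./configure --enable-tap-tests
    make && make check-world               # runs the TAP suites along with the rest
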
-- 
Michael




Re: [GENERAL] pg_upgrade 9.4 -> 9.5 with pg_trgm fails for me

2016-01-08 Thread Karsten Hilbert
On Fri, Jan 08, 2016 at 12:45:29PM +0100, Karsten Hilbert wrote:

>   pg_restore: creating EXTENSION "pg_trgm"
>   pg_restore: creating COMMENT "EXTENSION "pg_trgm""
>   pg_restore: creating FUNCTION "pg_catalog.gtrgm_in("cstring")"
>   pg_restore: [archiver (db)] Error while PROCESSING TOC:
>   pg_restore: [archiver (db)] Error from TOC entry 893; 1255 511230 FUNCTION gtrgm_in("cstring") postgres
>   pg_restore: [archiver (db)] could not execute query: ERROR:  pg_type OID value not set when in binary upgrade mode
>   Command was: CREATE FUNCTION "gtrgm_in"("cstring") RETURNS "gtrgm"
>   LANGUAGE "c" IMMUTABLE STRICT
>   AS '$libdir/pg_trgm', 'gtrgm_in'...

It does sound similar to


http://postgresql.nabble.com/BUG-5942-pg-trgm-sql-has-cyclic-dependency-on-type-gtrgm-creation-td4259677.html

which, however, wouldn't help me in solving the problem.

Karsten
-- 
GPG key ID E4071346 @ eu.pool.sks-keyservers.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346




Re: [GENERAL] pg_upgrade 9.4 -> 9.5 with pg_trgm fails for me

2016-01-08 Thread Karsten Hilbert
On Fri, Jan 08, 2016 at 12:45:29PM +0100, Karsten Hilbert wrote:

>   pg_restore: creating EXTENSION "pg_trgm"
>   pg_restore: creating COMMENT "EXTENSION "pg_trgm""
>   pg_restore: creating FUNCTION "pg_catalog.gtrgm_in("cstring")"
>   pg_restore: [archiver (db)] Error while PROCESSING TOC:
>   pg_restore: [archiver (db)] Error from TOC entry 893; 1255 511230 FUNCTION gtrgm_in("cstring") postgres
>   pg_restore: [archiver (db)] could not execute query: ERROR:  pg_type OID value not set when in binary upgrade mode
>   Command was: CREATE FUNCTION "gtrgm_in"("cstring") RETURNS "gtrgm"
>   LANGUAGE "c" IMMUTABLE STRICT
>   AS '$libdir/pg_trgm', 'gtrgm_in'...
> 
> For one thing - does it seem odd that the function would be
> named "gtrgm_in" rather than "pgtrgm_in" ?

A bit of searching shows that that seems to be normal.

Karsten
-- 
GPG key ID E4071346 @ eu.pool.sks-keyservers.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346




[GENERAL] pg_upgrade 9.4 -> 9.5 with pg_trgm fails for me

2016-01-08 Thread Karsten Hilbert
Hi,

I have attempted a pg_upgrade on Debian using the Debian
wrapper scripts like so:

pg_upgradecluster -v 9.5 9.4 main

(meaning to upgrade a cluster named "main" from 9.4 to 9.5)

which resulted in this:

-
  pg_upgrade run on Fri Jan  8 11:47:32 2016
-

Performing Consistency Checks
-
Checking cluster versions   ok
Checking database user is the install user  ok
Checking database connection settings   ok
Checking for prepared transactions  ok
Checking for reg* system OID user data typesok
Checking for contrib/isn with bigint-passing mismatch   ok
Creating dump of global objects ok
Creating dump of database schemas
ok
Checking for presence of required libraries ok
Checking database user is the install user  ok
Checking for prepared transactions  ok

If pg_upgrade fails after this point, you must re-initdb the
new cluster before continuing.

Performing Upgrade
--
Analyzing all rows in the new cluster   ok
Freezing all rows on the new clusterok
Deleting files from new pg_clog ok
Copying old pg_clog to new server   ok
Setting next transaction ID and epoch for new cluster   ok
Deleting files from new pg_multixact/offsetsok
Copying old pg_multixact/offsets to new server  ok
Deleting files from new pg_multixact/membersok
Copying old pg_multixact/members to new server  ok
Setting next multixact ID and offset for new clusterok
Resetting WAL archives  ok
Setting frozenxid and minmxid counters in new cluster   ok
Restoring global objects in the new cluster ok
Restoring database schemas in the new cluster

*failure*
Consult the last few lines of "pg_upgrade_dump_512600.log" for
the probable cause of the failure.

-

The pg_upgrade_dump_512600.log shows:

command: "/usr/lib/postgresql/9.5/bin/pg_dump" --host 
"/var/log/postgresql/pg_upgradecluster-9.4-9.5-main.nb6n" --port 5432 
--username "postgres" --schema-only --quote-all-identifiers --binary-upgrade 
--format=custom  --file="pg_upgrade_dump_512600.custom" "gnumed_v20" >> 
"pg_upgrade_dump_512600.log" 2>&1

command: "/usr/lib/postgresql/9.5/bin/pg_restore" --host 
"/var/log/postgresql/pg_upgradecluster-9.4-9.5-main.nb6n" --port 5433 
--username "postgres" --exit-on-error --verbose --dbname "gnumed_v20" 
"pg_upgrade_dump_512600.custom" >> "pg_upgrade_dump_512600.log" 2>&1

pg_restore: connecting to database for restore
pg_restore: creating pg_largeobject "pg_largeobject"
pg_restore: creating pg_largeobject_metadata "pg_largeobject_metadata"
pg_restore: creating SCHEMA "au"
pg_restore: creating SCHEMA "audit"
pg_restore: creating SCHEMA "bill"
pg_restore: creating COMMENT "SCHEMA "bill""
pg_restore: creating SCHEMA "blobs"
pg_restore: creating SCHEMA "cfg"
pg_restore: creating COMMENT "SCHEMA "cfg""
pg_restore: creating SCHEMA "clin"
pg_restore: creating SCHEMA "de_de"
pg_restore: creating SCHEMA "dem"
pg_restore: creating SCHEMA "gm"
pg_restore: creating SCHEMA "i18n"
pg_restore: creating SCHEMA "public"
pg_restore: creating COMMENT "SCHEMA "public""
pg_restore: creating SCHEMA "ref"
pg_restore: creating COMMENT "SCHEMA "ref""
pg_restore: creating SCHEMA "staging"
pg_restore: creating COMMENT "SCHEMA "staging""
pg_restore: creating EXTENSION "pg_trgm"
pg_restore: creating COMMENT "EXTENSION "pg_trgm""
pg_restore: creating FUNCTION "pg_catalog.gtrgm_in("cstring")"
pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 893; 1255 511230 FUNCTION gtrgm_in("cstring") postgres
pg_restore: [archiver (db)] could not execute query: ERROR:  pg_type OID value not set when in binary upgrade mode
Command was: CREATE FUNCTION "gtrgm_in"("cstring") RETURNS "gtrgm"
LANGUAGE "c" IMMUTABLE STRICT
AS '$libd

Re: [GENERAL] Function error

2016-01-08 Thread Sachin Srivastava
Thanks Pavel for your help !!!

On Fri, Jan 8, 2016 at 3:28 PM, Pavel Stehule 
wrote:

>
>
> 2016-01-08 10:52 GMT+01:00 Sachin Srivastava :
>
>> Because I have migrated the database from Oracle to Postgres through
>> ORA2PG.
>>
>> So how I will change it, please suggest.
>>
>
>
> http://stackoverflow.com/questions/2269156/converting-from-oracle-join-to-postgres-join
>
>
>> On Fri, Jan 8, 2016 at 2:54 PM, Pavel Stehule 
>> wrote:
>>
>>>
>>>
>>> 2016-01-08 10:08 GMT+01:00 Sachin Srivastava :
>>>
 Hi,

 I am also getting "syntax error for below function (just some lines of
 function where I am getting syntax error), please suggest why?

>>>
>>> Using Oracle's outer join syntax, not ANSI SQL syntax
>>>
>>> Regards
>>>
>>> Pavel
>>>
>>>
 

   -- Image path
 AND mk1.subscriber_id(+)=pcat_catalog_item.subscriber_id
 AND cs1.subscriber_id(+)=pcat_catalog_item.subscriber_id
 AND cs1.company_id(+)=pcat_catalog_item.company_id
 AND
 pcat_catalog_dup_detail.dest_subscriber_id(+)=pcat_item_categorization.subscriber_id
 AND
 pcat_catalog_dup_detail.dest_supplier_company_id(+)=pcat_item_categorization.item_company_id
 AND
 pcat_catalog_dup_detail.dest_catalog_code(+)=pcat_item_categorization.catalog_code
 AND mk2.subscriber_id(+)=pcat_catalog_dup_detail.subscriber_id
 AND cs2.subscriber_id(+)=pcat_catalog_dup_detail.subscriber_id
 AND cs2.company_id(+)=pcat_catalog_dup_detail.company_id;

 --


 ERROR:  syntax error at or near ")"
 LINE 216: AND
 mk1.subscriber_id(+)=pcat_catalog_item.subscribe...
  ^
 ** Error **
 ERROR: syntax error at or near ")"
 SQL state: 42601
 Character: 10018

 Regards
 SS


>>>
>>
>


Re: [GENERAL] RAM of Postgres Server

2016-01-08 Thread John R Pierce

On 1/7/2016 10:32 PM, Sachin Srivastava wrote:
We are looking at more like 500-600 connections simultaneously in 1 
day and I want to say we get 1 to 12000 connections a day per db.


these applications, are they hammering queries, or mostly idle, and just 
issuing intermittent queries?


500 queries at the same time will cause serious resource thrashing with
32 cores.  If they are mostly idle, you should look at the pgbouncer-style
connection pooling model, as Ben suggests.
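
For example, a minimal transaction-pooling pgbouncer.ini sketch (all values
here are illustrative, not tuned recommendations):

    [databases]
    mydb = host=127.0.0.1 port=5432 dbname=mydb

    [pgbouncer]
    listen_addr = 127.0.0.1
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    pool_mode = transaction
    max_client_conn = 600      ; what the applications see
    default_pool_size = 32     ; actual server connections, roughly one per core

The applications connect to port 6432 and share the small pool of real
backends.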


--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] Function error

2016-01-08 Thread Pavel Stehule
2016-01-08 10:52 GMT+01:00 Sachin Srivastava :

> Because I have migrated the database from Oracle to Postgres through
> ORA2PG.
>
> So how I will change it, please suggest.
>


http://stackoverflow.com/questions/2269156/converting-from-oracle-join-to-postgres-join
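
As a concrete example using the predicates from your query (assuming mk1, cs1
and so on are plain table aliases), an Oracle predicate such as

    mk1.subscriber_id(+) = pcat_catalog_item.subscriber_id

moves into an explicit LEFT JOIN, because the (+) marks the optional side:

    SELECT ...
    FROM pcat_catalog_item
    LEFT JOIN mk1
           ON mk1.subscriber_id = pcat_catalog_item.subscriber_id
    LEFT JOIN cs1
           ON cs1.subscriber_id = pcat_catalog_item.subscriber_id
          AND cs1.company_id    = pcat_catalog_item.company_id
    -- ... and so on for the remaining (+) conditions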


> On Fri, Jan 8, 2016 at 2:54 PM, Pavel Stehule 
> wrote:
>
>>
>>
>> 2016-01-08 10:08 GMT+01:00 Sachin Srivastava :
>>
>>> Hi,
>>>
>>> I am also getting "syntax error for below function (just some lines of
>>> function where I am getting syntax error), please suggest why?
>>>
>>
>> Using Oracle's outer join syntax, not ANSI SQL syntax
>>
>> Regards
>>
>> Pavel
>>
>>
>>> 
>>>
>>>   -- Image path
>>> AND mk1.subscriber_id(+)=pcat_catalog_item.subscriber_id
>>> AND cs1.subscriber_id(+)=pcat_catalog_item.subscriber_id
>>> AND cs1.company_id(+)=pcat_catalog_item.company_id
>>> AND
>>> pcat_catalog_dup_detail.dest_subscriber_id(+)=pcat_item_categorization.subscriber_id
>>> AND
>>> pcat_catalog_dup_detail.dest_supplier_company_id(+)=pcat_item_categorization.item_company_id
>>> AND
>>> pcat_catalog_dup_detail.dest_catalog_code(+)=pcat_item_categorization.catalog_code
>>> AND mk2.subscriber_id(+)=pcat_catalog_dup_detail.subscriber_id
>>> AND cs2.subscriber_id(+)=pcat_catalog_dup_detail.subscriber_id
>>> AND cs2.company_id(+)=pcat_catalog_dup_detail.company_id;
>>>
>>> --
>>>
>>>
>>> ERROR:  syntax error at or near ")"
>>> LINE 216: AND mk1.subscriber_id(+)=pcat_catalog_item.subscribe...
>>>  ^
>>> ** Error **
>>> ERROR: syntax error at or near ")"
>>> SQL state: 42601
>>> Character: 10018
>>>
>>> Regards
>>> SS
>>>
>>>
>>
>


[GENERAL] Recovery regression tests

2016-01-08 Thread kharagesuraj
I have the postgres-9.6 dev source code.
While executing the regression tests (make check) for recovery I got the
following message:
"TAP tests not enabled"

I have pgTap (0.95) installed.

Please help me on this.

regards




--
View this message in context: 
http://postgresql.nabble.com/Recovery-regression-tests-tp5881052.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




Re: [GENERAL] Function error

2016-01-08 Thread Sachin Srivastava
Because I have migrated the database from Oracle to Postgres through ORA2PG.

So how I will change it, please suggest.

On Fri, Jan 8, 2016 at 2:54 PM, Pavel Stehule 
wrote:

>
>
> 2016-01-08 10:08 GMT+01:00 Sachin Srivastava :
>
>> Hi,
>>
>> I am also getting "syntax error for below function (just some lines of
>> function where I am getting syntax error), please suggest why?
>>
>
> Using Oracle's outer join syntax, not ANSI SQL syntax
>
> Regards
>
> Pavel
>
>
>> 
>>
>>   -- Image path
>> AND mk1.subscriber_id(+)=pcat_catalog_item.subscriber_id
>> AND cs1.subscriber_id(+)=pcat_catalog_item.subscriber_id
>> AND cs1.company_id(+)=pcat_catalog_item.company_id
>> AND
>> pcat_catalog_dup_detail.dest_subscriber_id(+)=pcat_item_categorization.subscriber_id
>> AND
>> pcat_catalog_dup_detail.dest_supplier_company_id(+)=pcat_item_categorization.item_company_id
>> AND
>> pcat_catalog_dup_detail.dest_catalog_code(+)=pcat_item_categorization.catalog_code
>> AND mk2.subscriber_id(+)=pcat_catalog_dup_detail.subscriber_id
>> AND cs2.subscriber_id(+)=pcat_catalog_dup_detail.subscriber_id
>> AND cs2.company_id(+)=pcat_catalog_dup_detail.company_id;
>>
>> --
>>
>>
>> ERROR:  syntax error at or near ")"
>> LINE 216: AND mk1.subscriber_id(+)=pcat_catalog_item.subscribe...
>>  ^
>> ** Error **
>> ERROR: syntax error at or near ")"
>> SQL state: 42601
>> Character: 10018
>>
>> Regards
>> SS
>>
>>
>


Re: [GENERAL] Function error

2016-01-08 Thread Pavel Stehule
2016-01-08 10:08 GMT+01:00 Sachin Srivastava :

> Hi,
>
> I am also getting "syntax error for below function (just some lines of
> function where I am getting syntax error), please suggest why?
>

Using Oracle's outer join syntax, not ANSI SQL syntax

Regards

Pavel


> 
>
>   -- Image path
> AND mk1.subscriber_id(+)=pcat_catalog_item.subscriber_id
> AND cs1.subscriber_id(+)=pcat_catalog_item.subscriber_id
> AND cs1.company_id(+)=pcat_catalog_item.company_id
> AND
> pcat_catalog_dup_detail.dest_subscriber_id(+)=pcat_item_categorization.subscriber_id
> AND
> pcat_catalog_dup_detail.dest_supplier_company_id(+)=pcat_item_categorization.item_company_id
> AND
> pcat_catalog_dup_detail.dest_catalog_code(+)=pcat_item_categorization.catalog_code
> AND mk2.subscriber_id(+)=pcat_catalog_dup_detail.subscriber_id
> AND cs2.subscriber_id(+)=pcat_catalog_dup_detail.subscriber_id
> AND cs2.company_id(+)=pcat_catalog_dup_detail.company_id;
>
> --
>
>
> ERROR:  syntax error at or near ")"
> LINE 216: AND mk1.subscriber_id(+)=pcat_catalog_item.subscribe...
>  ^
> ** Error **
> ERROR: syntax error at or near ")"
> SQL state: 42601
> Character: 10018
>
> Regards
> SS
>
>


Re: [GENERAL] Function error

2016-01-08 Thread Sachin Srivastava
Hi,

I am also getting "syntax error for below function (just some lines of
function where I am getting syntax error), please suggest why?


  -- Image path
AND mk1.subscriber_id(+)=pcat_catalog_item.subscriber_id
AND cs1.subscriber_id(+)=pcat_catalog_item.subscriber_id
AND cs1.company_id(+)=pcat_catalog_item.company_id
AND
pcat_catalog_dup_detail.dest_subscriber_id(+)=pcat_item_categorization.subscriber_id
AND
pcat_catalog_dup_detail.dest_supplier_company_id(+)=pcat_item_categorization.item_company_id
AND
pcat_catalog_dup_detail.dest_catalog_code(+)=pcat_item_categorization.catalog_code
AND mk2.subscriber_id(+)=pcat_catalog_dup_detail.subscriber_id
AND cs2.subscriber_id(+)=pcat_catalog_dup_detail.subscriber_id
AND cs2.company_id(+)=pcat_catalog_dup_detail.company_id;

--


ERROR:  syntax error at or near ")"
LINE 216: AND mk1.subscriber_id(+)=pcat_catalog_item.subscribe...
 ^
** Error **
ERROR: syntax error at or near ")"
SQL state: 42601
Character: 10018

Regards
SS


Re: [GENERAL] Function error

2016-01-08 Thread Pavel Stehule
Hi

2016-01-08 8:59 GMT+01:00 Sachin Srivastava :

> Hi,
>
> Also there is any command to see the invalid and valid function in
> postgres database.
>

No, Postgres is not Oracle. All functions in the database are "valid", but
that means something different than in Oracle: only that all embedded SQL is
syntactically valid. If you need semantic validation, you should use
plpgsql_check: https://github.com/okbob/plpgsql_check/ .

Regards

Pavel


> Regards,
> SS
>
> On Fri, Jan 8, 2016 at 1:18 PM, Sachin Srivastava  > wrote:
>
>> Thanks Charles !!!
>>
>> On Fri, Jan 8, 2016 at 1:15 PM, Sachin Srivastava <
>> ssr.teleat...@gmail.com> wrote:
>>
>>> Thanks Pavel !!!
>>>
>>> On Fri, Jan 8, 2016 at 1:05 PM, Pavel Stehule 
>>> wrote:
>>>
 Hi



 2016-01-08 8:24 GMT+01:00 Sachin Srivastava :

> Dear Concern,
>
>
>
> I am creating below function *“create_catexp_ss_master()” *and
> getting error as below, I have already created dependent function firstly
> successfully *(“create_catexp_ss_1” and “create_catexp_ss_2”) *but
> still getting error, please suggest why?
>
>
>
> ERROR:  syntax error at or near "create_catexp_ss_1"
>
> LINE 38:  create_catexp_ss_1;
>
>   ^
>
> ** Error **
>
>
>
> ERROR: syntax error at or near "create_catexp_ss_1"
>
> SQL state: 42601
>
> Character: 1104
>
>
>
>
>
>
>
> -- Function: create_catexp_ss_master()
>
>
>
> -- DROP FUNCTION create_catexp_ss_master();
>
>
>
> CREATE OR REPLACE FUNCTION create_catexp_ss_master()
>
>   RETURNS void AS
>
> $BODY$
>
> DECLARE
>
>
>
> -- Build snapshot tables for catalog itme exposure.
>
>
>
> -- Versions:
>
> -- 2013.1 hxu T11464 - Two Table Catalog Snap Shot Process. 02/27/13
>
> -- 2013.02 hxu T11899 - Remove the Single Supplier check box from
> Screen and from the Code. 05/23/13
>
> --
>
>
>
> v_count_before bigint;
>
> v_count_after bigint;
>
> v_start_time timestamp;
>
> v_err_msg varchar(1000);
>
> v_set_name varchar(10);
>
>
>
>
> BEGIN
>
>
> v_set_name:=ssj4_snapshot_pkg.get_inactive_ss_set_name;
>
> SELECT LOCALTIMESTAMP INTO v_start_time ;
>
>
>
> if v_set_name='A' then
>
> SELECT count(1) INTO v_count_before
> FROM pcat_exp_supp_buyer_ss_a;
>
> else
>
> SELECT count(1) INTO v_count_before
> FROM pcat_exp_supp_buyer_ss_b;
>
> end if;
>
>
>
> -- Remove old data.
>
> EXECUTE 'truncate table
> pcat_exp_supp_buyer_ss_'||v_set_name;
>
> EXECUTE 'truncate table
> pcat_exp_supp_cat_buyer_ss_'||v_set_name;
>

 Attention - this is a potentially serious security bug: the table name is
 built by concatenating a value into dynamic SQL without quoting. Use
 quote_ident():

 EXECUTE 'TRUNCATE TABLE ' || quote_ident('pcat_exp_supp_cat_buyer_ss_' ||
 v_set_name);
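
An equivalent form that some find more readable uses format() with the %I
specifier (a sketch using the two table-name prefixes from the function above):

    EXECUTE format('TRUNCATE TABLE %I',
                   'pcat_exp_supp_buyer_ss_' || v_set_name);
    EXECUTE format('TRUNCATE TABLE %I',
                   'pcat_exp_supp_cat_buyer_ss_' || v_set_name);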


>
>
> -- Exposure for single supplier without category
> filtering
>
> create_catexp_ss_1;
>

 you have to call this function via the PERFORM statement:

PERFORM create_catexp_ss_1();
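
The same applies to the create_catexp_ss_2 invocation further down; as a
sketch, both calls would become:

    PERFORM create_catexp_ss_1();   -- exposure without category filtering
    PERFORM create_catexp_ss_2();   -- exposure with category filtering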


>
>
> -- Exposure for single supplier with category
> filtering
>
> create_catexp_ss_2;
>
>
>
> if v_set_name='A' then
>
> SELECT count(1) INTO v_count_after
> FROM pcat_exp_supp_buyer_ss_a;
>
> else
>
> SELECT count(1) INTO v_count_after
> FROM pcat_exp_supp_buyer_ss_b;
>
> end if;
>
>
>
> -- Log
>
> create_ss_log('Catalog Exposure', v_start_time,
> 'pcat_exp_supp_buyer_ss_'||v_set_name,
>
> v_count_before, v_count_after, null);
>
>
>
> exception-- log error
>
> when others then
>
> v_err_msg := SQLERRM;
>
> create_ss_log('Catalog Exposure - Error',
> v_start_time, 'pcat_exp_supp_buyer_ss_'||v_set_name,
>
> v_count_before,
> v_count_after, v_err_msg);
>
>
>
> END;
>
> $BODY$
>
>   LANGUAGE plpgsql VOLATILE SECURITY DEFINER
>
>   COST 100;
>
> ALTER FUNCTION create_catexp_ss_master()
>
>   OWNER TO postgres;
>
>
>
>
>
>
> Regards,
>>

Re: [GENERAL] BDR and TX obeyance

2016-01-08 Thread Craig Ringer
On 5 January 2016 at 04:09, Riley Berton  wrote:

>
> The conflict on the "thingy" table has resulted in node2 winning based
> on last_update wins default resolution.  However, both inserts have
> applied.  My expectation is that the entire TX applies or does not
> apply.  This expectation is clearly wrong.
>

Correct. Conflicts are resolved row-by-row. Their outcomes are determined
(by default) by transaction commit timestamps, but the conflicts themselves
are row-by-row.

Because BDR:

* applies changes to other nodes only AFTER commit on the origin node; and
* does not take row and table locks across nodes

it has no way to sensibly apply all or none of a transaction on downstream
peers because the client has already committed and moved on to other
things. If the xact doesn't apply, what do we do? Log output on the failing
node(s) and throw it away?

It's probably practical to have xacts abort on the first conflict, though
some thought would be needed about making sure that doesn't break
consistency requirements across nodes. It's not clear if doing so is useful
though.

For that you IMO want synchronous replication where the client doesn't get
a local COMMIT until all nodes have confirmed they can commit the xact.
That's something that could be added to BDR in the future, but doing it well
requires support for logical decoding of prepared transactions, which is
currently missing from PostgreSQL's logical decoding support. If it's
something you think is important/useful you might want to explore what's
involved in implementing that.

Question is: is there a way (via a custom conflict handler) to have the
> TX obeyed?


No.

Even if you ERROR in your handler, BDR will just retry the xact. It has no
concept of "throw this transaction away forever".


> I can't see a way to even implement a simple bank account
> database that changes multiple tables in a single transaction without
> having the data end up in an inconsistent state.  Am I missing something
> obvious here?
>

You're trying to use asynchronous multimaster replication as if it was an
application-transparent synchronous cluster with a global transaction
manager and global lock manager.

BDR is not application-transparent. You need to understand replication
conflicts and think about them. It does not preserve full READ COMMITTED
semantics across nodes. This comes with big benefits in partition
tolerance, performance and latency tolerance, but it means you can't point
an existing app at more than one node and expect it to work properly.

The documentation tries over and over to emphasise this. Can you suggest
where it can be made clearer or more prominent?

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


Re: [GENERAL] Function error

2016-01-08 Thread Sachin Srivastava
Hi,

Also, is there any command to see the invalid and valid functions in a Postgres
database?

Regards,
SS

On Fri, Jan 8, 2016 at 1:18 PM, Sachin Srivastava 
wrote:

> Thanks Charles !!!
>
> On Fri, Jan 8, 2016 at 1:15 PM, Sachin Srivastava  > wrote:
>
>> Thanks Pavel !!!
>>
>> On Fri, Jan 8, 2016 at 1:05 PM, Pavel Stehule 
>> wrote:
>>
>>> Hi
>>>
>>>
>>>
>>> 2016-01-08 8:24 GMT+01:00 Sachin Srivastava :
>>>
 Dear Concern,



 I am creating the below function *“create_catexp_ss_master()”* and getting
 the error below. I have already successfully created the dependent
 functions *(“create_catexp_ss_1” and “create_catexp_ss_2”)*, but I am
 still getting the error; please suggest why?



 ERROR:  syntax error at or near "create_catexp_ss_1"

 LINE 38:  create_catexp_ss_1;

   ^

 ** Error **



 ERROR: syntax error at or near "create_catexp_ss_1"

 SQL state: 42601

 Character: 1104







 -- Function: create_catexp_ss_master()



 -- DROP FUNCTION create_catexp_ss_master();



 CREATE OR REPLACE FUNCTION create_catexp_ss_master()

   RETURNS void AS

 $BODY$

 DECLARE



 -- Build snapshot tables for catalog item exposure.



 -- Versions:

 -- 2013.1 hxu T11464 - Two Table Catalog Snap Shot Process. 02/27/13

 -- 2013.02 hxu T11899 - Remove the Single Supplier check box from
 Screen and from the Code. 05/23/13

 --



 v_count_before bigint;

 v_count_after bigint;

 v_start_time timestamp;

 v_err_msg varchar(1000);

 v_set_name varchar(10);




 BEGIN

 v_set_name:=ssj4_snapshot_pkg.get_inactive_ss_set_name;

 SELECT LOCALTIMESTAMP INTO v_start_time ;



 if v_set_name='A' then

 SELECT count(1) INTO v_count_before
 FROM pcat_exp_supp_buyer_ss_a;

 else

 SELECT count(1) INTO v_count_before
 FROM pcat_exp_supp_buyer_ss_b;

 end if;



 -- Remove old data.

 EXECUTE 'truncate table
 pcat_exp_supp_buyer_ss_'||v_set_name;

 EXECUTE 'truncate table
 pcat_exp_supp_cat_buyer_ss_'||v_set_name;

>>>
>>> Attention - this is a potentially serious security bug: the table name is
>>> built by concatenating a value into dynamic SQL without quoting. Use
>>> quote_ident():
>>>
>>> EXECUTE 'TRUNCATE TABLE ' || quote_ident('pcat_exp_supp_cat_buyer_ss_' ||
>>> v_set_name);
>>>
>>>


 -- Exposure for single supplier without category
 filtering

 create_catexp_ss_1;

>>>
>>> you have to call this function via the PERFORM statement:
>>>
>>>PERFORM create_catexp_ss_1();
>>>
>>>


 -- Exposure for single supplier with category filtering


 create_catexp_ss_2;



 if v_set_name='A' then

 SELECT count(1) INTO v_count_after
 FROM pcat_exp_supp_buyer_ss_a;

 else

 SELECT count(1) INTO v_count_after
 FROM pcat_exp_supp_buyer_ss_b;

 end if;



 -- Log

 create_ss_log('Catalog Exposure', v_start_time,
 'pcat_exp_supp_buyer_ss_'||v_set_name,

 v_count_before, v_count_after, null);



 exception-- log error

 when others then

 v_err_msg := SQLERRM;

 create_ss_log('Catalog Exposure - Error',
 v_start_time, 'pcat_exp_supp_buyer_ss_'||v_set_name,

 v_count_before,
 v_count_after, v_err_msg);



 END;

 $BODY$

   LANGUAGE plpgsql VOLATILE SECURITY DEFINER

   COST 100;

 ALTER FUNCTION create_catexp_ss_master()

   OWNER TO postgres;






 Regards,
 SS


>>> Regards
>>>
>>> Pavel Stehule
>>>
>>
>>
>