Re: JDBC connectivity issue

2018-03-13 Thread Adrian Klaver

On 03/13/2018 04:46 PM, chris wrote:

I'm sorry that took a few days, but I am running:


Postgresql-9.4

and

jre7.jar


What we are looking for is the JDBC driver you are using.



Thanks in advance.


On 03/08/2018 02:30 PM, chris wrote:
Given that the syntax looks correct for the URL, how would we go about 
debugging why it's not seeing the comma?



On 03/08/2018 02:27 PM, Adrian Klaver wrote:

On 03/08/2018 01:12 PM, chris wrote:

Hi,

I have a JDBC client attempting to connect to 2 Postgres nodes in a hot 
standby configuration. Running Postgres 9.4 on CentOS 6.


What we are trying to accomplish is, in the event of a failover, to 
first try to connect to the master. If that fails, to then reach out 
to the standby.


I looked online and found the suggested way to do this, but it's not 
working.


This is the JDBC line we have:

jdbc.url=jdbc:postgresql://10.16.10.12:5432,10.16.10.13:5432/app_db

Here is the error we are getting:

2018-03-07 13:54:36,994 ERROR com.zaxxer.hikari.util.PropertyElf - 
failed to set properly port number on target class 
org.postgresql.ds.PGSimpleDataSource 
java.lang.NumberFormatException: For input string: 
"5432,10.16.10.13:5432"


Let's try that again:

To me it looks like whatever code you are using is trying to use 
"5432,10.16.10.13:5432" as the port number for 10.16.10.12. In other 
words, it is not seeing the ',' as a separator for the two IPs.


Time to clean the glasses :)
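
A possible fix, assuming the pool is HikariCP as the log suggests: hand 
the pool a plain JDBC URL instead of populating PGSimpleDataSource, whose 
serverName/portNumber bean properties cannot express the multi-host 
syntax. A minimal sketch (property names are as HikariConfig reads them 
from a properties file; adapt to however your application passes 
jdbc.url through; multi-host URLs also need pgjdbc 9.4.1210 or later):

# let the driver itself parse the host list
jdbcUrl=jdbc:postgresql://10.16.10.12:5432,10.16.10.13:5432/app_db?targetServerType=master
# and drop any dataSourceClassName=org.postgresql.ds.PGSimpleDataSource setting

With targetServerType=master the driver tries each listed host and only 
accepts the one that is writable, which matches the failover behavior 
described above.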



It looks like the first IP address has disappeared or is not set 
right.

Thanks in advance for the help.


--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Reindex doesn't eliminate bloat

2018-03-13 Thread Ron Johnson

On 03/13/2018 06:10 PM, Joe Conway wrote:

On 03/12/2018 09:16 PM, Ron Johnson wrote:

On 03/12/2018 10:48 PM, Nikolay Samokhvalov wrote:

Those queries from wiki for table and index bloat estimation are for
estimation only. In many cases they show very wrong results. Better
(yet not ideal) approach is using pgstattuple extension (though I'm
not sure it existed back in 2009).

Can you provide table and index definition and, if you can, some
sample data?

Sadly, no sample data.  (It's all PCI controlled.)

Index idx_item_mapping_rp7_y2016m03itemmapping_custom_userfield_801 has
40% bloat.

Assuming the data in the indexed column(s) is not highly correlated with
the physical table order (i.e. it is roughly random), about 50% density
is theoretically expected.


What does physical table order have to do with b-tree organization, 
especially in a freshly reindexed table using the default 90% fill factor?



In fact, in some empirical testing, I have
seen a long term steady state value of closer to 44% if I remember
correctly (but perhaps that was related to the way I was testing). For a
discussion on why this is the case, see for example:

https://www.postgresql.org/message-id/flat/87oa4xmss7.fsf%40news-spur.riddles.org.uk#87oa4xmss7@news-spur.riddles.org.uk

So what is being reported as 40% bloat is probably not really
bloat.



--
Angular momentum makes the world go 'round.



Re: JDBC connectivity issue

2018-03-13 Thread chris

I'm sorry that took a few days, but I am running:


Postgresql-9.4

and

jre7.jar

Thanks in advance.


On 03/08/2018 02:30 PM, chris wrote:
Given that the syntax looks correct for the URL, how would we go about 
debugging why it's not seeing the comma?



On 03/08/2018 02:27 PM, Adrian Klaver wrote:

On 03/08/2018 01:12 PM, chris wrote:

Hi,

I have a JDBC client attempting to connect to 2 Postgres nodes in a hot 
standby configuration. Running Postgres 9.4 on CentOS 6.


What we are trying to accomplish is, in the event of a failover, to 
first try to connect to the master. If that fails, to then reach out 
to the standby.


I looked online and found the suggested way to do this, but it's not 
working.


This is the JDBC line we have:

jdbc.url=jdbc:postgresql://10.16.10.12:5432,10.16.10.13:5432/app_db

Here is the error we are getting:

2018-03-07 13:54:36,994 ERROR com.zaxxer.hikari.util.PropertyElf - 
failed to set properly port number on target class 
org.postgresql.ds.PGSimpleDataSource 
java.lang.NumberFormatException: For input string: 
"5432,10.16.10.13:5432"


Let's try that again:

To me it looks like whatever code you are using is trying to use 
"5432,10.16.10.13:5432" as the port number for 10.16.10.12. In other 
words, it is not seeing the ',' as a separator for the two IPs.


Time to clean the glasses :)



It looks like the first IP address has disappeared or is not set 
right.

Thanks in advance for the help.



Re: Reindex doesn't eliminate bloat

2018-03-13 Thread Joe Conway
On 03/12/2018 09:16 PM, Ron Johnson wrote:
> On 03/12/2018 10:48 PM, Nikolay Samokhvalov wrote:
>> Those queries from wiki for table and index bloat estimation are for
>> estimation only. In many cases they show very wrong results. Better
>> (yet not ideal) approach is using pgstattuple extension (though I'm
>> not sure it existed back in 2009).
>>
>> Can you provide table and index definition and, if you can, some
>> sample data?
> 
> Sadly, no sample data.  (It's all PCI controlled.)
> 
> Index idx_item_mapping_rp7_y2016m03itemmapping_custom_userfield_801 has
> 40% bloat.

Assuming the data in the indexed column(s) is not highly correlated with
the physical table order (i.e. it is roughly random), about 50% density
is theoretically expected. In fact, in some empirical testing, I have
seen a long term steady state value of closer to 44% if I remember
correctly (but perhaps that was related to the way I was testing). For a
discussion on why this is the case, see for example:

https://www.postgresql.org/message-id/flat/87oa4xmss7.fsf%40news-spur.riddles.org.uk#87oa4xmss7@news-spur.riddles.org.uk

So what is being reported as 40% bloat is probably not really
bloat.
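
For a more direct measurement than the wiki estimation queries, 
pgstattuple can report the actual leaf density. A minimal sketch, 
assuming the extension is available and using the index name from above:

-- install once per database
CREATE EXTENSION IF NOT EXISTS pgstattuple;

-- avg_leaf_density near 50% is expected for randomly ordered keys;
-- only values well below that suggest real bloat
SELECT avg_leaf_density, leaf_fragmentation
  FROM pgstatindex('idx_item_mapping_rp7_y2016m03itemmapping_custom_userfield_801');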

HTH,

Joe

-- 
Crunchy Data - http://crunchydata.com
PostgreSQL Support for Secure Enterprises
Consulting, Training, & Open Source Development



signature.asc
Description: OpenPGP digital signature


Re: Postgresql upgrade to 9.5.12/10.3 changes pg_dump format for cloning schemas

2018-03-13 Thread Melvin Davidson
On Tue, Mar 13, 2018 at 1:47 PM, Melvin Davidson 
wrote:

>
>
>
>> Thank you Melvin, I forgot to mention I've already found your script
>> before I asked here, but I didn’t think it was robust enough (please don't
>> be offended :-). Particularly, it didn't work well on PostgreSQL 10.
>>
>>
> Aldrin,
>
> I apologize. I just tested and found that the reason it is failing is
> that, once again, the catalogs have been changed. In this case, the
> structure of sequences.
> I am trying to find a workaround. I will get back to you when I do.
>
>
> --
> *Melvin Davidson*
> *Maj. Database & Exploration Specialist*
> *Universe Exploration Command – UXC*
> Employment by invitation only!
>



Aldrin,


I've solved the problem with the sequences.

The attached clone_schema_10.sql has been tested on my system and now
works.

Let me know if you find any bugs. As you also said it is not robust
enough, please also let me know what additional features you think it
needs.


-- 
Melvin Davidson
Maj. Database & Exploration Specialist
Universe Exploration Command – UXC
Employment by invitation only!

-- Function: public.clone_schema(text, text, boolean)

-- DROP FUNCTION public.clone_schema(text, text, boolean);

CREATE OR REPLACE FUNCTION public.clone_schema(
source_schema text,
dest_schema text,
include_recs boolean)
  RETURNS void AS
$BODY$
--  Initial code by Emanuel '3manuek' 
--  Last revision 2017-04-17 by Melvin Davidson
--  Added SELECT REPLACE for schema views
--
--  This function will clone all sequences, tables, indexes, rules, triggers, 
--  data(optional), views & functions from any existing schema to a new schema
-- SAMPLE CALL:
-- SELECT clone_schema('public', 'new_schema', TRUE);

DECLARE
  src_oid  oid;
  tbl_oid  oid;
  seq_oid  oid;
  func_oid oid;
  owner_id oid;
  con_oid  oid;
  v_path   text;
  v_func   text;
  v_args   text;
  v_conname    text;
  v_rule   text;
  v_trig   text;
  object   text;
  buffer   text;
  srctbl   text;
  default_ text;
  v_column text;
  qry  text;
  dest_qry text;
  v_def    text;
  v_stat   integer;
  seqval   bigint;
  seq_name name;
  sq_type  oid;
  sq_last_value    bigint;
  sq_max_value bigint;
  sq_start_value   bigint;
  sq_increment_by  bigint;
  sq_min_value bigint;
  sq_cache_value   bigint;
  sq_log_cnt   bigint;
--  sq_is_called boolean;
  sq_is_cycled boolean;
  sq_typname   name;
  seq_cycled   text;
  seq_owner    name;
--  sq_cycled    char(10);
  
BEGIN

-- Check that source_schema exists
  SELECT oid INTO src_oid
FROM pg_namespace
   WHERE nspname = quote_ident(source_schema);
   
  IF NOT FOUND
THEN 
  RAISE NOTICE 'source schema % does not exist!', source_schema;
  RETURN ;
  END IF;

  -- Check that dest_schema does not yet exist
  PERFORM nspname 
FROM pg_namespace
   WHERE nspname = quote_ident(dest_schema);
   
  IF FOUND
THEN 
  RAISE NOTICE 'dest schema % already exists!', dest_schema;
  RETURN ;
  END IF;

  EXECUTE 'CREATE SCHEMA ' || quote_ident(dest_schema) ;
   
  -- Add schema comment
  SELECT description INTO v_def
FROM pg_description
   WHERE objoid = src_oid
 AND objsubid = 0;

  IF FOUND 
THEN 
  EXECUTE 'COMMENT ON SCHEMA ' || quote_ident(dest_schema) || ' IS ' || 
quote_literal(v_def);
  END IF;
  
  -- Create sequences
  -- TODO: Find a way to make the sequence's owner the correct table.
  FOR object IN
 SELECT sequence_name::text 
  FROM information_schema.sequences
 WHERE sequence_schema = quote_ident(source_schema)

  LOOP

SELECT oid, relowner into seq_oid, owner_id
  FROM pg_class 
 WHERE relname = quote_ident(object)
   AND relkind = 'S';

SELECT seqtypid,
   seqstart,
   seqincrement,
   seqmax,
   seqmin,
   seqcache,
   seqcycle
  INTO sq_type, 
   sq_start_value,
   sq_increment_by,
   sq_max_value,
   sq_min_value,
   sq_cache_value,
   sq_is_cycled
  FROM pg_sequence
 WHERE seqrelid = seq_oid;
 
SELECT typname INTO sq_typname
  FROM pg_type 
 WHERE oid = sq_type;

SELECT rolname INTO seq_owner
  FROM pg_authid WHERE oid = owner_id;

-- EXECUTE 'CREATE SEQUENCE ' || quote_ident(dest_schema) || '.' || quote_ident(object);
-- srctbl := quote_ident(source_schema) || '.' || quote_ident(object);

IF sq_is_cycled 
  THEN 
seq_cycled = ' CYCLE'; 
  ELSE 
seq_cycled = ' NO CYCLE';
END IF;

EXECUTE 'CREATE SEQUENCE ' || quote_ident(dest_schema) || '.' || 

Re: query_to_xml() returns invalid XML when query returns no rows

2018-03-13 Thread Peter Eisentraut
On 3/13/18 15:21, Thomas Kellerer wrote:
> I still think it's incorrect to return an empty (=invalid) XML instead of a 
> NULL value though.

This behavior is specified in the SQL standard.  While an empty string
is not a valid XML "document", it is valid as XML "content".
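
A quick way to see the distinction, as a minimal sketch:

SELECT ''::xml IS DOCUMENT;    -- false: an empty string is not a document
SELECT XMLPARSE(CONTENT '');   -- succeeds: empty string is valid content
SELECT XMLPARSE(DOCUMENT '');  -- fails: a document requires a root element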

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: Is there a way to create a functional index that tables tableoid column as an arg?

2018-03-13 Thread Tom Lane
Ryan Murphy  writes:
> Hi David!  Thanks for the reply.
>> tableoid might be an exception to
>> this, but it does not really seem like a useful column to index,
>> giving it would be indexing the same value for each record in the
>> table.

> Unless you're using inheritance - then tableoid may vary.  That's the case
> I'm interested in.

Uh, no, not within a single table ... and indexes only cover one table.

regards, tom lane



Re: Is there a way to create a functional index that tables tableoid column as an arg?

2018-03-13 Thread Ryan Murphy
Hi David!  Thanks for the reply.

> tableoid might be an exception to
> this, but it does not really seem like a useful column to index,
> giving it would be indexing the same value for each record in the
> table.

Unless you're using inheritance - then tableoid may vary.  That's the case
I'm interested in.


> -- get all animals that are persons
> select ... from animal where tableoid in (select
> get_inherited_tables('person'::regclass));
>
> -- get all animals that are not persons
> select ... from animal where tableoid not in (select
> get_inherited_tables('person'::regclass));
>
>
That's a great idea.  I'll try it!
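
For reference, a minimal sketch of what such a helper could look like 
(hypothetical; David's actual definition is in an earlier message not 
quoted here):

-- hypothetical helper: a table's oid plus the oids of everything
-- that inherits from it, recursively
CREATE FUNCTION get_inherited_tables(parent regclass)
RETURNS SETOF oid LANGUAGE sql STABLE AS $$
  WITH RECURSIVE kids AS (
    SELECT parent::oid AS relid
    UNION ALL
    SELECT i.inhrelid
      FROM pg_inherits i
      JOIN kids k ON i.inhparent = k.relid
  )
  SELECT relid FROM kids
$$;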


> Just be careful around search_paths and your use of regclass. In this
> case, if "animal" was not in the first schema in search_path, but
> someone created another table called "person" that was in the first
> schema listed in search_path, then the query would not do what you
> want. You might want to consider prefixing the input parameter into
> get_inherited_tables with the schema name too.
>

Good point.

Thanks again!
Ryan


Re: query_to_xml() returns invalid XML when query returns no rows

2018-03-13 Thread Thomas Kellerer

Peter Eisentraut wrote on 12.03.2018 at 23:31:

I am not sure if this qualifies as a bug:

query_to_xml() returns an empty XML document when the query returns no rows, 
e.g.:

select query_to_xml('select 42 where false', false, true, '');

The problem with this is that if the resulting XML is then fed
into e.g. the xpath() function, that function fails because the
"empty" document is invalid XML:


That's because you have the tableforest argument set to true.  If you
want a proper XML document, then you should write

 select query_to_xml('select 42 where false', false, false, '');



Hmm, that indeed works.
I didn't want the extra level introduced by the <table> tag, that's why I used 
tableforest = true.
But that's easier to deal with than the check for an invalid document.

I still think it's incorrect to return an empty (=invalid) XML instead of a 
NULL value though.
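
In the meantime, a workaround sketch for getting NULL instead: go through 
text, since the xml type has no equality operator for NULLIF to use.

select NULLIF(query_to_xml('select 42 where false', false, true, '')::text, '')::xml;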

Regards
Thomas
 






Re: Postgresql upgrade to 9.5.12/10.3 changes pg_dump format for cloning schemas

2018-03-13 Thread Melvin Davidson
> Thank you Melvin, I forgot to mention I've already found your script
> before I asked here, but I didn’t think it was robust enough (please don't
> be offended :-). Particularly, it didn't work well on PostgreSQL 10.
>
>
Aldrin,

I apologize. I just tested and found that the reason it is failing is
that, once again, the catalogs have been changed. In this case, the
structure of sequences.
I am trying to find a workaround. I will get back to you when I do.


-- 
Melvin Davidson
Maj. Database & Exploration Specialist
Universe Exploration Command – UXC
Employment by invitation only!


Re: Postgresql upgrade to 9.5.12/10.3 changes pg_dump format for cloning schemas

2018-03-13 Thread Aldrin Martoq Ahumada


> On Mar 9, 2018, at 12:15 PM, Melvin Davidson  wrote:
> On Fri, Mar 9, 2018 at 10:00 AM, Aldrin Martoq Ahumada 
> > wrote:
> Yes, here is the issue: https://github.com/influitive/apartment/issues/532 
> 
> It happens if you configured apartment with use_sql=true, which means it 
> clones the schema from pg_dump. My first attempt was to “fix” the script 
> generated by pg_dump, but I feel it will be a mess. We solved our issue going 
> back to use_sql=false, which is the default (creates the schema from 
> db/schema.rb). But there is people that have other requirements, like 
> functions, so the easier way for them is to keep use_sql and replace strings 
> in the script.
>  > ...what would be the best way to clone a schema into another?
>  
>  The safest way is to use pg_dump -F p -n <schema_name> > schema.sql
>  Then edit schema.sql and change all references to old_schema name to 
> new_schema name.
>  Finally, use psql < schema.sql to create the new_schema.
>  
>  That being said, a year ago I optimized a function originally written by 
> Emanuel '3manuek'
>  called clone_schema, which is added to the public schema. It clones all 
> sequences, tables, 
>  indexes, rules, triggers, data (optional), views & functions from any 
> existing schema to a 
>  new schema.
>  SAMPLE CALL:
>  SELECT clone_schema('public', 'new_schema', TRUE);
>  
>  I've attached it for your convenience.
>  disclaimer: I do not accept any responsibility for any unknown bugs in the 
> function. 
>  Test first and use at your own risk.

Thank you Melvin, I forgot to mention I've already found your script before I 
asked here, but I didn’t think it was robust enough (please don't be 
offended :-). Particularly, it didn't work well on PostgreSQL 10.


I think the solution for the long term is to add a flag to pg_dump, but in the 
short/mid term we’ll have to replicate some form of your script into the gem.


Cheers,
Aldrin.

PS: I’ve added initial support for PostgreSQL 10 to clone_schema here, but I 
have no time to test it well:
https://gist.github.com/aldrinmartoq/5df0aa03f86f3ad03982c793753c04a1 





Re: Programmatically duplicating a schema

2018-03-13 Thread Adrian Klaver

On 03/12/2018 11:05 PM, matt.f...@internode.on.net wrote:

Thanks Adrian,

Really appreciate the suggestions.

The objective when trying to solve this for the Apartment library itself 
is to keep it generic (works for any/all database objects - tables, 
views, stored procedures, 3rd party extension objects, etc.) & to 
require minimal configuration (preferably not having to tell the library 
your schema).


I believe pg_dump was the only way to satisfy the second point.  The 
reason we'd want to infer the create statements via pg_dump is so we 
don't need to keep database migration files in sync with a 'create new 
schema' SQL script.  A separate script adds the risk that the two get 
out of sync, causing inconsistencies in new schemas created by the library.


Seems to me your best hope is to make a plea on --hackers for a flag 
that turns off schema qualification of object names.




Assuming there's no other way to infer the create statements from the 
public schema, Ruby on Rails' structure.sql could probably be used as a 
starting point for the 'create new schema' SQL file.  It's similar 
already, however it's also generated via pg_dump (having the same issues 
as the Apartment library). 
http://guides.rubyonrails.org/v3.2.9/migrations.html#types-of-schema-dumps  
This is outside the realm of this mailing list, though.


Cheers,

Matt.





--
Adrian Klaver
adrian.kla...@aklaver.com



Re: UPSERT on a view

2018-03-13 Thread Tom Lane
Melvin Davidson  writes:
> On Mon, Mar 12, 2018 at 2:26 PM, Steven Roth  wrote:
>> Why does the following code raise an error?
>> 
>> CREATE TABLE ttest (x integer);
>> CREATE VIEW vtest AS SELECT x FROM ttest;
>> CREATE FUNCTION vtest_insert() RETURNS trigger LANGUAGE plpgsql AS $$
>> BEGIN
>> RAISE 'foo' USING ERRCODE='unique_violation';
>> END $$;
>> CREATE TRIGGER vtest_insert INSTEAD OF INSERT ON vtest
>> FOR EACH ROW EXECUTE PROCEDURE vtest_insert();
>> INSERT INTO vtest VALUES (1) ON CONFLICT DO NOTHING;
>> 
>> This code raises the error 'foo', even though the insert says DO NOTHING
>> and the error type is unique_violation.  Why?

> Your problem is that A. TRIGGERS are meant for TABLES, not views
> and
> B. You CANNOT insert into a VIEW.

No, instead-of triggers are supposed to do exactly that.  The OP's problem
is that he imagines that ON CONFLICT works by catching arbitrary errors and
checking to see if they are labeled unique_violation.  Doesn't work like
that; if it did, it'd be the wrong thing, because we would not know
whether or how to roll back any other side effects the trigger had had.

>> More generally:  how can one write trigger functions for a view (that is
>> not automatically updatable), such that INSERT ... ON CONFLICT DO UPDATE
>> will work with the correct semantics?

Don't think it's possible unfortunately.  The trigger would have to take
care of the situation, but it doesn't have any way to know that the
calling query has an ON CONFLICT rule.
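
What the trigger itself can do is perform the conflict handling against 
the base table. A minimal sketch, assuming a unique constraint on 
ttest.x (without one there is nothing for ON CONFLICT to detect), and 
keeping in mind that this still ignores any ON CONFLICT clause supplied 
by the caller:

CREATE OR REPLACE FUNCTION vtest_insert() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
    -- upsert on the base table instead of raising an error
    INSERT INTO ttest (x) VALUES (NEW.x)
    ON CONFLICT (x) DO NOTHING;
    RETURN NEW;
END $$;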

regards, tom lane



Re: UPSERT on a view

2018-03-13 Thread Melvin Davidson
On Tue, Mar 13, 2018 at 10:02 AM, David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Tue, Mar 13, 2018 at 6:47 AM, Melvin Davidson 
> wrote:
>
>> On Mon, Mar 12, 2018 at 2:26 PM, Steven Roth 
>> wrote:
>>
>>> This code raises the error 'foo', even though the insert says DO NOTHING
>>> and the error type is unique_violation.  Why?
>>>
>>> More generally:  how can one write trigger functions for a view (that is
>>> not automatically updatable), such that INSERT ... ON CONFLICT DO UPDATE
>>> will work with the correct semantics?  What can one do in the INSERT
>>> trigger that will cause PostgreSQL to execute the caller-supplied UPDATE
>>> clause?
>>>
>>>
> Sorry, not sure...
>
>
>> >CREATE TRIGGER vtest_insert INSTEAD OF INSERT ON vtest
>> >FOR EACH ROW EXECUTE PROCEDURE vtest_insert();
>> >INSERT INTO vtest VALUES (1) ON CONFLICT DO NOTHING;
>>
>> Your problem is that A. TRIGGERS are meant for TABLES, not views
>>
>> and
>>
>> B. You CANNOT insert into a VIEW.
>>
>>
> Your knowledge in this area is out-of-date... you should read the page you
> linked to again.
>
> I'll admit "table_name" probably could be labelled
> "relation_name"...though that is then too broad.
>
> David J.
>
>
>

>Your knowledge in this area is out-of-date... you should read the page you
linked to again.
Yes, you can create a TRIGGER on a view, but you CANNOT INSERT INTO A VIEW.

IE:
CREATE TRIGGER vtest_insert INSTEAD OF INSERT ON vtest
FOR EACH ROW EXECUTE PROCEDURE vtest_insert();
INSERT INTO vtest VALUES (1) ON CONFLICT DO NOTHING;

Which is the cause of the error!



-- 
Melvin Davidson
Maj. Database & Exploration Specialist
Universe Exploration Command – UXC
Employment by invitation only!


Re: UPSERT on a view

2018-03-13 Thread David G. Johnston
On Tue, Mar 13, 2018 at 6:47 AM, Melvin Davidson 
wrote:

> On Mon, Mar 12, 2018 at 2:26 PM, Steven Roth 
> wrote:
>
>> This code raises the error 'foo', even though the insert says DO NOTHING
>> and the error type is unique_violation.  Why?
>>
>> More generally:  how can one write trigger functions for a view (that is
>> not automatically updatable), such that INSERT ... ON CONFLICT DO UPDATE
>> will work with the correct semantics?  What can one do in the INSERT
>> trigger that will cause PostgreSQL to execute the caller-supplied UPDATE
>> clause?
>>
>>
Sorry, not sure...

> >CREATE TRIGGER vtest_insert INSTEAD OF INSERT ON vtest
> >FOR EACH ROW EXECUTE PROCEDURE vtest_insert();
> >INSERT INTO vtest VALUES (1) ON CONFLICT DO NOTHING;
>
>
>
> Your problem is that A. TRIGGERS are meant for TABLES, not views
>
> and
>
> B. You CANNOT insert into a VIEW.
>
>
Your knowledge in this area is out-of-date... you should read the page you
linked to again.

I'll admit "table_name" probably could be labelled "relation_name"...though
that is then too broad.

David J.


Re: Question on corruption (PostgreSQL 9.6.1)

2018-03-13 Thread Tom Lane
Andy Halsall  writes:
>  db=# select * from x where col_a = 4675635;
>  col_a | col_b | col_c | col_d | col_e | last_modified
> -------+-------+-------+-------+-------+---------------
>        |       |       |       |       |
> (1 row)

> Row 4675635 is very odd - NULL columns and at the same time retrievable by a 
> value in col_a.

Doesn't seem particularly surprising if col_a is indexed.  That query
would choose an indexscan plan, which would normally not bother to
re-verify the index condition against heap tuples found via the index.

If you're continuing to use this damaged database, it might be a good
idea to try to REINDEX all your indexes.  That'd be particularly
useful for primary/unique indexes, since if corruption has led to
any apparent duplicate rows, the reindex would fail and complain.
But in any case it'd clean up heap-vs-index inconsistencies like the
above, as well as repairing any cases where the corruption was in an
index rather than heap.

Another test I'd strongly recommend is to see if you can pg_dumpall
and reload into a spare server.  That might catch forms of data
corruption that reindexing would not, such as violated CHECK constraints.
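
A minimal sketch of both checks (database and host names are placeholders):

-- run while connected to the damaged database; any apparent duplicate
-- rows will make the corresponding unique/primary key reindex fail
REINDEX DATABASE app_db;

# then dump everything and reload into a spare server
pg_dumpall -h damaged_host > all.sql
psql -h spare_host -f all.sql postgres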

regards, tom lane



Re: Programmatically duplicating a schema

2018-03-13 Thread David G. Johnston
On Mon, Mar 12, 2018 at 11:05 PM, matt.f...@internode.on.net wrote:

> The reason we'd want to infer the create statements via pg_dump is, so we
> don't need to keep database migration files in sync with a 'create new
> schema' SQL script.  It adds risk that they get out of sync, causing
> inconsistencies in new schemas created by the library.
>
Then don't have a "create new schema" script (aside from the very first
one); run your migrations even when creating new installations.

David J.


Re: UPSERT on a view

2018-03-13 Thread Melvin Davidson
On Mon, Mar 12, 2018 at 2:26 PM, Steven Roth  wrote:

> Why does the following code raise an error?
>
> CREATE TABLE ttest (x integer);
> CREATE VIEW vtest AS SELECT x FROM ttest;
> CREATE FUNCTION vtest_insert() RETURNS trigger LANGUAGE plpgsql AS $$
> BEGIN
> RAISE 'foo' USING ERRCODE='unique_violation';
> END $$;
> CREATE TRIGGER vtest_insert INSTEAD OF INSERT ON vtest
> FOR EACH ROW EXECUTE PROCEDURE vtest_insert();
> INSERT INTO vtest VALUES (1) ON CONFLICT DO NOTHING;
>
> This code raises the error 'foo', even though the insert says DO NOTHING
> and the error type is unique_violation.  Why?
>
> More generally:  how can one write trigger functions for a view (that is
> not automatically updatable), such that INSERT ... ON CONFLICT DO UPDATE
> will work with the correct semantics?  What can one do in the INSERT
> trigger that will cause PostgreSQL to execute the caller-supplied UPDATE
> clause?
>
> Thanks,
> Steve
>
>
>CREATE TRIGGER vtest_insert INSTEAD OF INSERT ON vtest
>FOR EACH ROW EXECUTE PROCEDURE vtest_insert();
>INSERT INTO vtest VALUES (1) ON CONFLICT DO NOTHING;



Your problem is that A. TRIGGERS are meant for TABLES, not views

and

B. You CANNOT insert into a VIEW.



https://www.postgresql.org/docs/9.6/static/sql-createtrigger.html

CREATE [ CONSTRAINT ] TRIGGER name { BEFORE | AFTER | INSTEAD OF } { event [ OR ... ] }
    ON table_name
    [ FROM referenced_table_name ]
    [ NOT DEFERRABLE | [ DEFERRABLE ] [ INITIALLY IMMEDIATE | INITIALLY DEFERRED ] ]
    [ FOR [ EACH ] { ROW | STATEMENT } ]
    [ WHEN ( condition ) ]
    EXECUTE PROCEDURE function_name ( arguments )


Please also note that it is very helpful if you specify PostgreSQL version
and O/S when submitting to this list.


-- 
Melvin Davidson
Maj. Database & Exploration Specialist
Universe Exploration Command – UXC
Employment by invitation only!


UPSERT on a view

2018-03-13 Thread Steven Roth
Why does the following code raise an error?

CREATE TABLE ttest (x integer);
CREATE VIEW vtest AS SELECT x FROM ttest;
CREATE FUNCTION vtest_insert() RETURNS trigger LANGUAGE plpgsql AS $$
BEGIN
RAISE 'foo' USING ERRCODE='unique_violation';
END $$;
CREATE TRIGGER vtest_insert INSTEAD OF INSERT ON vtest
FOR EACH ROW EXECUTE PROCEDURE vtest_insert();
INSERT INTO vtest VALUES (1) ON CONFLICT DO NOTHING;

This code raises the error 'foo', even though the insert says DO NOTHING
and the error type is unique_violation.  Why?

More generally:  how can one write trigger functions for a view (that is
not automatically updatable), such that INSERT ... ON CONFLICT DO UPDATE
will work with the correct semantics?  What can one do in the INSERT
trigger that will cause PostgreSQL to execute the caller-supplied UPDATE
clause?

Thanks,
Steve


Re: Programmatically duplicating a schema

2018-03-13 Thread Melvin Davidson
> What is a reliable way to programmatically & generically populate an
> empty schema with all the objects in the public schema as a template?

The simplest way is just to load the attached clone_schema function. It
was originally created by Emanuel '3manuek', which I enhanced. It now
copies all sequences, tables, indexes, rules, triggers, data (optional),
views & functions from any existing schema to a new schema. Then just
call the function.

SAMPLE CALL:
SELECT clone_schema('public', 'new_schema', TRUE);
Use FALSE if you do not want to copy data.
-- 
Melvin Davidson
Maj. Database & Exploration Specialist
Universe Exploration Command – UXC
Employment by invitation only!
-- Function: public.clone_schema(text, text, boolean)

-- DROP FUNCTION public.clone_schema(text, text, boolean);

CREATE OR REPLACE FUNCTION public.clone_schema(
source_schema text,
dest_schema text,
include_recs boolean)
  RETURNS void AS
$BODY$
--  Initial code by Emanuel '3manuek' 
--  Last revision 2017-04-17 by Melvin Davidson
--  Added SELECT REPLACE for schema views
--
--  This function will clone all sequences, tables, indexes, rules, triggers, 
--  data(optional), views & functions from any existing schema to a new schema
-- SAMPLE CALL:
-- SELECT clone_schema('public', 'new_schema', TRUE);

DECLARE
  src_oid  oid;
  tbl_oid  oid;
  func_oid oid;
  con_oid  oid;
  v_path   text;
  v_func   text;
  v_args   text;
  v_conname    text;
  v_rule   text;
  v_trig   text;
  object   text;
  buffer   text;
  srctbl   text;
  default_ text;
  v_column text;
  qry  text;
  dest_qry text;
  v_def    text;
  v_stat   integer;
  seqval   bigint;
  sq_last_value    bigint;
  sq_max_value bigint;
  sq_start_value   bigint;
  sq_increment_by  bigint;
  sq_min_value bigint;
  sq_cache_value   bigint;
  sq_log_cnt   bigint;
  sq_is_called boolean;
  sq_is_cycled boolean;
  sq_cycled    char(10);

BEGIN

-- Check that source_schema exists
  SELECT oid INTO src_oid
FROM pg_namespace
   WHERE nspname = quote_ident(source_schema);
  IF NOT FOUND
THEN 
RAISE NOTICE 'source schema % does not exist!', source_schema;
RETURN ;
  END IF;

  -- Check that dest_schema does not yet exist
  PERFORM nspname 
FROM pg_namespace
   WHERE nspname = quote_ident(dest_schema);
  IF FOUND
THEN 
RAISE NOTICE 'dest schema % already exists!', dest_schema;
RETURN ;
  END IF;

  EXECUTE 'CREATE SCHEMA ' || quote_ident(dest_schema) ;

  -- Add schema comment
  SELECT description INTO v_def
FROM pg_description
   WHERE objoid = src_oid
   AND objsubid = 0;
  IF FOUND 
THEN 
  EXECUTE 'COMMENT ON SCHEMA ' || quote_ident(dest_schema) || ' IS ' || 
quote_literal(v_def);
  END IF;
  
  -- Create sequences
  -- TODO: Find a way to make the sequence's owner the correct table.
  FOR object IN
SELECT sequence_name::text 
  FROM information_schema.sequences
 WHERE sequence_schema = quote_ident(source_schema)
  LOOP
EXECUTE 'CREATE SEQUENCE ' || quote_ident(dest_schema) || '.' || 
quote_ident(object);
srctbl := quote_ident(source_schema) || '.' || quote_ident(object);

EXECUTE 'SELECT last_value, max_value, start_value, increment_by, 
min_value, cache_value, log_cnt, is_cycled, is_called 
  FROM ' || quote_ident(source_schema) || '.' || 
quote_ident(object) || ';' 
  INTO sq_last_value, sq_max_value, sq_start_value, 
sq_increment_by, sq_min_value, sq_cache_value, sq_log_cnt, sq_is_cycled, 
sq_is_called ; 

IF sq_is_cycled 
  THEN 
sq_cycled := 'CYCLE';
ELSE
sq_cycled := 'NO CYCLE';
END IF;

EXECUTE 'ALTER SEQUENCE '   || quote_ident(dest_schema) || '.' || 
quote_ident(object) 
|| ' INCREMENT BY ' || sq_increment_by
|| ' MINVALUE ' || sq_min_value 
|| ' MAXVALUE ' || sq_max_value
|| ' START WITH '   || sq_start_value
|| ' RESTART '  || sq_min_value 
|| ' CACHE '|| sq_cache_value 
|| sq_cycled || ' ;' ;

buffer := quote_ident(dest_schema) || '.' || quote_ident(object);
IF include_recs 
THEN
EXECUTE 'SELECT setval( ''' || buffer || ''', ' || sq_last_value || 
', ' || sq_is_called || ');' ; 
ELSE
EXECUTE 'SELECT setval( ''' || buffer || ''', ' || sq_start_value 
|| ', ' || sq_is_called || ');' ;
END IF;

  -- add sequence comments
SELECT oid INTO tbl_oid
  FROM pg_class 
 WHERE relkind = 'S'
   AND relnamespace = src_oid
   AND relname = quote_ident(object);

SELECT description INTO v_def
  FROM pg_description
 WHERE objoid = tbl_oid
   AND objsubid = 0;

IF FOUND
  THEN  
EXECUTE 'COMMENT ON SEQUENCE ' || quote_ident(dest_schema) || '.' || 

Re: Point-in-time recovery after failover

2018-03-13 Thread Laurenz Albe
Dylan Luong wrote: 
> We are on Postgres 9.6 and we have primary/standby wal replication setup for 
> HA.
>  
> I am trying to perform a point-in-time recovery after a failover has occurred.
>  
> I extracted the base backups (tar files) to the data directory and extracted 
> the all the archive backups  (P1 and S1) to the archive directory.
> Followed the steps from
> Created recovery.conf with the following, but the recovery only recovers to 
> the point before the failover and comes online.
>  
> Example of  my recovery.conf :
> restore_command = 'cp /pg_backup/backup/archive/%f %p'
> recovery_target_time = '2018-03-13 13:54:00'
> recovery_target_inclusive = 'false'
> recovery_target_timeline = '6'  (I also tried recovery_target_timeline = 
> 'latest')
>  
> Is it possible to recover to a point in time after a failover, using a base 
> backup from before the failover and both archive backups from P1 and S1?
>  
>  
> Here is the log output :
>  
> 018-03-13 20:46:53 ACDT [154912]: [1-1] db=,user= app=,host= LOG:  database 
> system was interrupted; last known up at 2018-03-1
> 3 01:31:25 ACDT
> 2018-03-13 20:46:53 ACDT [154912]: [2-1] db=,user= app=,host= LOG:  restored 
> log file "0006.history" from archive
> cp: cannot stat ‘/pg_backup/backup/archive/0007.history’: No such file or 
> directory
> 2018-03-13 20:46:53 ACDT [154912]: [3-1] db=,user= app=,host= LOG:  starting 
> point-in-time recovery to 2018-03-13 13:54:00+10:3
> 0
> 2018-03-13 20:46:53 ACDT [154912]: [4-1] db=,user= app=,host= LOG:  restored 
> log file "0006.history" from archive
> cp: cannot stat ‘/pg_backup/backup/archive/000601110087’: No such 
> file or directory
> 2018-03-13 20:46:53 ACDT [154912]: [5-1] db=,user= app=,host= LOG:  restored 
> log file "000501110087" from archive
> cp: cannot stat ‘/pg_backup/backup/archive/0005.history’: No such file or 
> directory
> 2018-03-13 20:46:53 ACDT [154912]: [6-1] db=,user= app=,host= LOG:  redo 
> starts at 111/8728
> 2018-03-13 20:46:53 ACDT [154912]: [7-1] db=,user= app=,host= LOG:  
> consistent recovery state reached at 111/870B4418
> 2018-03-13 20:46:53 ACDT [154908]: [3-1] db=,user= app=,host= LOG:  database 
> system is ready to accept read only connections
> cp: cannot stat ‘/pg_backup/backup/archive/000601110088’: No such 
> file or directory
> 2018-03-13 20:46:53 ACDT [154912]: [8-1] db=,user= app=,host= LOG:  restored 
> log file "000501110088" from archive
> cp: cannot stat ‘/pg_backup/backup/archive/000601110089’: No such 
> file or directory
> 2018-03-13 20:46:53 ACDT [154912]: [9-1] db=,user= app=,host= LOG:  restored 
> log file "000501110089" from archive
> cp: cannot stat ‘/pg_backup/backup/archive/00060111008A’: No such 
> file or directory
> 2018-03-13 20:46:53 ACDT [154912]: [10-1] db=,user= app=,host= LOG:  restored 
> log file "00050111008A" from archive
[...]
> cp: cannot stat ‘/pg_backup/backup/archive/0006011100AD’: No such 
> file or directory
> 2018-03-13 20:46:57 ACDT [154912]: [45-1] db=,user= app=,host= LOG:  restored 
> log file "0005011100AD" from archive
> cp: cannot stat ‘/pg_backup/backup/archive/0006011100AE’: No such 
> file or directory
> cp: cannot stat ‘/pg_backup/backup/archive/0005011100AE’: No such 
> file or directory
> 2018-03-13 20:46:57 ACDT [154912]: [46-1] db=,user= app=,host= LOG:  redo 
> done at 111/ADFFE160
> 2018-03-13 20:46:57 ACDT [154912]: [47-1] db=,user= app=,host= LOG:  last 
> completed transaction was at log time 2018-03-13 13:4
> 0:02.36347+10:30
> 
>  
> Here is what is in the archive directory after extracting from the archive 
> backups and after database comes online:
>  
[...]
> -rw---. 1 postgres postgres 16777216 Mar 13 13:40 0005011100AA
> -rw---. 1 postgres postgres 16777216 Mar 13 13:40 0005011100AB
> -rw---. 1 postgres postgres 16777216 Mar 13 13:40 0005011100AC
> -rw---. 1 postgres postgres 16777216 Mar 13 13:40 0005011100AD
> -rw---. 1 postgres postgres 16777216 Mar 13 13:48 
> 0005011100B1.partial
> -rw---. 1 postgres postgres 16777216 Mar 13 13:49 0006011100B1
> -rw---. 1 postgres postgres 16777216 Mar 13 14:04 0006011100B2
> -rw---. 1 postgres postgres  221 Mar 13 13:48 0006.history
> -rw---. 1 postgres postgres 16777216 Mar 13 20:48 0007011100AD
> -rw---. 1 postgres postgres  271 Mar 13 20:46 0007.history

You are missing 0005011100AE or 0006011100AE.

One of these files should be there.

I suspect that there is a problem with your WAL archiving in connection with 
failover.

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com



Point-in-time recovery after failover

2018-03-13 Thread Dylan Luong
Hi

We are on Postgres 9.6 and we have primary/standby wal replication setup for HA.

For PITR, we have scheduled nightly base backup and hourly archive backups on 
the primary server.
https://www.postgresql.org/docs/9.6/static/continuous-archiving.html

I am trying to perform a point-in-time recovery after a failover has occurred.

For example:

1 am archive backup on P1
1:30 am base backup on P1 (primary) (timeline 5)
2 am archive backup on P1
3 am archive backup on P1
Etc..
1:49 pm failover P1 crashed, S1 got promoted. (timeline 6)
2 pm archive backup on S1
3 pm archive backup on S1

For example:
At 3:15 pm something got corrupted and we need to restore from backup to 1:54 pm.

I extracted the base backups (tar files) to the data directory and extracted 
the all the archive backups  (P1 and S1) to the archive directory.
Followed the steps from
Created recovery.conf with the following, but the recovery only recovers to 
the point before the failover and comes online.

Example of  my recovery.conf :
restore_command = 'cp /pg_backup/backup/archive/%f %p'
recovery_target_time = '2018-03-13 13:54:00'
recovery_target_inclusive = 'false'
recovery_target_timeline = '6'  (I also tried recovery_target_timeline = 
'latest')

Is it possible to recover to a point in time after a failover, using a base 
backup from before the failover and both archive backups from P1 and S1?


Here is the log output :

018-03-13 20:46:53 ACDT [154912]: [1-1] db=,user= app=,host= LOG:  database 
system was interrupted; last known up at 2018-03-1
3 01:31:25 ACDT
2018-03-13 20:46:53 ACDT [154912]: [2-1] db=,user= app=,host= LOG:  restored 
log file "0006.history" from archive
cp: cannot stat '/pg_backup/backup/archive/0007.history': No such file or 
directory
2018-03-13 20:46:53 ACDT [154912]: [3-1] db=,user= app=,host= LOG:  starting 
point-in-time recovery to 2018-03-13 13:54:00+10:3
0
2018-03-13 20:46:53 ACDT [154912]: [4-1] db=,user= app=,host= LOG:  restored 
log file "0006.history" from archive
cp: cannot stat '/pg_backup/backup/archive/000601110087': No such 
file or directory
2018-03-13 20:46:53 ACDT [154912]: [5-1] db=,user= app=,host= LOG:  restored 
log file "000501110087" from archive
cp: cannot stat '/pg_backup/backup/archive/0005.history': No such file or 
directory
2018-03-13 20:46:53 ACDT [154912]: [6-1] db=,user= app=,host= LOG:  redo starts 
at 111/8728
2018-03-13 20:46:53 ACDT [154912]: [7-1] db=,user= app=,host= LOG:  consistent 
recovery state reached at 111/870B4418
2018-03-13 20:46:53 ACDT [154908]: [3-1] db=,user= app=,host= LOG:  database 
system is ready to accept read only connections
cp: cannot stat '/pg_backup/backup/archive/000601110088': No such 
file or directory
2018-03-13 20:46:53 ACDT [154912]: [8-1] db=,user= app=,host= LOG:  restored 
log file "000501110088" from archive
cp: cannot stat '/pg_backup/backup/archive/000601110089': No such 
file or directory
2018-03-13 20:46:53 ACDT [154912]: [9-1] db=,user= app=,host= LOG:  restored 
log file "000501110089" from archive
cp: cannot stat '/pg_backup/backup/archive/00060111008A': No such 
file or directory
2018-03-13 20:46:53 ACDT [154912]: [10-1] db=,user= app=,host= LOG:  restored 
log file "00050111008A" from archive
cp: cannot stat '/pg_backup/backup/archive/00060111008B': No such 
file or directory
2018-03-13 20:46:53 ACDT [154912]: [11-1] db=,user= app=,host= LOG:  restored 
log file "00050111008B" from archive
cp: cannot stat '/pg_backup/backup/archive/00060111008C': No such 
file or directory
2018-03-13 20:46:53 ACDT [154912]: [12-1] db=,user= app=,host= LOG:  restored 
log file "00050111008C" from archive
cp: cannot stat '/pg_backup/backup/archive/00060111008D': No such 
file or directory
2018-03-13 20:46:53 ACDT [154912]: [13-1] db=,user= app=,host= LOG:  restored 
log file "00050111008D" from archive
cp: cannot stat '/pg_backup/backup/archive/00060111008E': No such 
file or directory
2018-03-13 20:46:53 ACDT [154912]: [14-1] db=,user= app=,host= LOG:  restored 
log file "00050111008E" from archive
cp: cannot stat '/pg_backup/backup/archive/00060111008F': No such 
file or directory
2018-03-13 20:46:54 ACDT [154912]: [15-1] db=,user= app=,host= LOG:  restored 
log file "00050111008F" from archive
cp: cannot stat '/pg_backup/backup/archive/000601110090': No such 
file or directory
2018-03-13 20:46:54 ACDT [154912]: [16-1] db=,user= app=,host= LOG:  restored 
log file "000501110090" from archive
cp: cannot stat '/pg_backup/backup/archive/000601110091': No such 
file or directory
2018-03-13 20:46:54 ACDT [154912]: [17-1] db=,user= app=,host= LOG:  restored 
log file "000501110091" from archive
cp: cannot stat '/pg_backup/backup/archive/000601110092': No such 
file or 

Re: Programmatically duplicating a schema

2018-03-13 Thread Alban Hertroys

> On 13 Mar 2018, at 4:23, matt.f...@internode.on.net wrote:
> 
> Hi all,
> 
> What is a reliable way to programmatically & generically populate an empty 
> schema with all the objects in the public schema as a template?
> 
> We are using the multi tenancy ruby gem Apartment ( 
> https://github.com/influitive/apartment ), which was recently broken by the 
> changes made to pg_dump to address CVE-2018-1058 
> https://nvd.nist.gov/vuln/detail/CVE-2018-1058
> 
> Apartment attempts to duplicate the public schema whenever creating a new 
> schema by running:
> 
> pg_dump -s -x -O -n public
> 
> to get the SQL statements needed to recreate the public schema & then 
> executes the pg_dump's sql output after creating & switching to the new 
> schema ( via set search_path to <new_schema>; )
> 
> After the fix to CVE-2018-1058, all table references in pg_dump's output 
> (including within SQL of stored procedures) are prefixed by the public. 
> schema, which means you cannot just reuse this output in a different schema 
> context without first manually changing the sql.
> As a temporary fix so we can handle new customers in production, we are using 
> a regex search/replace for public. in the pg_dump output, but clearly this is 
> not a reliable solution for a generic gem such as Apartment.
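
A minimal sketch of that temporary workaround as described (database name 
is a placeholder; the word-boundary anchor in GNU sed reduces, but does 
not eliminate, the risk of rewriting matching strings inside function 
bodies or comments):

psql -c 'CREATE SCHEMA new_schema' app_db
pg_dump -s -x -O -n public app_db \
  | sed 's/\bpublic\./new_schema./g' \
  | psql app_db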

In my opinion, it makes sense that if you have the option of dumping the 
contents of a specific schema, it should be possible to restore that dump into 
a different schema. Unfortunately, looking at pg_restore, there does not appear 
to be such an option (yet).

I'd even go so far to suggest that every single object type that can be dumped 
with pg_dump (single database, single schema, single table, single function, 
etc) should be restorable under a different name. I realise that this could 
make pg_restore options potentially more confusing.

I suppose people currently manually edit the dumps to this effect, but that 
risks silent corruption of data when for example a data value contains a string 
such as 'The building is now open to public.'. Regular expressions don't know 
the difference between data and identifiers in a dump file - pg_restore does.

Whether psql needs the same treatment? I'd qualify this as "advanced" use and 
limit it to pg_restore.

But then, I'm just a list-lurker; I currently have but the option of voicing my 
opinion.

Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.




Re: Logical decoding on standby

2018-03-13 Thread Andreas Kretschmer



On 13.03.2018 at 02:40, Andres Freund wrote:


The subject said logical decoding, not replication. There's a lot of
change data capture type workloads where decoding from the standby is
quite useful. And the design definitely would work for that, we've
explicitly took that into consideration.

Greetings,

Andres Freund



Ah, thx, didn't know that.


Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




Re: Programmatically duplicating a schema

2018-03-13 Thread matt.figg


Thanks Adrian,

  

Really appreciate the suggestions.

  

The objective when trying to solve this for the Apartment library
itself is to keep it generic (works for any/all database objects -
tables, views, stored procedures, 3rd party extension objects, etc.) &
to require minimal configuration (preferably not having to tell the
library your schema).

  

I believe pg_dump was the only way to satisfy the second point.  The
reason we'd want to infer the create statements via pg_dump is so we
don't need to keep database migration files in sync with a 'create new
schema' SQL script.  A separate script adds the risk that the two get
out of sync, causing inconsistencies in new schemas created by the library.

  

Assuming there's no other way to infer the create statements from the
public schema, Ruby on Rails' structure.sql could probably be used as
a starting point for the 'create new schema' SQL file.  It's similar
already, however it's also generated via pg_dump (having the same
issues as the Apartment library).
http://guides.rubyonrails.org/v3.2.9/migrations.html#types-of-schema-dumps
[1]  This is outside the realm of this mailing list, though.

  

Cheers,

Matt.

  

- Original Message -
From: "Adrian Klaver"
Sent: Mon, 12 Mar 2018 21:22:01 -0700
Subject: Re: Programmatically duplicating a schema

 On 03/12/2018 08:23 PM, matt.f...@internode.on.net wrote:
 > Hi all,
 > 
 > What is a reliable way to programmatically & generically populate
an 
 > empty schema with all the objects in the public schema as a
template?
 > 
 > We are using the multi tenancy ruby gem Apartment ( 
 > https://github.com/influitive/apartment ), which was recently
broken by 
 > the changes made to pg_dump to address CVE-2018-1058 
 > https://nvd.nist.gov/vuln/detail/CVE-2018-1058
 > 
 > Apartment attempts to duplicate the public schema whenever creating
a 
 > new schema by running:
 > 
 > pg_dump -s -x -O -n public
 > 
 > to get the SQL statements needed to recreate the public schema &
then 
 > executes the pg_dump's sql output after creating & switching to the
new 
 > schema ( via set search_path to <new_schema>; )
 > 
 > After the fix to CVE-2018-1058, all table references in pg_dump's
output 
 > (including within SQL of stored procedures) are prefixed by the
public. 
 > schema, which means you cannot just reuse this output in a
different 
 > schema context without first manually changing the sql.
 > As a temporary fix so we can handle new customers in production, we
are 
 > using a regex search/replace for public. in the pg_dump output, but

 > clearly this is not a reliable solution for a generic gem such as
Apartment.
 > 
 > Is there a different way this gem could now be achieving this?
 > Without a flag on pg_dump to force the old format, or a CREATE SCHEMA 
 > <new_schema> WITH TEMPLATE <old_schema> kind of statement, we can't 
 > see any alternative more reliable than the regex search/replace.

 Wild idea:

 1) Roll back to:
 
https://git.postgresql.org/gitweb/?p=postgresql.git;a=tree;h=c987089c48afba99df0c3c2dc45fd69238b52705;hb=2840d201c6a62beb86d9671a66eeec56183d261b

 2) Use that to build a pre-10.3 version of pg_dump and use that to dump 
 your schema. Basically a long way to force the old format. It is fragile 
 though, as that version will lose contact with changes.

 Less wild idea and previously suggested upstream.

 1) Check your schema objects into scripts that are checked into
version 
 control w/o schema qualifications.

 2) Use scripts to populate new schema.

 > 
 > The open issue on the Apartment gem for 
 > context: https://github.com/influitive/apartment/issues/532
 > 
 > Thanks in advance.

 -- 
 Adrian Klaver
 adrian.kla...@aklaver.com


Links:
--
[1]
http://guides.rubyonrails.org/v3.2.9/migrations.html#types-of-schema-dumps