Re: [GENERAL] postgres function

2015-10-16 Thread Jim Nasby

On 10/14/15 8:38 AM, Ramesh T wrote:

Hi All,
   Do we have a function like regexp_substr in Postgres?

In Oracle this function searches the string from position 1 and returns the
second match, i.e. the text between the first and second '-':
regexp_substr(PART_CATG_DESC,'[^-]+', 1, 2)


Use regexp_split_to_array(string text, pattern text [, flags text ]):

SELECT regexp_split_to_array('1-2-3-4-5', '-');
 regexp_split_to_array
-----------------------
 {1,2,3,4,5}

If you just want one part of the array:

SELECT (regexp_split_to_array('1-2-3-4-5', '-'))[2];
 regexp_split_to_array
-----------------------
 2

(Note the extra ()s)
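
If you'd rather keep Oracle's exact call shape, a small wrapper is easy to
sketch (oracle_regexp_substr is a hypothetical name, not a built-in; WITH
ORDINALITY needs 9.4, and m[1] is the whole match only while the pattern has
no capturing groups):

CREATE FUNCTION oracle_regexp_substr(str text, pat text, pos int, occurrence int)
RETURNS text LANGUAGE sql IMMUTABLE AS $$
    -- scan from position pos, keep the occurrence-th match
    SELECT x.m[1]
    FROM regexp_matches(substr(str, pos), pat, 'g') WITH ORDINALITY AS x(m, n)
    WHERE x.n = occurrence
$$;

SELECT oracle_regexp_substr('1-2-3-4-5', '[^-]+', 1, 2);  -- returns '2'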

If that's not what you need then, as David suggested, please provide a few 
input values and what you expect as your *final* output, i.e. tell us 
what you're ultimately trying to do, instead of just asking about regexp 
matching. There may be a much better way to do it in Postgres than 
whatever you were doing in Oracle.

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] converting in() clause into a with prefix?

2015-10-16 Thread Jim Nasby

On 10/16/15 1:18 PM, Benjamin Smith wrote:

I have a horribly-performing query similar to below, and I'd like to convert
it to use a "WITH mytable AS ( ... )" without having to re-architect my code.
For some reason, using a WITH prefix generally seems to run much faster than
an IN() subclause even when returning identical results (it runs in 1/4th the time).


Presumably it's faster because WITH is currently an optimization fence. 
Instead of WITH, you could also do


WHERE (classes.school_id, classes.building_id) IN (SELECT a, b FROM
unnest(...) u(a, b))
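
For example, a minimal sketch assuming the pairs arrive as two parallel
integer arrays (multi-argument unnest requires 9.4, which Ben is running):

SELECT classes.id, classes.name
FROM classes
WHERE (classes.school_id, classes.building_id) IN (
    SELECT a, b
    FROM unnest(ARRAY[291,291,200], ARRAY[189,192,455]) AS u(a, b)
);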

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] ID column naming convention

2015-10-16 Thread Jim Nasby

On 10/15/15 8:28 PM, Gavin Flower wrote:

It would seem very dodgy to use a join based on the apparently very
different semantics implied by 'blah.person_id = foo.invoice_id'!!! :-)

Just because 2 fields in different tables have the same name, it does not
necessarily mean they have the same semantics. For example, 2 tables
could each have a field named 'start_date', but the one in a table called
'employment' would have different semantics to the one in 'project'.


Right, which is why the fields should be called employment_start_date 
and project_start_date, not just start_date.



Since 'id' is only used to indicate a PRIMARY KEY, there is less
confusion in joins, and it is clear when something is a foreign key
rather than a PRIMARY KEY.  For example, if two tables both refer to the
same human, you can join using a.human_id = b.human_id - and it is
clearer when you are joining a child to a parent table, for example
line_item.stock_id = stock.id.

Adopting your convention would result in picking up not only the foreign
key references but also the primary keys - which may, or may not, be
helpful!


It generally shouldn't matter, because a person_id is *always* a 
person_id. In cases where it does matter then the field name alone 
probably won't help you much, if at all.


Of course, this has been a debate forever, so I know neither of us will 
convince the other to change. ;) I just wanted to point out some things 
that hadn't come up already.


BTW, I found Karsten's idea of using 'pk' for the surrogate key and 
fk_table_name for foreign keys interesting. It helps avoid ambiguity with 
externally generated ID values.
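
For example, a quick sketch of that convention (table definitions are
illustrative only):

CREATE TABLE stock (
    pk serial PRIMARY KEY
);

CREATE TABLE line_item (
    pk serial PRIMARY KEY,
    fk_stock int NOT NULL REFERENCES stock (pk)
);

A child-to-parent join then reads line_item.fk_stock = stock.pk, and a bare
'pk' can never be mistaken for an externally supplied identifier.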

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] question

2015-10-16 Thread Adrian Klaver

On 10/16/2015 12:10 PM, anj patnaik wrote:

Thanks. What is the recommended command/options for backup, and how do I
restore?

I found the below online. Let me know if this is better, and how to
restore. Thank you

pg_dump -Fc  '' | xz -3 > dump.xz


Again, why would you compress an already compressed output?

Also online:

http://www.postgresql.org/docs/9.4/interactive/app-pgdump.html

http://www.postgresql.org/docs/9.4/interactive/app-pgrestore.html

They step you through the backup and restore process.
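
For example, a minimal custom-format backup/restore pair (the table and
database names are taken from earlier in the thread; target_db is a
placeholder, and the rest is only a sketch):

# dump just the one table, custom (already compressed) format:
pg_dump -Fc -t RECORDER -f /tmp/recorder.dump postgres
# restore it into a target database that doesn't already contain it:
pg_restore -d target_db -t RECORDER /tmp/recorder.dump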



On Fri, Oct 16, 2015 at 4:05 AM, Francisco Olarte
<fola...@peoplecall.com> wrote:

On Fri, Oct 16, 2015 at 8:27 AM, Guillaume Lelarge
<guilla...@lelarge.info> wrote:
> 2015-10-15 23:05 GMT+02:00 Adrian Klaver <adrian.kla...@aklaver.com>:
>> On 10/15/2015 01:35 PM, anj patnaik wrote:
...
>>> ./pg_dump -t RECORDER  -Fc postgres |  gzip > /tmp/dump
>>> Are there any other options for large tables to run faster and occupy
>>> less disk space?
>> Yes, do not double compress. -Fc already compresses the file.
> Right. But I'd say "use custom format but do not compress with pg_dump". Use
> the -Z0 option to disable compression, and use an external multi-threaded
> tool such as pigz or pbzip2 to get faster and better compression.

Actually I would not recommend that, unless you are making a long-term
or offsite copy. Doing it means you need to decompress the dump before
restoring or even testing it (e.g. via pg_restore > /dev/null).

And if you are pressed for disk space you may corner yourself into a
situation where you do NOT have enough disk space for an uncompressed
dump. Given you are normally nervous enough when restoring, for normal
operations I think the built-in compression is better.

Also, I'm not current on the compressor -Fc uses, but I think it is
still gzip, which is not that bad and is normally quite fast. (In fact I
do not use pbzip2, but I did some tests about a year ago and found bzip2
was beaten by xz quite easily - meaning that for every level of bzip2,
one of the levels of xz beat it in BOTH size & time; that was for my
data, YMMV.)


Francisco Olarte.





--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] converting in() clause into a with prefix?

2015-10-16 Thread dinesh kumar
On Fri, Oct 16, 2015 at 11:18 AM, Benjamin Smith wrote:

> I have a horribly-performing query similar to below, and I'd like to
> convert it to use a "WITH mytable AS ( ... )" without having to
> re-architect my code. For some reason, using a WITH prefix generally
> seems to run much faster than an IN() subclause even when returning
> identical results (it runs in 1/4th the time).
>
> Is there a PG native function that can convert the listing format of in()
> clause to row-level results from a WITH prefix? I see the array* functions
> but
> they seem to work with arrays like "array[1,2,3]" and unnest seems to drill
> right through nested arrays, flattening every single element to a new row,
> regardless of depth. E.g. the following two lines are equivalent:
>
> select unnest(array[1,2,2,3]);
> select unnest(array[array[1,2],array[2,3]]);
>
> I'd expect the latter to put out two rows as
>
> 1, 2
> 2, 3
>
> Thanks for your input; clarifying pseudo-code examples are below (PHP). We're
> running 9.4.4 on CentOS 6.
>
> Ben
>
>
> // DESIRED END RESULT PSEUDO CODE
> $query = "
> WITH mytable AS
> (
> unnest(". $in .", school_id, building_id)
> )
> SELECT
> id,
> name
> FROM mytable
> JOIN classes ON
> (
> mytable.school_id = classes.school_id
> AND mytable.building_id = classes.building_id
> )" ;
>
>
>
Ignore this approach if you have already tried something like the below.

PREPARE stmt(int[], int[]) AS WITH incla AS (
    SELECT a, b FROM unnest($1, $2) AS u(a, b)  -- pair up the two arrays
)
SELECT * FROM incla WHERE (a, b) = (1, 2);

EXECUTE stmt(ARRAY[1,2], ARRAY[2,1]);




> // CURRENT CODE EXAMPLE (PHP)
> $query = "
> SELECT
> id,
> name
> FROM classes
> WHERE
> (classes.school_id, classes.building_id) IN (" . $in . ")";
>
>
> // EXAMPLE RESULT (small list)
> SELECT
> id,
> name
> FROM classes
> WHERE
> (classes.school_id, classes.building_id) IN ((291,189),(291,192),
> (291,191),(291,195),(291,198),(291,197),(291,188),(291,184),(291,199),
> (291,193),(291,185),(291,194),(291,-2),(291,186),(291,183),(291,187),
> (291,200),(291,190),(291,196),(291,202),(200,455),(200,433),(200,442),
> (200,440),(200,445),(200,438),(200,437),(200,450),(200,452),(200,459),
> (200,456),(200,432),(200,441),(200,460),(200,446),(200,447),(200,448),
> (200,449),(200,434),(200,443),(200,-2),(200,444),(200,457),(200,458),
> (200,451),(200,454),(200,453),(246,-1),(246,190),(246,186),(246,188),
> (246,195),(246,196),(246,197),(246,192),(246,-2),(246,184),(246,189),
> (246,193),(246,194),(246,198),(246,191),(246,199),(246,187),(246,185),
> (246,183),(246,200),(63,-1),(63,269),(63,267),(63,264),(63,258),(63,126),
> (63,270),(63,262),(63,127),(63,134),(63,263),(63,271),(63,272),(63,265),
> (63,-2),(63,259),(63,260),(63,266),(9,-1),(9,283),(9,192),(9,266),(9,276),
> (9,313),(9,294),(9,311),(9,298),(9,299),(9,300),(9,312),(9,310),(9,263),
> (9,272),(9,305),(9,314),(9,152),(9,284),(9,277),(9,260),(9,271),(9,278),
> (9,315),(9,287),(9,289),(9,316),(9,288),(9,290),(9,259),(9,258),(9,269),
> (9,279),(9,265),(9,273),(9,280),(9,64),(9,264),(9,281),(9,317),(9,304),
> (9,303),(9,322),(9,267),(9,282),(9,318),(9,306),(9,295),(9,319),(9,301),
>
> (9,302),(9,43),(9,268),(9,275),(9,-2),(9,262),(9,296),(9,297),(9,320),(9,286),
> (9,274),(9,307),(9,308),(9,321),(9,270),(9,261),(9,309),(9,292),(9,293),
>
> (9,285),(9,291),(94,-1),(94,280),(94,285),(94,268),(94,265),(94,278),(94,283),
> (94,287),(94,288),(94,289),(94,284),(94,281),(94,110),(94,262),(94,259),
> (94,273),(94,270),(94,266),(94,258),(94,269),(94,274),(94,263),(94,290),
> (94,272),(94,264),(94,260),(94,279),(94,261),(94,286),(94,275),(94,277),
> (94,291),(94,109),(94,-2),(94,257),(111,-1),(111,452),(111,486),(111,404),
> (111,456),(111,455),(111,485),(111,484),(111,483),(111,457),(111,441),
> (111,468),(111,430),(111,442),(111,446),(111,490),(111,440),(111,466),
> (111,491),(111,449),(111,454),(111,461),(111,492),(111,469),(111,465),
> (111,458),(111,426),(111,443),(111,493),(111,488),(111,475),(111,481),
> (111,482),(111,474),(111,477),(111,478),(111,476),(111,479),(111,480),
> (111,471),(111,463),(111,450),(111,470),(111,494),(111,447),(111,448),
> (111,495),(111,-2),(111,473),(111,496),(111,429),(111,487),(111,497),
> (111,167),(111,444),(111,451),(111,431),(111,445),(111,166),(111,453),
> (111,467),(111,462),(111,464),(111,472),(111,489),(334,-1),(334,188),
> (334,189),(334,203),(334,201),(334,200),(334,193),(334,192),(334,191),
> (334,202),(334,185),(334,190),(334,204),(334,186),(334,187),(334,197),
> (334,199),(334,205),(334,184),(334,194),(334,-2),(334,196),(334,183),
> (334,195),(334,206),(334,198),(334,207),(201,-1),(201,445),(201,442),
> (201,446),(201,468),(201,458),(201,455),(201,456),(201,469),(201,454),
> (201,453),(201,466),(201,474),(201,443),(201,441),(201,440),(201,447),
> (201,470),(201,460),(201,459),(201,461),(201,471),(201,465),(201,463),
> (201,448),(201,451),(201,472),(201,457),(201,-2),(201,462),(201,46

Re: [GENERAL] converting in() clause into a with prefix?

2015-10-16 Thread David G. Johnston
On Fri, Oct 16, 2015 at 2:18 PM, Benjamin Smith wrote:

> I have a horribly-performing query similar to below, and I'd like to
> convert it to use a "WITH mytable AS ( ... )" without having to
> re-architect my code. For some reason, using a WITH prefix generally
> seems to run much faster than an IN() subclause even when returning
> identical results (it runs in 1/4th the time).
>
> Is there a PG native function that can convert the listing format of in()
> clause to row-level results from a WITH prefix? I see the array* functions
> but
> they seem to work with arrays like "array[1,2,3]" and unnest seems to drill
> right through nested arrays, flattening every single element to a new row,
> regardless of depth. E.g. the following two lines are equivalent:
>
> select unnest(array[1,2,2,3]);
> select unnest(array[array[1,2],array[2,3]]);
>
> I'd expect the latter to put out two rows as
>
> 1, 2
> 2, 3
>
> Thanks for your input; clarifying pseudo-code examples are below (PHP). We're
> running 9.4.4 on CentOS 6.
>
> Ben
>
>
> // DESIRED END RESULT PSEUDO CODE
> $query = "
> WITH mytable AS
> (
> unnest(". $in .", school_id, building_id)
> )
> SELECT
> id,
> name
> FROM mytable
> JOIN classes ON
> (
> mytable.school_id = classes.school_id
> AND mytable.building_id = classes.building_id
> )" ;
>
>
> // CURRENT CODE EXAMPLE (PHP)
> $query = "
> SELECT
> id,
> name
> FROM classes
> WHERE
> (classes.school_id, classes.building_id) IN (" . $in . ")";
>
>
> // EXAMPLE RESULT (small list)
> SELECT
> id,
> name
> FROM classes
> WHERE
> (classes.school_id, classes.building_id) IN ((291,189),(291,192),
> [...]
>
>
>
WITH input_table AS (
    SELECT
        split_part(input_as_table, ',', 1) AS col1,
        split_part(input_as_table, ',', 2) AS col2
    FROM
        regexp_split_to_table(
            substring('(1,2),(3,4)', 2, 11-2),  -- 11 = length of input string
            '\),\('
        ) AS input_as_table
)

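A sketch of consuming that CTE against Ben's query (classes, school_id, and
building_id come from his post; the ::int casts are an assumption, since
regexp_split_to_table yields text):

WITH input_table AS (
    SELECT split_part(pair, ',', 1)::int AS school_id,
           split_part(pair, ',', 2)::int AS building_id
    FROM regexp_split_to_table(
             substring('(291,189),(291,192)', 2, length('(291,189),(291,192)') - 2),
             '\),\('
         ) AS pair
)
SELECT classes.id, classes.name
FROM classes
JOIN input_table USING (school_id, building_id);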

[GENERAL] converting in() clause into a with prefix?

2015-10-16 Thread Benjamin Smith
I have a horribly-performing query similar to below, and I'd like to convert
it to use a "WITH mytable AS ( ... )" without having to re-architect my code.
For some reason, using a WITH prefix generally seems to run much faster than
an IN() subclause even when returning identical results (it runs in 1/4th the time).

Is there a PG native function that can convert the listing format of in() 
clause to row-level results from a WITH prefix? I see the array* functions but 
they seem to work with arrays like "array[1,2,3]" and unnest seems to drill 
right through nested arrays, flattening every single element to a new row,
regardless of depth. E.g. the following two lines are equivalent:

select unnest(array[1,2,2,3]);
select unnest(array[array[1,2],array[2,3]]); 

I'd expect the latter to put out two rows as
 
1, 2
2, 3

Thanks for your input; clarifying pseudo-code examples are below (PHP). We're
running 9.4.4 on CentOS 6.

Ben 


// DESIRED END RESULT PSEUDO CODE
$query = "
WITH mytable AS 
(
unnest(". $in .", school_id, building_id) 
)
SELECT
id, 
name
FROM mytable 
JOIN classes ON 
(
mytable.school_id = classes.school_id
AND mytable.building_id = classes.building_id 
)" ;


// CURRENT CODE EXAMPLE (PHP) 
$query = "
SELECT 
id, 
name 
FROM classes 
WHERE 
(classes.school_id, classes.building_id) IN (" . $in . ")"; 


// EXAMPLE RESULT (small list) 
SELECT 
id, 
name 
FROM classes 
WHERE 
(classes.school_id, classes.building_id) IN ((291,189),(291,192),
(291,191),(291,195),(291,198),(291,197),(291,188),(291,184),(291,199),
(291,193),(291,185),(291,194),(291,-2),(291,186),(291,183),(291,187),
(291,200),(291,190),(291,196),(291,202),(200,455),(200,433),(200,442),
(200,440),(200,445),(200,438),(200,437),(200,450),(200,452),(200,459),
(200,456),(200,432),(200,441),(200,460),(200,446),(200,447),(200,448),
(200,449),(200,434),(200,443),(200,-2),(200,444),(200,457),(200,458),
(200,451),(200,454),(200,453),(246,-1),(246,190),(246,186),(246,188),
(246,195),(246,196),(246,197),(246,192),(246,-2),(246,184),(246,189),
(246,193),(246,194),(246,198),(246,191),(246,199),(246,187),(246,185),
(246,183),(246,200),(63,-1),(63,269),(63,267),(63,264),(63,258),(63,126),
(63,270),(63,262),(63,127),(63,134),(63,263),(63,271),(63,272),(63,265),
(63,-2),(63,259),(63,260),(63,266),(9,-1),(9,283),(9,192),(9,266),(9,276),
(9,313),(9,294),(9,311),(9,298),(9,299),(9,300),(9,312),(9,310),(9,263),
(9,272),(9,305),(9,314),(9,152),(9,284),(9,277),(9,260),(9,271),(9,278),
(9,315),(9,287),(9,289),(9,316),(9,288),(9,290),(9,259),(9,258),(9,269),
(9,279),(9,265),(9,273),(9,280),(9,64),(9,264),(9,281),(9,317),(9,304),
(9,303),(9,322),(9,267),(9,282),(9,318),(9,306),(9,295),(9,319),(9,301),
(9,302),(9,43),(9,268),(9,275),(9,-2),(9,262),(9,296),(9,297),(9,320),(9,286),
(9,274),(9,307),(9,308),(9,321),(9,270),(9,261),(9,309),(9,292),(9,293),
(9,285),(9,291),(94,-1),(94,280),(94,285),(94,268),(94,265),(94,278),(94,283),
(94,287),(94,288),(94,289),(94,284),(94,281),(94,110),(94,262),(94,259),
(94,273),(94,270),(94,266),(94,258),(94,269),(94,274),(94,263),(94,290),
(94,272),(94,264),(94,260),(94,279),(94,261),(94,286),(94,275),(94,277),
(94,291),(94,109),(94,-2),(94,257),(111,-1),(111,452),(111,486),(111,404),
(111,456),(111,455),(111,485),(111,484),(111,483),(111,457),(111,441),
(111,468),(111,430),(111,442),(111,446),(111,490),(111,440),(111,466),
(111,491),(111,449),(111,454),(111,461),(111,492),(111,469),(111,465),
(111,458),(111,426),(111,443),(111,493),(111,488),(111,475),(111,481),
(111,482),(111,474),(111,477),(111,478),(111,476),(111,479),(111,480),
(111,471),(111,463),(111,450),(111,470),(111,494),(111,447),(111,448),
(111,495),(111,-2),(111,473),(111,496),(111,429),(111,487),(111,497),
(111,167),(111,444),(111,451),(111,431),(111,445),(111,166),(111,453),
(111,467),(111,462),(111,464),(111,472),(111,489),(334,-1),(334,188),
(334,189),(334,203),(334,201),(334,200),(334,193),(334,192),(334,191),
(334,202),(334,185),(334,190),(334,204),(334,186),(334,187),(334,197),
(334,199),(334,205),(334,184),(334,194),(334,-2),(334,196),(334,183),
(334,195),(334,206),(334,198),(334,207),(201,-1),(201,445),(201,442),
(201,446),(201,468),(201,458),(201,455),(201,456),(201,469),(201,454),
(201,453),(201,466),(201,474),(201,443),(201,441),(201,440),(201,447),
(201,470),(201,460),(201,459),(201,461),(201,471),(201,465),(201,463),
(201,448),(201,451),(201,472),(201,457),(201,-2),(201,462),(201,464),
(201,452),(201,449),(201,450),(201,473),(201,467),(201,475),(62,-1),(62,279),
(62,280),(62,294),(62,281),(62,282),(62,285),(62,274),(62,299),(62,300),
(62,290),(62,291),(62,289),(62,273),(62,286),(62,194),(62,295),(62,275),
(62,-2),(62,292),(62,301),(62,196),(62,195),(62,296),(62,276),(62,284),
(62,287),(62,297),(62,288),(62,277),(62,298),(62,278),(188,-1),(188,443),
(188,446),(188,449),(188,453),(188,454),(188,455),(188,456),(188,450),
(188,445),(188,

Re: [GENERAL] Standby pg_dump Conflict with Recovery

2015-10-16 Thread Adrian Klaver

On 10/16/2015 07:50 AM, Louis Battuello wrote:




Also in the standby logs before and after the ERROR?


The primary log was clean. The standby contained the same error as the
pg_dump output log:

< 2015-10-15 01:10:50 EDT [42613] : [1-1]
user=postgres,db=,remote=::1(55426) > ERROR:  canceling statement
due to conflict with recovery
< 2015-10-15 01:10:50 EDT [42613] : [2-1]
user=postgres,db=,remote=::1(55426) > DETAIL:  User query might have
needed to see row versions that must be removed.
< 2015-10-15 01:10:50 EDT [42613] : [3-1]
user=postgres,db=,remote=::1(55426) > STATEMENT:  COPY  (...)
TO stdout;

I ran the pg_dump process again this morning, ensuring that the standby
parameters were set, and it completed successfully with the
hot_standby_feedback enabled.

postgres=# select name, setting, unit from pg_settings where category =
'Replication / Standby Servers';
             name             | setting | unit
------------------------------+---------+------
 hot_standby                  | on      |
 hot_standby_feedback         | on      |
 max_standby_archive_delay    | 30000   | ms
 max_standby_streaming_delay  | 30000   | ms
 wal_receiver_status_interval | 10      | s
 wal_receiver_timeout         | 60000   | ms
(6 rows)

postgres=# \q


I’m going to file this one under: “DBA (me) failed to ensure the
postgresql.conf was saved with updated parameters.”


Been there, done that, understand.



Thanks for your help.





--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Standby pg_dump Conflict with Recovery

2015-10-16 Thread Louis Battuello

> On Oct 16, 2015, at 9:35 AM, Adrian Klaver <adrian.kla...@aklaver.com> wrote:
> 
> On 10/15/2015 03:30 PM, Louis Battuello wrote:
>> 
>>> On Oct 15, 2015, at 6:16 PM, Adrian Klaver <adrian.kla...@aklaver.com> wrote:
>>> 
> 
>>> 
>>> How did you set and temporarily enable the settings
>> 
>> I changed the settings in the postgresql.conf file, restarted the
>> standby server, checked that there wasn't any activity on the primary or
>> the standby, and ran the pg_dump on the standby again - which failed. I
>> watched the xmin value on the primary pg_replication_slots, which held
>> steady until the dump failed.
>> 
>> Then, I changed the delay settings back to the defaults and restarted
>> the standby so I wouldn’t affect the replication during the next
>> business day.
>> 
> 
> Hmm. From what I see it looks okay.
> 
> Have you looked in the logs of the master to see what is going on around the
> time the query is cancelled?
> 
> Also in the standby logs before and after the ERROR?

The primary log was clean. The standby contained the same error as the pg_dump 
output log:

< 2015-10-15 01:10:50 EDT [42613] : [1-1] 
user=postgres,db=,remote=::1(55426) > ERROR:  canceling statement due to 
conflict with recovery
< 2015-10-15 01:10:50 EDT [42613] : [2-1] 
user=postgres,db=,remote=::1(55426) > DETAIL:  User query might have needed 
to see row versions that must be removed.
< 2015-10-15 01:10:50 EDT [42613] : [3-1] 
user=postgres,db=,remote=::1(55426) > STATEMENT:  COPY  (...) TO 
stdout;

I ran the pg_dump process again this morning, ensuring that the standby 
parameters were set, and it completed successfully with the 
hot_standby_feedback enabled.

postgres=# select name, setting, unit from pg_settings where category = 
'Replication / Standby Servers'; 
             name             | setting | unit
------------------------------+---------+------
 hot_standby                  | on      |
 hot_standby_feedback         | on      |
 max_standby_archive_delay    | 30000   | ms
 max_standby_streaming_delay  | 30000   | ms
 wal_receiver_status_interval | 10      | s
 wal_receiver_timeout         | 60000   | ms
(6 rows)

postgres=# \q
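
For reference, the standby-side postgresql.conf lines under discussion (a
sketch of just the relevant settings):

hot_standby = on
hot_standby_feedback = on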


I’m going to file this one under: “DBA (me) failed to ensure the 
postgresql.conf was saved with updated parameters.”

Thanks for your help.

> 
> -- 
> Adrian Klaver
> adrian.kla...@aklaver.com
> 
> 
> -- 
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
> 



Re: [GENERAL] question

2015-10-16 Thread anj patnaik
Hello all,
I will experiment with -Fc (custom). The file is already growing very large.

I am running this:
./pg_dump -t RECORDER  -Fc postgres |  gzip > /tmp/dump

Are there any other options for large tables to run faster and occupy less
disk space?

Below is memory info:

[root@onxl5179 tmp]# cat /proc/meminfo
MemTotal:       16333720 kB
MemFree:          187736 kB
Buffers:           79696 kB
Cached:         11176616 kB
SwapCached:         2024 kB
Active:         11028784 kB
Inactive:        4561616 kB
Active(anon):    3839656 kB
Inactive(anon):   642416 kB
Active(file):    7189128 kB
Inactive(file):  3919200 kB
Unevictable:           0 kB
Mlocked:               0 kB
SwapTotal:      33456120 kB
SwapFree:       33428960 kB
Dirty:             33892 kB
Writeback:             0 kB
AnonPages:       4332408 kB
Mapped:           201388 kB
Shmem:            147980 kB
Slab:             365380 kB
SReclaimable:     296732 kB
SUnreclaim:        68648 kB
KernelStack:        5888 kB
PageTables:        37720 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:    41622980 kB
Committed_AS:    7148392 kB
VmallocTotal:   34359738367 kB
VmallocUsed:      179848 kB
VmallocChunk:   34359548476 kB
HardwareCorrupted:     0 kB
AnonHugePages:   3950592 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:       10240 kB
DirectMap2M:    16766976 kB


# CPUs=8
RHEL 6.5

The PG shared memory settings are the defaults, as I've not touched the .conf
file. I am not a DBA, just a test tools developer who needs to back up the
table efficiently. I am fairly new to PG and not an expert at Linux.

Also, if there are backup scripts or cron setups that you recommend, please
point me to them.

Thanks!!

On Thu, Oct 15, 2015 at 3:59 PM, Scott Mead  wrote:

>
> On Thu, Oct 15, 2015 at 3:55 PM, Guillaume Lelarge <guilla...@lelarge.info> wrote:
>
>> 2015-10-15 20:40 GMT+02:00 anj patnaik :
>>
>>> It's a Linux machine with 8 CPUs. I don't have the other details.
>>>
>>> I get archive member too large for tar format.
>>>
>>> Is there a recommended command/options when dealing with very large
>>> tables, aka 150K rows and half of the rows have data being inserted with
>>> 22MB?
>>>
>>>
>> Don't use tar format? I never understood the interest in this one. You
>> should better use the custom method.
>>
>
> + 1
>
>  Use -F c
>
>
> --
> Scott Mead
> Sr. Architect
> *OpenSCG*
> PostgreSQL, Java & Linux Experts
>
>
> http://openscg.com
>
>
>>
>>
>>> -bash-4.1$ ./pg_dump -t RECORDER postgres --format=t -w  > /tmp/dump
>>> pg_dump: [archiver (db)] connection to database "postgres" failed:
>>> fe_sendauth: no password supplied
>>> -bash-4.1$ ./pg_dump -t RECORDER postgres --format=t   > /tmp/dump
>>> Password:
>>> pg_dump: [tar archiver] archive member too large for tar format
>>> -bash-4.1$ pg_dumpall | gzip > \tmp\db.out-`date +\%Y\%m\%d\%H`.gz
>>> -bash: pg_dumpall: command not found
>>> -bash: tmpdb.out-2015101510.gz: Permission denied
>>> -bash-4.1$ ./pg_dumpall | gzip > \tmp\db.out-`date +\%Y\%m\%d\%H`.gz
>>>
>>>
>>> Thank you so much for replying and accepting my post to this NG.
>>>
>>> On Thu, Oct 15, 2015 at 11:17 AM, Melvin Davidson 
>>> wrote:
>>>
 In addition to explaining exactly what you mean by "a long time" to pg_dump
 77k of your table,

 What is your O/S and how much memory is on your system?
 How many CPU's are in your system?
 Also, what is your hard disk configuration?
 What other applications are running simultaneously with pg_dump?
 What is the value of shared_memory & maintenance_work_mem in
 postgresql.conf?

 On Thu, Oct 15, 2015 at 11:04 AM, Adrian Klaver <
 adrian.kla...@aklaver.com> wrote:

> On 10/14/2015 06:39 PM, anj patnaik wrote:
>
>> Hello,
>>
>> I recently downloaded postgres 9.4 and I have a client application
>> that
>> runs in Tcl that inserts to the db and fetches records.
>>
>> For the majority of the time, the app will connect to the server to do
>> insert/fetch.
>>
>> For occasional use, we want to remove the requirement to have a server
>> db and just have the application retrieve data from a local file.
>>
>> I know I can use pg_dump to export the tables. The questions are:
>>
>> 1) is there an in-memory db instance or file based I can create that
>> is
>> loaded with the dump file? This way the app code doesn't have to
>> change.
>>
>
> No.
>
>
>> 2) does pg support embedded db?
>>
>
> No.
>
> 3) Or is my best option to convert the dump to sqlite and then import
>> the
>> sqlite and have the app read that embedded db.
>>
>
> Sqlite tends to follow Postgres conventions, so you might be able to
> use the pg_dump output directly if you use --inserts or --column-inserts:
>
> http://www.postgresql.org/docs/9.4/intera

Re: [GENERAL] Standby pg_dump Conflict with Recovery

2015-10-16 Thread Adrian Klaver

On 10/15/2015 03:30 PM, Louis Battuello wrote:



On Oct 15, 2015, at 6:16 PM, Adrian Klaver <adrian.kla...@aklaver.com> wrote:





How did you set and temporarily enable the settings


I changed the settings in the postgresql.conf file, restarted the
standby server, checked that there wasn't any activity on the primary or
the standby, and ran the pg_dump on the standby again - which failed. I
watched the xmin value on the primary pg_replication_slots, which held
steady until the dump failed.

Then, I changed the delay settings back to the defaults and restarted
the standby so I wouldn’t affect the replication during the next
business day.



Hmm. From what I see it looks okay.

Have you looked in the logs of the master to see what is going on around the 
time the query is cancelled?


Also in the standby logs before and after the ERROR?

--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pgpool ssl handshake failure

2015-10-16 Thread Albe Laurenz
AI Rumman wrote:
> I am using pgpool-II version 3.4.3 (tataraboshi).
> My database is PostgreSQL 8.4.
> 
> I am trying to configure SSL mode from the client, while between pgpool and 
> the database it is non-SSL.
> I configured it as documented, and now I am getting this in my log:
> 
>   2015-10-13 22:17:58: pid 1857: LOG:  new connection received
>   2015-10-13 22:17:58: pid 1857: DETAIL:  connecting host=10.0.0.5 
> port=65326
>   2015-10-13 22:17:58: pid 1857: LOG:  pool_ssl: "SSL_read": "ssl 
> handshake failure"
>   2015-10-13 22:17:58: pid 1857: ERROR:  unable to read data from frontend
>   2015-10-13 22:17:58: pid 1857: DETAIL:  socket read failed with an 
> error "Success"
> 
> Please let me know what I am doing wrong.

Does the problem occur immediately when you try to connect or only after some 
time?

If the latter, it may well be a renegotiation issue.
If the former, I can't guess.
What are the OpenSSL versions involved on both ends?
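
A quick way to check that, as a sketch to run in a shell on each host:

openssl version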

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] question

2015-10-16 Thread Francisco Olarte
On Fri, Oct 16, 2015 at 8:27 AM, Guillaume Lelarge
<guilla...@lelarge.info> wrote:
> 2015-10-15 23:05 GMT+02:00 Adrian Klaver <adrian.kla...@aklaver.com>:
>> On 10/15/2015 01:35 PM, anj patnaik wrote:
...
>>> ./pg_dump -t RECORDER  -Fc postgres |  gzip > /tmp/dump
>>> Are there any other options for large tables to run faster and occupy
>>> less disk space?
>> Yes, do not double compress. -Fc already compresses the file.
> Right. But I'd say "use custom format but do not compress with pg_dump". Use
> the -Z0 option to disable compression, and use an external multi-threaded
> tool such as pigz or pbzip2 to get faster and better compression.

Actually I would not recommend that, unless you are making a long-term
or offsite copy. Doing it means you need to decompress the dump before
restoring or even testing it (e.g. via pg_restore > /dev/null).

And if you are pressed for disk space you may corner yourself into a
situation where you do NOT have enough disk space for an uncompressed
dump. Given you are normally nervous enough when restoring, for normal
operations I think the built-in compression is better.

Also, I'm not current on the compressor -Fc uses, but I think it is
still gzip, which is not that bad and is normally quite fast. (In fact I
do not use pbzip2, but I did some tests about a year ago and found bzip2
was beaten by xz quite easily - meaning that for every level of bzip2,
one of the levels of xz beat it in BOTH size & time; that was for my
data, YMMV.)
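
For comparison, Guillaume's suggestion sketched as commands (pigz must be
installed; mydb is a placeholder database name):

# custom format, pg_dump compression disabled, external parallel gzip:
pg_dump -Fc -Z0 mydb | pigz > dump.fc.gz

# the extra step this costs at restore/test time:
unpigz < dump.fc.gz | pg_restore -d mydb

Reading the archive from a pipe works for a plain restore, but a parallel
restore (pg_restore -j) needs a seekable file.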


Francisco Olarte.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general