Re: [GENERAL] My postgres is not logging anymore

2015-06-18 Thread Maila Fatticcioni
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 06/18/2015 11:48 AM, Raymond O'Donnell wrote:
 On 18/06/2015 10:34, Maila Fatticcioni wrote:
 Hello. I have a postgres cluster made up of two nodes with OS Ubuntu
 14.04. About two weeks ago I did a minor update of postgres to
 version 9.4.1-1.pgdg70+1. Afterwards I had to shut down the
 machines, and I could only start them again last week. Since then
 postgresql has stopped logging on both nodes. If I delete
 the log file and do a restart, a new file is created but it
 remains empty. I have tried to change some configuration
 parameters with no success. I have tried to restart postgres on
 both nodes and relocate the service as well - nothing. Apart from
 this everything is working fine and my applications don't show
 any errors during the connection to the database. Any ideas about
 how to get the logging back?
 
 Here my configuration file postgresql.conf:
 
 What have you got set for the following? - Here's what they're set
 to on my laptop (Windows 7, PG 9.4):
 
 log_destination = 'stderr'
 logging_collector = on
 log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'
 log_file_mode = 0600
 log_truncate_on_rotation = off
 log_rotation_age = 1d
 log_rotation_size = 10MB
 
 The comments in postgresql.conf indicate that log_destination and 
 logging_collector, at least, need to be set as above to generate
 log files.
 
 Ray.
 

I set up the logs using the parameters:
log_connections = on
log_disconnections = on
log_duration = on
log_error_verbosity = terse
log_statement = 'all'
log_timezone = 'localtime'

I think these should be enough to get the log written to the file
/var/log/postgresql/postgresql-9.4-main.log . If I turned the
logging_collector on, the log would instead become
/var/lib/postgresql/9.4/main/pg_log/postgresql-2015-06-18_X.log .
I don't understand why postgresql logged before my update and now it
doesn't anymore.

Best Regards,
Maila Fatticcioni



- -- 
__
Maila Fatticcioni
__
 Mediterranean Broadband Infrastructure s.r.l.
 via Francesco Squartini n°7
   56121 Ospedaletto (PI) - ITALY
__
  Phone:   +39(050)3870851   Fax:   +39(050)3870809
__
-BEGIN PGP SIGNATURE-
Version: GnuPG v2.0.22 (GNU/Linux)

iEYEARECAAYFAlWCtc8ACgkQi2q3wPb3FcNt+ACguyNKJa0N4MIKsoovbxeokAzO
ctoAoIrWRgMseZMOaM2YomXqRhgnhRDy
=GL75
-END PGP SIGNATURE-


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] PL/pgSQL: How to return two columns and multiple rows

2015-06-18 Thread Sven Geggus
Hello,

I suppose this is simple, but I did not find a solution in the documentation.

I would like to be able to do something like this:

select myfunc('foo','bar');
or
select myfunc(foo, bar) from foobartable;
or even
select myfunc(foo, bar), 'baz' as baz from foobartable;

Which should return something like this:
 foo  | bar 
--+--
 foo1 | bar1
 foo2 | bar2
 foo3 | bar3
 foo4 | bar4
(4 rows)

So the output should be at least two columns and (usually) more than one row.

What I currently have is the following, which is mostly it.  Unfortunately
it gives me only one column (I really need two) and I would have to create a
custom type:

CREATE TYPE t_foobar AS (foo text, bar text);  

CREATE or REPLACE FUNCTION myfunc(foo text, bar text) 
returns SETOF t_foobar as $$
BEGIN
  FOR i IN 1..4 LOOP 
RETURN NEXT (foo || i::text, bar || i::text);   
  
  END LOOP;
  RETURN;
END;  
$$ language 'plpgsql';

mydb= select myfunc('foo','bar');
   myfunc
-
 (foo1,bar1)
 (foo2,bar2)
 (foo3,bar3)
 (foo4,bar4)
(4 rows)

Regards

Sven

-- 
Exploits and holes are now a necessary protection against large
corporate interests. (Alan Cox)

/me is giggls@ircnet, http://sven.gegg.us/ on the Web


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PL/pgSQL: How to return two columns and multiple rows

2015-06-18 Thread Chris Travers
 On Thu, Jun 18, 2015, 14:38 Sven Geggus li...@fuchsschwanzdomain.de
wrote:

Hello,

I suppose this is simple, but I did not find a solution in the documentation.

 Because you already are returning 2 columns.

I would like to be able to do something like this:

select myfunc('foo','bar');
or
select myfunc(foo, bar) from foobartable;
or even
select myfunc(foo, bar), 'baz' as baz from foobartable;

Which should return something like this:
 foo  | bar
--+--
 foo1 | bar1
 foo2 | bar2
 foo3 | bar3
 foo4 | bar4
(4 rows)

So the output should be at least two columns and (usually) more than one
row.

What I currently have is the following, which is mostly it.  Unfortunately
it gives me only one column (I really need two) and I would have to create a
custom type:

CREATE TYPE t_foobar AS (foo text, bar text);

CREATE or REPLACE FUNCTION myfunc(foo text, bar text)
returns SETOF t_foobar as $$
BEGIN
  FOR i IN 1..4 LOOP
RETURN NEXT (foo || i::text, bar || i::text);
  END LOOP;
  RETURN;
END;
$$ language 'plpgsql';

mydb= select myfunc('foo','bar');
   myfunc
-
 (foo1,bar1)
 (foo2,bar2)
 (foo3,bar3)
 (foo4,bar4)
(4 rows)

 Select (myfunc('foo','bar')).*;
Or
Select * from myfunc('foo','bar');

Regards

Sven

--
Exploits and holes are now a necessary protection against large
corporate interests. (Alan Cox)

/me is giggls@ircnet, http://sven.gegg.us/ on the Web

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PL/pgSQL: How to return two columns and multiple rows

2015-06-18 Thread Sven Geggus
Pavel Stehule pavel.steh...@gmail.com wrote:

 CREATE OR REPLACE FUNCTION public.fx(i integer, OUT a text, OUT b text)
  RETURNS SETOF record
  LANGUAGE plpgsql
 AS $function$
 BEGIN
   RETURN QUERY SELECT 'foo'||id, 'bar'||id FROM generate_series(1,i) g(id);
   RETURN;
 END;
 $function$

I'm afraid I will almost certainly be unable to use RETURN QUERY. I have
just broken this down to the posted code to make it easier to understand.

In my real world code a loop will need to iterate over features of a postgis
geometry returning a couple of rows containing a string and a calculated
geometry as a result.

Regards

Sven

-- 
Thinking of using NT for your critical apps?
  Isn't there enough suffering in the world?
   (Advertisement of Sun Microsystems in Wall Street Journal)
/me is giggls@ircnet, http://sven.gegg.us/ on the Web


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PL/pgSQL: How to return two columns and multiple rows

2015-06-18 Thread Pavel Stehule
Hi

CREATE OR REPLACE FUNCTION public.fx(i integer, OUT a text, OUT b text)
 RETURNS SETOF record
 LANGUAGE plpgsql
AS $function$
BEGIN
  RETURN QUERY SELECT 'foo'||id, 'bar'||id FROM generate_series(1,i) g(id);
  RETURN;
END;
$function$

postgres=# SELECT a,b FROM fx(4);
┌──┬──┐
│  a   │  b   │
╞══╪══╡
│ foo1 │ bar1 │
│ foo2 │ bar2 │
│ foo3 │ bar3 │
│ foo4 │ bar4 │
└──┴──┘
(4 rows)

Regards

Pavel


2015-06-18 14:36 GMT+02:00 Sven Geggus li...@fuchsschwanzdomain.de:

 Hello,

 I suppose this is simple, but I did not find a solution in the
 documentation.

 I would like to be able to do something like this:

 select myfunc('foo','bar');
 or
 select myfunc(foo, bar) from foobartable;
 or even
 select myfunc(foo, bar), 'baz' as baz from foobartable;

 Which should return something like this:
  foo  | bar
 --+--
  foo1 | bar1
  foo2 | bar2
  foo3 | bar3
  foo4 | bar4
 (4 rows)

 So the output should be at least two columns and (usually) more than one
 row.

 What I currently have is the following, which is mostly it.  Unfortunately
 it gives me only one column (I really need two) and I would have to create
 a
 custom type:

 CREATE TYPE t_foobar AS (foo text, bar text);

 CREATE or REPLACE FUNCTION myfunc(foo text, bar text)
 returns SETOF t_foobar as $$
 BEGIN
   FOR i IN 1..4 LOOP
 RETURN NEXT (foo || i::text, bar || i::text);
   END LOOP;
   RETURN;
 END;
 $$ language 'plpgsql';

 mydb= select myfunc('foo','bar');
myfunc
 -
  (foo1,bar1)
  (foo2,bar2)
  (foo3,bar3)
  (foo4,bar4)
 (4 rows)

 Regards

 Sven

 --
 Exploits and holes are now a necessary protection against large
 corporate interests. (Alan Cox)

 /me is giggls@ircnet, http://sven.gegg.us/ on the Web


 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general



Re: [GENERAL] PL/pgSQL: How to return two columns and multiple rows

2015-06-18 Thread David G. Johnston
On Thursday, June 18, 2015, Sven Geggus li...@fuchsschwanzdomain.de wrote:

 Hello,

 I suppose this is simple, but I did not find a solution in the
 documentation.

 I would like to be able to do something like this:

 select myfunc('foo','bar');
 or
 select myfunc(foo, bar) from foobartable;
 or even
 select myfunc(foo, bar), 'baz' as baz from foobartable;

 Which should return something like this:
  foo  | bar
 --+--
  foo1 | bar1
  foo2 | bar2
  foo3 | bar3
  foo4 | bar4
 (4 rows)

 So the output should be at least two columns and (usually) more than one
 row.

 What I currently have is the following, which is mostly it.  Unfortunately
 it gives me only one column (I really need two) and I would have to create
 a
 custom type:

 CREATE TYPE t_foobar AS (foo text, bar text);

 CREATE or REPLACE FUNCTION myfunc(foo text, bar text)
 returns SETOF t_foobar as $$
 BEGIN
   FOR i IN 1..4 LOOP
 RETURN NEXT (foo || i::text, bar || i::text);
   END LOOP;
   RETURN;
 END;
 $$ language 'plpgsql';

 mydb= select myfunc('foo','bar');
myfunc
 -
  (foo1,bar1)
  (foo2,bar2)
  (foo3,bar3)
  (foo4,bar4)
 (4 rows)


Look at the returns table (col1 type, col2 type) form.
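For illustration, a minimal sketch of that form applied to the example above
(myfunc2 and the *_out column names are just placeholders, chosen so they
don't clash with the input parameters):

CREATE OR REPLACE FUNCTION myfunc2(foo text, bar text)
RETURNS TABLE (foo_out text, bar_out text) AS $$
BEGIN
  -- RETURNS TABLE declares the output columns in the function header,
  -- so no separate composite type is needed.
  FOR i IN 1..4 LOOP
    foo_out := foo || i::text;
    bar_out := bar || i::text;
    RETURN NEXT;
  END LOOP;
END;
$$ LANGUAGE plpgsql;

-- then: SELECT * FROM myfunc2('foo','bar');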

David J.


Re: [GENERAL] PL/pgSQL: How to return two columns and multiple rows

2015-06-18 Thread Raymond O'Donnell
On 18/06/2015 13:36, Sven Geggus wrote:
 Hello,
 
 I suppose this is simple, but I did not find a solution in the documentation.
 
 I would like to be able to do something like this:
 
 select myfunc('foo','bar');
 or
 select myfunc(foo, bar) from foobartable;
 or even
 select myfunc(foo, bar), 'baz' as baz from foobartable;
 
 Which should return something like this:
  foo  | bar 
 --+--
  foo1 | bar1
  foo2 | bar2
  foo3 | bar3
  foo4 | bar4
 (4 rows)
 
 So the output should be at least two columns and (usually) more than one row.
 
 What I currently have is the following, which is mostly it.  Unfortunately
 it gives me only one column (I really need two) and I would have to create a
 custom type:
 
 CREATE TYPE t_foobar AS (foo text, bar text);  
 
 CREATE or REPLACE FUNCTION myfunc(foo text, bar text) 
 returns SETOF t_foobar as $$
 BEGIN
   FOR i IN 1..4 LOOP 
 RETURN NEXT (foo || i::text, bar || i::text); 
 
   END LOOP;
   RETURN;
 END;  
 $$ language 'plpgsql';
 
 mydb= select myfunc('foo','bar');

You need to do:

   select * from myfunc('foo','bar');

Ray.

-- 
Raymond O'Donnell :: Galway :: Ireland
r...@iol.ie


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PL/pgSQL: How to return two columns and multiple rows

2015-06-18 Thread Sven Geggus
Raymond O'Donnell r...@iol.ie wrote:

 mydb= select myfunc('foo','bar');
 
 You need to do:
 
select * from myfunc('foo','bar');

My example was misleading. Reality will more likely look like this:

select myfunc(col1,col2) from mytable;

And it would of course be undesirable for myfunc to be called twice per row.
So what would this look like, to avoid the function being called twice?

Regards

Sven

-- 
The most important aspect you should consider before deciding on an open
source operating system is that you do not get a Windows
operating system. (from http://www.dell.de/ubuntu)
/me is giggls@ircnet, http://sven.gegg.us/ on the Web


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PL/pgSQL: How to return two columns and multiple rows

2015-06-18 Thread Merlin Moncure
On Thu, Jun 18, 2015 at 7:50 AM, Chris Travers chris.trav...@gmail.com wrote:

 On Thu, Jun 18, 2015, 14:38 Sven Geggus li...@fuchsschwanzdomain.de wrote:

 Hello,

 I suppose this is simple, but I did not find a solution in the documentation.

 Because you already are returning 2 columns.

 I would like to be able to do something like this:

 select myfunc('foo','bar');
 or
 select myfunc(foo, bar) from foobartable;
 or even
 select myfunc(foo, bar), 'baz' as baz from foobartable;

 Which should return something like this:
  foo  | bar
 --+--
  foo1 | bar1
  foo2 | bar2
  foo3 | bar3
  foo4 | bar4
 (4 rows)

 So the output should be at least two columns and (usually) more than one
 row.

 What I currently have is the following, which is mostly it.  Unfortunately
 it gives me only one column (I really need two) and I would have to create a
 custom type:

 CREATE TYPE t_foobar AS (foo text, bar text);

 CREATE or REPLACE FUNCTION myfunc(foo text, bar text)
 returns SETOF t_foobar as $$
 BEGIN
   FOR i IN 1..4 LOOP
 RETURN NEXT (foo || i::text, bar || i::text);
   END LOOP;
   RETURN;
 END;
 $$ language 'plpgsql';

 mydb= select myfunc('foo','bar');
myfunc
 -
  (foo1,bar1)
  (foo2,bar2)
  (foo3,bar3)
  (foo4,bar4)
 (4 rows)

 Select (myfunc('foo','bar')).*;
 Or
 Select * from myfunc('foo','bar');

this syntax:
Select (myfunc('foo','bar')).*;

should generally be avoided. In this case, the server would expand that to:

select (myfunc('foo','bar')).foo, (myfunc('foo','bar')).bar;

merlin


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PL/pgSQL: How to return two columns and multiple rows

2015-06-18 Thread Tom Lane
Raymond O'Donnell r...@iol.ie writes:
 On 18/06/2015 13:36, Sven Geggus wrote:
 I would like to be able to do something like this:
 
 select myfunc('foo','bar');
 or
 select myfunc(foo, bar) from foobartable;
 or even
 select myfunc(foo, bar), 'baz' as baz from foobartable;

 You need to do:

select * from myfunc('foo','bar');

That's enough to expand the output from a simple function call.  If you
want to do something like Sven's later examples, the best way is with
LATERAL:

select f.*, 'baz' as baz from foobartable, lateral myfunc(foo, bar) as f;

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PL/pgSQL: How to return two columns and multiple rows

2015-06-18 Thread David G. Johnston
On Thursday, June 18, 2015, Chris Travers chris.trav...@gmail.com wrote:


  Select (myfunc('foo','bar')).*;



This should be avoided.  Use LATERAL instead, or a CTE and/or OFFSET 0.
myfunc is evaluated twice (once per column) if called this way.


 Or
 Select * from myfunc('foo','bar');


This is ok

David J.


Re: [GENERAL] PL/pgSQL: How to return two columns and multiple rows

2015-06-18 Thread David G. Johnston
On Thu, Jun 18, 2015 at 9:32 AM, Sven Geggus li...@fuchsschwanzdomain.de
wrote:

 David G. Johnston david.g.johns...@gmail.com wrote:

  Look at the returns table (col1 type, col2 type) form.

 If I got this right, returns table is not what I want, as I need to select
 from my function as a virtual table in this case.


Yes, I misread your question.  Your issue is placing the SRF (set
returning function) in the select-list, which causes it to be treated as a
single composite-typed column.  You need to place the function after
FROM or in LATERAL.

Something like:

SELECT * FROM src_tbl, LATERAL my_func(src_tbl.col1, src_tbl.col2)

I haven't had much experience writing lateral clauses but their benefit is
that they can reference columns from other tables so you don't have to
place the function in the select-list.

David J.


Re: [GENERAL] My postgres is not logging anymore

2015-06-18 Thread Raymond O'Donnell
On 18/06/2015 13:13, Maila Fatticcioni wrote:
 On 06/18/2015 11:48 AM, Raymond O'Donnell wrote:
 On 18/06/2015 10:34, Maila Fatticcioni wrote:
 Hello. I have a postgres cluster made up of two nodes with OS Ubuntu
 14.04. About two weeks ago I did a minor update of postgres to
 version 9.4.1-1.pgdg70+1. Afterwards I had to shut down the
 machines, and I could only start them again last week. Since then
 postgresql has stopped logging on both nodes. If I delete
 the log file and do a restart, a new file is created but it
 remains empty. I have tried to change some configuration
 parameters with no success. I have tried to restart postgres on
 both nodes and relocate the service as well - nothing. Apart from
 this everything is working fine and my applications don't show
 any errors during the connection to the database. Any ideas about
 how to get the logging back?

 Here my configuration file postgresql.conf:
 
 What have you got set for the following? - Here's what they're set
 to on my laptop (Windows 7, PG 9.4):
 
 log_destination = 'stderr'
 logging_collector = on
 log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'
 log_file_mode = 0600
 log_truncate_on_rotation = off
 log_rotation_age = 1d
 log_rotation_size = 10MB
 
 The comments in postgresql.conf indicate that log_destination and 
 logging_collector, at least, need to be set as above to generate
 log files.
 
 Ray.
 
 
 I set up the logs using the parameters:
 log_connections = on
 log_disconnections = on
 log_duration = on
 log_error_verbosity = terse
 log_statement = 'all'
 log_timezone = 'localtime'
 
 I think these should be enough to get the log written to the file
 /var/log/postgresql/postgresql-9.4-main.log . If I turned the

Well, according to the comments in postgresql.conf - which you really
should read if you haven't already - you need logging_collector to be on
in order to log to a file at all. Read this too:

http://www.postgresql.org/docs/9.4/static/runtime-config-logging.html

 logging_collector on, the log would instead become
 /var/lib/postgresql/9.4/main/pg_log/postgresql-2015-06-18_X.log .

Just change log_filename to whatever you like.
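As a quick sanity check, a sketch of how to confirm from psql what the
running server actually has for these settings:

SELECT name, setting, context
FROM pg_settings
WHERE name IN ('log_destination', 'logging_collector',
               'log_directory', 'log_filename');

-- logging_collector has context 'postmaster', i.e. changing it
-- requires a full server restart, not just a reload.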

Ray.


-- 
Raymond O'Donnell :: Galway :: Ireland
r...@iol.ie


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PL/pgSQL: How to return two columns and multiple rows

2015-06-18 Thread Sven Geggus
David G. Johnston david.g.johns...@gmail.com wrote:

 Look at the returns table (col1 type, col2 type) form.

If I got this right, returns table is not what I want, as I need to select
from my function as a virtual table in this case.

Regards

Sven

-- 
Thinking of using NT for your critical apps?
  Isn't there enough suffering in the world?
   (Advertisement of Sun Microsystems in Wall Street Journal)
/me is giggls@ircnet, http://sven.gegg.us/ on the Web


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] valgrind

2015-06-18 Thread Peter Kroon
Hi list,

I've installed postgresql-9.4 using apt-get as instructed here:
http://www.postgresql.org/download/linux/debian/
Also installed libpq-dev with the suggested dependencies using apt-get.
And created a small program in c. You can find it here:
http://pastebin.com/bRHw3Wud
When I run the program under valgrind I get:


==3814== Memcheck, a memory error detector
==3814== Copyright (C) 2002-2013, and GNU GPL'd, by Julian Seward et al.
==3814== Using Valgrind-3.10.1 and LibVEX; rerun with -h for copyright info
==3814== Command: ./pgsql_check
==3814==
==3814==
==3814== HEAP SUMMARY:
==3814== in use at exit: 47,288 bytes in 2,864 blocks
==3814==   total heap usage: 5,671 allocs, 2,807 frees, 331,460 bytes
allocated
==3814==
==3814== LEAK SUMMARY:
==3814==definitely lost: 0 bytes in 0 blocks
==3814==indirectly lost: 0 bytes in 0 blocks
==3814==  possibly lost: 0 bytes in 0 blocks
==3814==still reachable: 47,288 bytes in 2,864 blocks
==3814== suppressed: 0 bytes in 0 blocks
==3814== Rerun with --leak-check=full to see details of leaked memory
==3814==
==3814== For counts of detected and suppressed errors, rerun with: -v
==3814== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)

There is still reachable data. Is this a bug or have I forgotten to free
something?

Best,
Peter


Re: [GENERAL] writable cte triggers reverse order

2015-06-18 Thread Tom Lane
Миша Тюрин tmih...@bk.ru writes:
 for me the case listed below looks like something is going wrong (at least
 it's very strange)
 ...
 see the order of events -- the INSERTs come before the DELETEs!

The reason for this is that the AFTER INSERT trigger events are queued
within the INSERT statements executed by the test_trg_inh_func function,
and then executed at the completions of those INSERT statements.  On
the other hand, the AFTER DELETE trigger events are queued by the outer
WITH ... DELETE ... INSERT statement, so they fire at the completion
of that statement.

You could probably get the behavior you want if you make all the triggers
DEFERRED so that they all fire at end-of-transaction, rather than at end
of the statement that queued them.
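A minimal sketch of that approach (the table and function names here are just
placeholders; note that only constraint triggers can be declared deferrable):

CREATE CONSTRAINT TRIGGER trg_audit_delete
  AFTER DELETE ON parent_tbl
  DEFERRABLE INITIALLY DEFERRED
  FOR EACH ROW
  EXECUTE PROCEDURE audit_delete_fn();
-- the trigger events then fire at commit time rather than at the end
-- of the statement that queued them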

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PL/pgSQL: How to return two columns and multiple rows

2015-06-18 Thread David G. Johnston
On Thu, Jun 18, 2015 at 9:52 AM, Sven Geggus li...@fuchsschwanzdomain.de
wrote:

 Raymond O'Donnell r...@iol.ie wrote:

  mydb= select myfunc('foo','bar');
 
  You need to do:
 
 select * from myfunc('foo','bar');

 My example was misleading. Reality will more likely look like
 this:

 select myfunc(col1,col2) from mytable;

 And it would of course be undesirable for myfunc to be called twice per
 row.
 So what would this look like, to avoid the function being called twice?


WITH exec_func AS ( SELECT myfunc(col1,col2) FROM mytable )
SELECT (exec_func.myfunc).* FROM exec_func;

This relies on the fact that currently a CTE introduces an optimization
barrier.

David J.


Re: [GENERAL] PL/pgSQL: How to return two columns and multiple rows

2015-06-18 Thread Sven Geggus
David G. Johnston david.g.johns...@gmail.com wrote:
 WITH exec_func AS ( SELECT myfunc(col1,col2) FROM mytable )
 SELECT (exec_func.myfunc).* FROM exec_func;
 
 This relies on the fact that currently a CTE introduces an optimization
 barrier.

Hm, let me summarize. My function seems to work as expected and is only
called once per row:

Here is a working example:

CREATE TYPE t_foobar AS (foo text, bar text);
CREATE TABLE mytable (col1 text, col2 text);
INSERT INTO mytable VALUES ('text1','value1');
INSERT INTO mytable VALUES ('text2','value2');

CREATE or REPLACE FUNCTION myfunc(foo text, bar text)
returns SETOF t_foobar as $$
BEGIN
  RAISE NOTICE 'called with parms foo,bar: % %',foo, bar;
  FOR i IN 1..4 LOOP
RETURN NEXT (foo || ' ' || i::text, bar || ' ' || i::text);
  END LOOP;
  RETURN;
END;
$$ language 'plpgsql';

mydb= select myfunc(col1,col2) from mytable;
NOTICE:  called with parms foo,bar: text1 value1
NOTICE:  called with parms foo,bar: text2 value2
 myfunc 

 (text1 1,value1 1)
 (text1 2,value1 2)
 (text1 3,value1 3)
 (text1 4,value1 4)
 (text2 1,value2 1)
 (text2 2,value2 2)
 (text2 3,value2 3)
 (text2 4,value2 4)
(8 rows)

Using your suggestion the desired two columns are generated, but I consider
this a little bit ugly:

mydb= WITH exec_func AS ( select myfunc(col1,col2) from mytable )
SELECT (exec_func.myfunc).* FROM exec_func;
NOTICE:  called with parms foo,bar: text1 value1
NOTICE:  called with parms foo,bar: text2 value2
   foo   |   bar
-+--
 text1 1 | value1 1
 text1 2 | value1 2
 text1 3 | value1 3
 text1 4 | value1 4
 text2 1 | value2 1
 text2 2 | value2 2
 text2 3 | value2 3
 text2 4 | value2 4
(8 rows)

I would rather have a function which already returns the desired two
columns.

Sven

-- 
Threading is a performance hack.
(The Art of Unix Programming by Eric S. Raymond)

/me is giggls@ircnet, http://sven.gegg.us/ on the Web


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PL/pgSQL: How to return two columns and multiple rows

2015-06-18 Thread David G. Johnston
On Thu, Jun 18, 2015 at 10:31 AM, Sven Geggus li...@fuchsschwanzdomain.de
wrote:

 David G. Johnston david.g.johns...@gmail.com wrote:
  WITH exec_func AS ( SELECT myfunc(col1,col2) FROM mytable )
  SELECT (exec_func.myfunc).* FROM exec_func;
 
  This relies on the fact that currently a CTE introduces an optimization
  barrier.

 Hm, let me summarize. My function seems to work as expected and is only
 called once per row:

 Here is a working example:

 CREATE TYPE t_foobar AS (foo text, bar text);
 CREATE TABLE mytable (col1 text, col2 text);
 INSERT INTO mytable VALUES ('text1','value1');
 INSERT INTO mytable VALUES ('text2','value2');

 CREATE or REPLACE FUNCTION myfunc(foo text, bar text)
 returns SETOF t_foobar as $$
 BEGIN
   RAISE NOTICE 'called with parms foo,bar: % %',foo, bar;
   FOR i IN 1..4 LOOP
 RETURN NEXT (foo || ' ' || i::text, bar || ' ' || i::text);
   END LOOP;
   RETURN;
 END;
 $$ language 'plpgsql';

 mydb= select myfunc(col1,col2) from mytable;
 NOTICE:  called with parms foo,bar: text1 value1
 NOTICE:  called with parms foo,bar: text2 value2
  myfunc
 
  (text1 1,value1 1)
  (text1 2,value1 2)
  (text1 3,value1 3)
  (text1 4,value1 4)
  (text2 1,value2 1)
  (text2 2,value2 2)
  (text2 3,value2 3)
  (text2 4,value2 4)
 (8 rows)

 Using your suggestion the desired two columns are generated, but I consider
 this a little bit ugly:

 mydb= WITH exec_func AS ( select myfunc(col1,col2) from mytable )
 SELECT (exec_func.myfunc).* FROM exec_func;
 NOTICE:  called with parms foo,bar: text1 value1
 NOTICE:  called with parms foo,bar: text2 value2
foo   |   bar
 -+--
  text1 1 | value1 1
  text1 2 | value1 2
  text1 3 | value1 3
  text1 4 | value1 4
  text2 1 | value2 1
  text2 2 | value2 2
  text2 3 | value2 3
  text2 4 | value2 4
 (8 rows)

 I would rather have a function which already returns the desired two
 columns.


The function is not the problem - it's how you choose to incorporate it
into the query.

Assuming you are on 9.3+ what you want to use is LATERAL

Or you could move the CTE to a sub-query with an OFFSET 0 specification
(again, to prevent optimization).
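For example, a sketch of the OFFSET 0 variant against the mytable/myfunc
definitions shown earlier in the thread (OFFSET 0 keeps the subquery from
being pulled up, so myfunc should be evaluated only once per input row):

SELECT (f.myfunc).*
FROM (SELECT myfunc(col1, col2) AS myfunc
      FROM mytable
      OFFSET 0) AS f;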

David J.


Re: [GENERAL] PL/pgSQL: How to return two columns and multiple rows

2015-06-18 Thread Alvaro Herrera
Sven Geggus wrote:

 Using your suggestion the desired two columns are generated, but I consider
 this a little bit ugly:
 
 mydb= WITH exec_func AS ( select myfunc(col1,col2) from mytable )
 SELECT (exec_func.myfunc).* FROM exec_func;
 HINWEIS:  called with parms foo,bar: text1 value1
 HINWEIS:  called with parms foo,bar: text2 value2

What's wrong with a plain subselect?

select (myfunc).* from (select myfunc(col1,col2) from mytable) f;


-- 
Álvaro Herrerahttp://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PL/pgSQL: How to return two columns and multiple rows

2015-06-18 Thread Sven Geggus
David G. Johnston david.g.johns...@gmail.com wrote:

 Assuming you are on 9.3+ what you want to use is LATERAL

OK, what is such a query supposed to look like?

assuming select myfunc(col1,col2) from mytable works as the inner select?

Sven

-- 
Software patents are the software project equivalent of land mines: Each
design decision carries a risk of stepping on a patent, which can destroy
your project. (Richard M. Stallman)
/me is giggls@ircnet, http://sven.gegg.us/ on the Web


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] [PERFORM] PGBOUNCER ISSUE PLEASE HELP(Slowing down the site)

2015-06-18 Thread k...@rice.edu
On Thu, Jun 18, 2015 at 05:09:10PM +, Sheena, Prabhjot wrote:
 Guys
  I have an issue going on with PGBOUNCER which is slowing down the 
 site
 
 PGBOUNCER VERSION:  pgbouncer-1.5.4-2.el6 (Hosted on separate machine) (16 
 cpu) 98GB RAM
 DATABASE VERION: postgresql 9.3
 
 When the total client connections to pgbouncer are close to 1000, the site
 application works fine, but when the total client connections cross 1150, the
 site application starts showing slowness.
 
 Here is an example of output
 
 postgres@symds-pg:~ $ netstat -atnp | grep 5432  | wc
 (Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
    960    6720  104640
 
 
 As you can see, total connections are around 960 right now and my site
 application is working fine. When connections cross 1150, the application
 gets slow, even though I see a lot of available connections because my
 default_pool_size is set high, at 250. Database performance on the other end
 is great, with no slow-running queries or anything. So the only place I can
 think the issue could be is at the PGBOUNCER end.
 

Hi Prabhjot,

This is classic behavior when you have a 1024 file limit. When you are below
that number, it works fine. Above that number, you must wait for a connection
to close and exit before you can connect, which will cause a delay. See what
ulimit has to say?

Regards,
Ken


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PL/pgSQL: How to return two columns and multiple rows

2015-06-18 Thread David G. Johnston
On Thu, Jun 18, 2015 at 12:00 PM, Sven Geggus li...@fuchsschwanzdomain.de
wrote:

 David G. Johnston david.g.johns...@gmail.com wrote:

  Assuming you are on 9.3+ what you want to use is LATERAL

 OK, what is such a query supposed to look like?

 assuming select myfunc(col1,col2) from mytable works as the inner select?


Syntax, description, and examples for a simple lateral query are
documented here:

http://www.postgresql.org/docs/9.3/static/sql-select.html
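Applied to the mytable/myfunc example from earlier in this thread, a sketch of
the LATERAL form would be something like this (the function is then called once
per row of mytable, and its two columns come back as ordinary output columns):

SELECT m.col1, f.foo, f.bar
FROM mytable AS m,
     LATERAL myfunc(m.col1, m.col2) AS f;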

David J.


[GENERAL] PGBOUNCER ISSUE PLEASE HELP(Slowing down the site)

2015-06-18 Thread Sheena, Prabhjot
Guys
 I have an issue going on with PGBOUNCER which is slowing down the site

PGBOUNCER VERSION:  pgbouncer-1.5.4-2.el6 (Hosted on separate machine) (16 cpu) 
98GB RAM
DATABASE VERION: postgresql 9.3

When the total client connections to pgbouncer are close to 1000, the site 
application works fine, but when the total client connections cross 1150, the site 
application starts showing slowness.

Here is an example of output

postgres@symds-pg:~ $ netstat -atnp | grep 5432  | wc
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
    960    6720  104640


As you can see, total connections are around 960 right now and my site application is 
working fine. When connections cross 1150, the application gets slow, even though I 
see a lot of available connections because my default_pool_size is set high, at 250. 
Database performance on the other end is great, with no slow-running queries or 
anything. So the only place I can think the issue could be is at the PGBOUNCER end.

pgbouncer=# show config;
key|  value   | changeable
---+--+
job_name  | pgbouncer| no
conffile  | /etc/pgbouncer/pgbouncer.ini | yes
logfile   | /var/log/pgbouncer.log| yes
pidfile   | /var/run/pgbouncer/pgbouncer.pid | no
listen_addr   | *| no
listen_port   | 5432 | no
listen_backlog| 128  | no
unix_socket_dir   | /tmp | no
unix_socket_mode  | 511  | no
unix_socket_group |  | no
auth_type | md5  | yes
auth_file | /etc/pgbouncer/userlist.txt  | yes
pool_mode | transaction  | yes
max_client_conn   | 3000 | yes
default_pool_size | 250  | yes
min_pool_size | 0| yes
reserve_pool_size | 0| yes
reserve_pool_timeout  | 5| yes
syslog| 0| yes
syslog_facility   | daemon   | yes
syslog_ident  | pgbouncer| yes
user  |  | no
autodb_idle_timeout   | 3600 | yes
server_reset_query|  | yes
server_check_query| select 1 | yes
server_check_delay| 30   | yes
query_timeout | 0| yes
query_wait_timeout| 0| yes
client_idle_timeout   | 0| yes
client_login_timeout  | 60   | yes
idle_transaction_timeout  | 0| yes
server_lifetime   | 3600 | yes
server_idle_timeout   | 600  | yes
server_connect_timeout| 15   | yes
server_login_retry| 15   | yes
server_round_robin| 0| yes
suspend_timeout   | 10   | yes
ignore_startup_parameters | extra_float_digits   | yes
disable_pqexec| 0| no
dns_max_ttl   | 15   | yes
dns_zone_check_period | 0| yes
max_packet_size   | 2147483647   | yes
pkt_buf   | 2048 | no
sbuf_loopcnt  | 5| yes
tcp_defer_accept  | 1| yes
tcp_socket_buffer | 0| yes
tcp_keepalive | 1| yes
tcp_keepcnt   | 0| yes
tcp_keepidle  | 0| yes
tcp_keepintvl | 0| yes
verbose   | 0| yes
admin_users   | postgres | yes
stats_users   | stats, postgres  | yes
stats_period  | 60   | yes
log_connections   | 1| yes
log_disconnections| 1| yes
log_pooler_errors | 1   

Re: [GENERAL] Issues setting up BDR with multiple databases

2015-06-18 Thread Craig Ringer
On 19 June 2015 at 08:09, Jorge Torralba jorge.torra...@gmail.com wrote:

 Could not add local node to cluster, status PGRES_FATAL_ERROR: ERROR:  node
 identity for local dsn does not match current node

 DETAIL:  The dsn 'dbname=jorge host=10.5.101.179' connects to a node with
 identity (6161869759719318325,2,16389) but the local node is
 (6161869759719318325,2,16385)

Interesting - it looks like it's connecting to the wrong DB at this step.

You've also posted this as a github issue, and I'm following up there
rather than on the mailing list. Let's keep the discussion in one
place.

https://github.com/2ndQuadrant/bdr/issues/88





-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] WAL log archival on standby

2015-06-18 Thread James Sewell
Hey All,

Is it possible to create WAL archive files on standby servers when using
streaming replication?

I know I can use archive_command or pg_receivexlog - but these will both
result in me sending information to the standby servers which has already
been sent via standard streaming replication. This would result in a
doubling of my network traffic.

Ideally I'd prefer to conserve this bandwidth and write the files to a
local archive on any active standby.

Cheers,

James Sewell,
Solutions Architect
__


 Level 2, 50 Queen St, Melbourne VIC 3000

P (+61) 3 8370 8000   W www.lisasoft.com   F (+61) 3 8370 8099



[GENERAL] Implementing PostgreSQL in High Availability

2015-06-18 Thread אביאל בוסקילה
Hello everyone,

I am currently facing the challenge of providing a High Availability solution
for a new system we want to work with.

I need PostgreSQL to meet the following requirements:

1. Auto failover in case a node goes down or is completely destroyed.

2. Scale-out architecture, because the number of users is going to keep
growing.

3. I must have a Virtual IP referring to the current 'master' node.

4. Full support of PostGIS.

About the environment:

These are virtual servers with RedHat 6.2/6.6, using pgsql 9.3 or 9.4.

I have already tried to implement it with two solutions, but I ran into
some serious issues, as I will explain:

1. UCARP + repmgr + pgsql 9.3:

   after writing the fitting vip-up.sh script I hit a specific bug
with ucarp: after 3 failovers it just started both nodes as masters,
and I couldn't find anything on the internet that describes such a problem.

2. Redhat Cluster + pgsql 9.3:

   a. I succeeded in implementing it with shared storage, managed
quite well by Redhat Cluster. The thing is that I am missing 2 things:
an active standby, and I need a 'shared-nothing' architecture.

  b. I have followed the following tutorial

https://github.com/smbambling/PGSQL_HA_CLUSTER/wiki/Building-A-Highly-Available-Multi-Node-PostgreSQL-Cluster

but I had no success getting the pgsql service to start on any of the
nodes.

Now my question to you lads: has anyone managed to successfully implement
such an architecture, and how?

Best Regards,

Aviel Buskila


Re: [GENERAL] pg_xlog on a hot_stanby slave

2015-06-18 Thread Sameer Kumar
On Thu, 18 Jun 2015 15:17 Xavier 12 mania...@gmail.com wrote:

On 18/06/2015 04:00, Sameer Kumar wrote:



On Wed, 17 Jun 2015 15:24 Xavier 12 mania...@gmail.com wrote:


 On 17/06/2015 03:17, Sameer Kumar wrote:


On Tue, 16 Jun 2015 16:55 Xavier 12 mania...@gmail.com wrote:

  Hi everyone,

Questions about pg_xlogs again...
I have two Postgresql 9.1 servers in a master/slave stream replication
(hot_standby).

Psql01 (master) is backuped with Barman and pg_xlogs is correctly
purged (archive_command is used).

However, Psql02 (slave) has a huge pg_xlog (951 files, 15G for only 7 days;
it keeps growing until disk space is full). I have found documentation,
tutorials and mailing list threads, but I don't know what is
suitable for a slave. Leads I've found:

- checkpoints
- archive_command
- archive_cleanup

Master postgresq.conf :

[...]
wal_level = 'hot_standby'
archive_mode = on
archive_command = 'rsync -az /var/lib/postgresql/9.1/main/pg_xlog/%f
bar...@nas.lan:/data/pgbarman/psql01/incoming/%f'
max_wal_senders = 5
wal_keep_segments = 64

   What's this parameter's value on Slave?

  Hm... You have a point.
That autovacuum parameter seems to be useless on a slave.
I'll try to remove it and check pg_xlog.



That was not my point. I was actually asking about wal_keep_segments.
Never mind, I found that I had missed the info (found it below. Please see my
response).
Besides, I try to keep my master and standby config as similar as possible (so
my advice is to not switch off autovacuum). The parameters which are
ineffective on the slave won't have an effect anyway. Same goes for parameters
on the master.
This helps me when I swap roles or do a failover. I have fewer parameters to
be worried about.


 Okay


 Can you check pg_log for log files? They may have some info. I am sorry
if you have already provided that info (after I finish I will try to look
at your previous emails in this thread).


 Nothing...
/var/log/postgresql/postgresql-2015-06-17_31.log is empty (except old
messages at the begining related to a configuration issue - which is now
solved - after rebuilding the cluster yesterday).
/var/log/syslog has nothing but these :

Jun 18 09:10:11 Bdd02 postgres[28400]: [2-1] 2015-06-18 09:10:11 CEST LOG:
paquet de démarrage incomplet
Jun 18 09:10:41 Bdd02 postgres[28523]: [2-1] 2015-06-18 09:10:41 CEST LOG:
paquet de démarrage incomplet
Jun 18 09:11:11 Bdd02 postgres[28557]: [2-1] 2015-06-18 09:11:11 CEST LOG:
paquet de démarrage incomplet
Jun 18 09:11:41 Bdd02 postgres[28652]: [2-1] 2015-06-18 09:11:41 CEST LOG:
paquet de démarrage incomplet
Jun 18 09:12:11 Bdd02 postgres[28752]: [2-1] 2015-06-18 09:12:11 CEST LOG:
paquet de démarrage incomplet
Jun 18 09:12:41 Bdd02 postgres[28862]: [2-1] 2015-06-18 09:12:41 CEST LOG:
paquet de démarrage incomplet
Jun 18 09:13:11 Bdd02 postgres[28891]: [2-1] 2015-06-18 09:13:11 CEST LOG:
paquet de démarrage incomplet
Jun 18 09:13:40 Bdd02 postgres[28987]: [2-1] 2015-06-18 09:13:40 CEST LOG:
paquet de démarrage incomplet

These messages are related to Zabbix (psql port check).


You sure these are the only messages you have in the log files?


 Also can you share the vacuum cost parameters in your environment?



  I don't understand that part... is this in postgresql.conf ?

 There are vacuum cost parameters in postgresql.conf

http://www.postgresql.org/docs/9.4/static/runtime-config-resource.html#RUNTIME-CONFIG-RESOURCE-VACUUM-COST



autovacuum = on

Slave postgresql.conf :

[...]
wal_level = minimal
wal_keep_segments = 32



Sorry I missed this somehow earlier. Any reason why you think you need to
retain 32 wal files on slave?


 No but I get the feeling that the parameter is ignored by my slave...
should I try another value ?




AFAIK you don't need this parameter set to > 0 unless you have a cascaded
replica pulling WAL from the standby, or you have backup jobs running to back up
from the standby. Set it to 0 on the standby and check.



hot_standby = on

Slave recovery.conf :

standby_mode = 'on'
primary_conninfo = 'host=10.0.0.1 port=5400 user=postgres'
trigger_file = '/var/lib/postgresql/9.1/triggersql'
restore_command='cp /var/lib/postgresql/9.1/wal_archive/%f %p'
archive_cleanup_command =
'/usr/lib/postgresql/9.1/bin/pg_archivecleanup
/var/lib/postgresql/9.1/wal_archive/ %r'



Also consider setting hot_standby_feedback to on.


 I will check that parameter in the documentation,

Thanks


 How can I reduce the number of WAL files on the hot_standby slave?


Re: [GENERAL] WAL log archival on standby

2015-06-18 Thread Michael Paquier
On Fri, Jun 19, 2015 at 2:38 PM, James Sewell james.sew...@lisasoft.com
wrote:

 Hey All,

 Is it possible to create WAL archive files on standby servers when using
 streaming replication?


Yes and no, standbys do not archive WAL in ~9.4, but you could use
archive_mode = 'always' with the upcoming 9.5.
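For what it's worth, a sketch of what that could look like on a 9.5 standby
(the archive_command path is only an example; archive_mode requires a restart
to change):

ALTER SYSTEM SET archive_mode = 'always';
ALTER SYSTEM SET archive_command = 'cp %p /var/lib/postgresql/wal_archive/%f';
-- then restart the standby for archive_mode to take effect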

I know I can use archive_command or pg_receivexlog - but these will both
 result in me sending information to the standby servers which has already
 been sent via standard streaming replication. This would result in a
 doubling of my network traffic.

Ideally I'd prefer to conserve this bandwidth and write the files to a
 local archive on any active standby.


Both methods you mentioned are the way to go for now I am afraid, or you
wait for 9.5.
Regards,
-- 
Michael


Re: [GENERAL] pg_dump 8.4.9 failing after upgrade to openssl-1.0.1e-30.el6_6.11.x86_64 on redhat linux

2015-06-18 Thread Douglas Stetner
 
 On 18 Jun 2015, at 02:06 , Tom Lane t...@sss.pgh.pa.us wrote:
 
 Douglas Stetner stet...@icloud.com writes:
 Looking for confirmation there is an issue with pg_dump failing after 
 upgrade to openssl-1.0.1e-30.el6_6.11.x86_64 on redhat linux.
 
 Quick thought --- did you restart the Postgres service after upgrading
 openssl?  If not, your server is still using the old library version,
 while pg_dump would be running the new version on the client side.
 I don't know exactly what was done to openssl in the last round of
 revisions, but maybe there is some sort of version compatibility issue.
 
 Also, you really ought to be running something newer than PG 8.4.9.
 
   regards, tom lane


Thanks for the reply Tom.  Unfortunately a restart did not help.  Will try an 
upgrade to 8.4.20 (other software depends on 8.4.x). A remote client with 8.4.20 
works, so fingers crossed.

Douglas Stetner
Mobile 0474 082 019
UNIX - Live Free Or Die







signature.asc
Description: Message signed with OpenPGP using GPGMail


Re: [GENERAL] Issues trying to run bdr_init_copy with new setup

2015-06-18 Thread Craig Ringer
On 17 June 2015 at 07:49, Jorge Torralba jorge.torra...@gmail.com wrote:
 Just started playing with BDR. Originally setup the environment on two
 separate servers as per the quick start guid and used the sql commands to
 add nodes. Moving on to command line, I am running into some issues.

This has also been opened by Jorge as a github issue; see

https://github.com/2ndQuadrant/bdr/issues/88
https://github.com/2ndQuadrant/bdr/issues/89

so I'm following up there to avoid splitting the discussion.

Jorge, if you post the same thing in multiple places, please link between
them! It saves considerable time and hassle.


-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] [PERFORM] PGBOUNCER ISSUE PLEASE HELP(Slowing down the site)

2015-06-18 Thread Sheena, Prabhjot
Here is the output of OS limits

postgres@symds-pg:~ $ ulimit -a

core file size  (blocks, -c) 0
data seg size   (kbytes, -d) unlimited
scheduling priority (-e) 0
file size   (blocks, -f) unlimited
pending signals (-i) 790527
max locked memory   (kbytes, -l) 32
max memory size (kbytes, -m) unlimited
open files  (-n) 4096
pipe size(512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority  (-r) 0
stack size  (kbytes, -s) 10240
cpu time   (seconds, -t) unlimited
max user processes  (-u) 16384
virtual memory  (kbytes, -v) unlimited
file locks  (-x) unlimited


Thanks
Prabhjot

-Original Message-
From: k...@rice.edu [mailto:k...@rice.edu] 
Sent: Thursday, June 18, 2015 10:16 AM
To: Sheena, Prabhjot
Cc: pgsql-performa...@postgresql.org; pgsql-general@postgresql.org
Subject: Re: [PERFORM] PGBOUNCER ISSUE PLEASE HELP(Slowing down the site)

On Thu, Jun 18, 2015 at 05:09:10PM +, Sheena, Prabhjot wrote:
 Guys
  I have an issue going on with PGBOUNCER which is slowing down 
 the site
 
 PGBOUNCER VERSION:  pgbouncer-1.5.4-2.el6 (Hosted on separate machine) (16 
 cpu) 98GB RAM
 DATABASE VERION: postgresql 9.3
 
 When the total client connections to pgbouncer are close to 1000, the site 
 application works fine, but when the total client connections cross 1150, the 
 site application starts showing slowness.
 
 Here is an example of output
 
 postgres@symds-pg:~ $ netstat -atnp | grep 5432  | wc (Not all 
 processes could be identified, non-owned process info will not be 
 shown, you would have to be root to see it all.)
    960    6720  104640
 
 
 As you can see, total connections are around 960 right now and my site 
 application is working fine. When connections cross 1150, the application 
 gets slow, even though I see a lot of available connections because my 
 default_pool_size is set high, at 250. Database performance on the other end 
 is great, with no slow-running queries or anything. So the only place I can 
 think the issue could be is at the PGBOUNCER end.
 

Hi Prabhjot,

This is classic behavior when you have a 1024 file limit. When you are below 
that number, it works fine. Above that number, you must wait for a connection to 
close and exit before you can connect, which will cause a delay. See what ulimit 
has to say?

Regards,
Ken


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] [PERFORM] PGBOUNCER ISSUE PLEASE HELP(Slowing down the site)

2015-06-18 Thread k...@rice.edu
On Thu, Jun 18, 2015 at 05:41:01PM +, Sheena, Prabhjot wrote:
 Here is the output of OS limits
 
 postgres@symds-pg:~ $ ulimit -a
 
 core file size  (blocks, -c) 0
 data seg size   (kbytes, -d) unlimited
 scheduling priority (-e) 0
 file size   (blocks, -f) unlimited
 pending signals (-i) 790527
 max locked memory   (kbytes, -l) 32
 max memory size (kbytes, -m) unlimited
 open files  (-n) 4096
 pipe size(512 bytes, -p) 8
 POSIX message queues (bytes, -q) 819200
 real-time priority  (-r) 0
 stack size  (kbytes, -s) 10240
 cpu time   (seconds, -t) unlimited
 max user processes  (-u) 16384
 virtual memory  (kbytes, -v) unlimited
 file locks  (-x) unlimited
 
 
 Thanks
 Prabhjot
 

I would bump your open files as was suggested in your pgbouncer start
script.

Regards,
Ken


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] [PERFORM] PGBOUNCER ISSUE PLEASE HELP(Slowing down the site)

2015-06-18 Thread k...@rice.edu
On Thu, Jun 18, 2015 at 07:19:13PM +, Sheena, Prabhjot wrote:
 Hi Ken/ Will
 
   I have checked the ulimit value and we are nowhere hitting the max 4096 
 that we have currently set. Is there any other explanation why we should be 
 thinking of bumping it to like ulimit -n 5 ( Add ulimit -n 5 to the 
 start of whatever you use to start pgbouncer (init script, etc..)) even 
 though we are not reaching 4096 max value
 
 Regards
 Prabhjot Singh
 

Hi,

Try attaching to the pgbouncer with strace and see if you are getting any 
particular
errors. Do you have a /etc/security/limits.d directory? And if so, what is in 
it?
We found a nice default ulimit of 1024 for all non-root users. :(

Regards,
Ken


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] [PERFORM] PGBOUNCER ISSUE PLEASE HELP(Slowing down the site)

2015-06-18 Thread Sheena, Prabhjot
Hi Ken/ Will

  I have checked the ulimit value and we are nowhere hitting the max 4096 that 
we have currently set. Is there any other explanation why we should be thinking 
of bumping it to like ulimit -n 5 ( Add ulimit -n 5 to the start of 
whatever you use to start pgbouncer (init script, etc..)) even though we are 
not reaching 4096 max value

Regards
Prabhjot Singh



-Original Message-
From: k...@rice.edu [mailto:k...@rice.edu] 
Sent: Thursday, June 18, 2015 11:10 AM
To: Sheena, Prabhjot
Cc: pgsql-performa...@postgresql.org; pgsql-general@postgresql.org
Subject: Re: [PERFORM] PGBOUNCER ISSUE PLEASE HELP(Slowing down the site)

On Thu, Jun 18, 2015 at 05:41:01PM +, Sheena, Prabhjot wrote:
 Here is the output of OS limits
 
 postgres@symds-pg:~ $ ulimit -a
 
 core file size  (blocks, -c) 0
 data seg size   (kbytes, -d) unlimited
 scheduling priority (-e) 0
 file size   (blocks, -f) unlimited
 pending signals (-i) 790527
 max locked memory   (kbytes, -l) 32
 max memory size (kbytes, -m) unlimited
 open files  (-n) 4096
 pipe size(512 bytes, -p) 8
 POSIX message queues (bytes, -q) 819200
 real-time priority  (-r) 0
 stack size  (kbytes, -s) 10240
 cpu time   (seconds, -t) unlimited
 max user processes  (-u) 16384
 virtual memory  (kbytes, -v) unlimited
 file locks  (-x) unlimited
 
 
 Thanks
 Prabhjot
 

I would bump your open files as was suggested in your pgbouncer start script.

Regards,
Ken


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] [PERFORM] PGBOUNCER ISSUE PLEASE HELP(Slowing down the site)

2015-06-18 Thread Sheena, Prabhjot
Here is the output of pid

postgres@symds-pg:~ $ cat /proc/15610/limits
Limit                     Soft Limit           Hard Limit           Units
Max cpu time              unlimited            unlimited            seconds
Max file size             unlimited            unlimited            bytes
Max data size             unlimited            unlimited            bytes
Max stack size            10485760             unlimited            bytes
Max core file size        0                    0                    bytes
Max resident set          unlimited            unlimited            bytes
Max processes             16384                16384                processes
Max open files            4096                 4096                 files
Max locked memory         32768                32768                bytes
Max address space         unlimited            unlimited            bytes
Max file locks            unlimited            unlimited            locks
Max pending signals       790527               790527               signals
Max msgqueue size         819200               819200               bytes
Max nice priority         0                    0
Max realtime priority     0                    0


Thanks
Prabhjot Singh

-Original Message-
From: Jerry Sievers [mailto:gsiever...@comcast.net] 
Sent: Thursday, June 18, 2015 12:47 PM
To: Sheena, Prabhjot
Cc: k...@rice.edu; Will Platnick; pgsql-performa...@postgresql.org; 
pgsql-general@postgresql.org
Subject: Re: [PERFORM] PGBOUNCER ISSUE PLEASE HELP(Slowing down the site)

Sheena, Prabhjot prabhjot.si...@classmates.com writes:

 Hi Ken/ Will
 
   I have checked the ulimit value and we are nowhere hitting the max
   4096 that we have currently set. Is there any other explanation why
   we should be thinking of bumping it to like ulimit -n 5 ( Add
   ulimit -n 5 to the start of whatever you use to start pgbouncer
   (init script, etc..)) even though we are not reaching 4096 max value

If I can assume you're running on linux, best you get limits readout from...

/proc/$PID-of-bouncer-process/limits

Best not to trust that run time env of interactive shell is same as where 
bouncer launched from.

FWIW


 Regards
 Prabhjot Singh



 -Original Message-
 From: k...@rice.edu [mailto:k...@rice.edu]
 Sent: Thursday, June 18, 2015 11:10 AM
 To: Sheena, Prabhjot
 Cc: pgsql-performa...@postgresql.org; pgsql-general@postgresql.org
 Subject: Re: [PERFORM] PGBOUNCER ISSUE PLEASE HELP(Slowing down the 
 site)

 On Thu, Jun 18, 2015 at 05:41:01PM +, Sheena, Prabhjot wrote:
 Here is the output of OS limits
 
 postgres@symds-pg:~ $ ulimit -a
 
 core file size  (blocks, -c) 0
 data seg size   (kbytes, -d) unlimited
 scheduling priority (-e) 0
 file size   (blocks, -f) unlimited
 pending signals (-i) 790527
 max locked memory   (kbytes, -l) 32
 max memory size (kbytes, -m) unlimited
 open files  (-n) 4096
 pipe size(512 bytes, -p) 8
 POSIX message queues (bytes, -q) 819200
 real-time priority  (-r) 0
 stack size  (kbytes, -s) 10240
 cpu time   (seconds, -t) unlimited
 max user processes  (-u) 16384
 virtual memory  (kbytes, -v) unlimited
 file locks  (-x) unlimited
 
 
 Thanks
 Prabhjot
 

 I would bump your open files as was suggested in your pgbouncer start script.

 Regards,
 Ken

--
Jerry Sievers
Postgres DBA/Development Consulting
e: postgres.consult...@comcast.net
p: 312.241.7800


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] [PERFORM] PGBOUNCER ISSUE PLEASE HELP(Slowing down the site)

2015-06-18 Thread Jerry Sievers
Sheena, Prabhjot prabhjot.si...@classmates.com writes:

 Hi Ken/ Will
 
   I have checked the ulimit value and we are nowhere hitting the max
   4096 that we have currently set. Is there any other explanation why
   we should be thinking of bumping it to like ulimit -n 5 ( Add
   ulimit -n 5 to the start of whatever you use to start pgbouncer
   (init script, etc..)) even though we are not reaching 4096 max value

If I can assume you're running on linux, best you get limits readout
from...

/proc/$PID-of-bouncer-process/limits

Best not to trust that run time env of interactive shell is same as
where bouncer launched from.

FWIW


 Regards
 Prabhjot Singh



 -Original Message-
 From: k...@rice.edu [mailto:k...@rice.edu] 
 Sent: Thursday, June 18, 2015 11:10 AM
 To: Sheena, Prabhjot
 Cc: pgsql-performa...@postgresql.org; pgsql-general@postgresql.org
 Subject: Re: [PERFORM] PGBOUNCER ISSUE PLEASE HELP(Slowing down the site)

 On Thu, Jun 18, 2015 at 05:41:01PM +, Sheena, Prabhjot wrote:
 Here is the output of OS limits
 
 postgres@symds-pg:~ $ ulimit -a
 
 core file size  (blocks, -c) 0
 data seg size   (kbytes, -d) unlimited
 scheduling priority (-e) 0
 file size   (blocks, -f) unlimited
 pending signals (-i) 790527
 max locked memory   (kbytes, -l) 32
 max memory size (kbytes, -m) unlimited
 open files  (-n) 4096
 pipe size(512 bytes, -p) 8
 POSIX message queues (bytes, -q) 819200
 real-time priority  (-r) 0
 stack size  (kbytes, -s) 10240
 cpu time   (seconds, -t) unlimited
 max user processes  (-u) 16384
 virtual memory  (kbytes, -v) unlimited
 file locks  (-x) unlimited
 
 
 Thanks
 Prabhjot
 

 I would bump your open files as was suggested in your pgbouncer start script.

 Regards,
 Ken

-- 
Jerry Sievers
Postgres DBA/Development Consulting
e: postgres.consult...@comcast.net
p: 312.241.7800


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] [PERFORM] PGBOUNCER ISSUE PLEASE HELP(Slowing down the site)

2015-06-18 Thread Igor Neyman


-Original Message-
From: pgsql-performance-ow...@postgresql.org 
[mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of Sheena, Prabhjot
Sent: Thursday, June 18, 2015 3:19 PM
To: k...@rice.edu; Will Platnick
Cc: pgsql-performa...@postgresql.org; pgsql-general@postgresql.org
Subject: Re: [PERFORM] PGBOUNCER ISSUE PLEASE HELP(Slowing down the site)

Hi Ken/ Will

  I have checked the ulimit value and we are nowhere near hitting the max 4096 that 
we have currently set. Is there any other explanation why we should be thinking 
of bumping it to something like ulimit -n 5 (Add ulimit -n 5 to the start of 
whatever you use to start pgbouncer (init script, etc.)) even though we are 
not reaching the 4096 max value?

Regards
Prabhjot Singh



-Original Message-
From: k...@rice.edu [mailto:k...@rice.edu] 
Sent: Thursday, June 18, 2015 11:10 AM
To: Sheena, Prabhjot
Cc: pgsql-performa...@postgresql.org; pgsql-general@postgresql.org
Subject: Re: [PERFORM] PGBOUNCER ISSUE PLEASE HELP(Slowing down the site)

On Thu, Jun 18, 2015 at 05:41:01PM +, Sheena, Prabhjot wrote:
 Here is the output of OS limits
 
 postgres@symds-pg:~ $ ulimit -a
 
 core file size  (blocks, -c) 0
 data seg size   (kbytes, -d) unlimited
 scheduling priority (-e) 0
 file size   (blocks, -f) unlimited
 pending signals (-i) 790527
 max locked memory   (kbytes, -l) 32
 max memory size (kbytes, -m) unlimited
 open files  (-n) 4096
 pipe size(512 bytes, -p) 8
 POSIX message queues (bytes, -q) 819200
 real-time priority  (-r) 0
 stack size  (kbytes, -s) 10240
 cpu time   (seconds, -t) unlimited
 max user processes  (-u) 16384
 virtual memory  (kbytes, -v) unlimited
 file locks  (-x) unlimited
 
 
 Thanks
 Prabhjot
 

I would bump your open files as was suggested in your pgbouncer start script.

Regards,
Ken

---

Why are you so sure that it is PgBouncer causing slowness?

You said bouncer pool_size is set to 250.  How many cores do you have on your 
db server?
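
One quick way to gauge that on the database server itself (a sketch; the
pg_stat_activity.state column needs 9.2 or later):

  SELECT state, count(*)
  FROM pg_stat_activity
  GROUP BY state
  ORDER BY count(*) DESC;

If the number of simultaneously active backends is far above the core count,
a pool_size of 250 is likely doing more harm than good.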

Also, why are you running the bouncer on a separate machine?  It is very 
light-weight, so running it on the db server wouldn't require many additional 
resources, but it would eliminate some of the network traffic that you have 
with the current configuration.

Regards,
Igor Neyman




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Is there any way to measure disk activity for each query?

2015-06-18 Thread Oleg Serov
Hello!

I'm wondering if there is any way to measure how much disk I/O is generated
by a query?



-- 
Best Regards,
Oleg


Re: [GENERAL] Is there any way to measure disk activity for each query?

2015-06-18 Thread Jeff Janes
On Thu, Jun 18, 2015 at 3:05 PM, Oleg Serov sero...@gmail.com wrote:

 Hello!

 I'm wondering if there is any way to measure how much disk I/O is generated
 by a query?


For an individual query execution, you can explain it with

explain (analyze, buffers) select .

It will report the pages hit in the buffer cache versus the pages read.
However, pages that were found in the OS filesystem cache will still be
reported as if they were read from disk.  There is no way (that I know of)
to distinguish true disk I/O from OS caching at the statement level.  The
best approach may be to turn track_io_timing on; then you can see how much
time was spent waiting on page reads.  If not much time was spent, the pages
must be coming from the OS cache.
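
A minimal sketch of that combination (my_table and the query are placeholders;
track_io_timing can be turned on per session by a superuser):

  SET track_io_timing = on;
  EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM my_table;

With it on, the plan nodes include an 'I/O Timings: read=...' line next to the
shared hit/read buffer counts, which is what separates cheap cache hits from
real waits on storage (or at least on the OS).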

If you enable pg_stat_statements extension, you can get the same data
summed over all natural calls of the same query string.  'Natural' meaning
executions from applications, not just queries manually decorated with
'explain (analyze,buffers)'.  This too is best used in conjunction with
track_io_timing.
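
A sketch of that, assuming pg_stat_statements is already listed in
shared_preload_libraries and track_io_timing is on:

  CREATE EXTENSION pg_stat_statements;

  SELECT query, calls, shared_blks_read, blk_read_time, blk_write_time
  FROM pg_stat_statements
  ORDER BY blk_read_time + blk_write_time DESC
  LIMIT 10;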

I've been thinking about making individual statements which
exceed log_min_duration_statement log their track_io_timing numbers and
their rusage numbers into the server log, rather than just their wall-time
durations as it does now.  I'm not sure how that idea is going to work out
yet, though.  Anyway, it wouldn't be until version 9.6 at minimum.

Also, for temp files, see the log_temp_files config parameter.
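
For example (a small sketch; the value is a size threshold in kB, 0 logs every
temp file, -1 disables the logging):

  SET log_temp_files = 0;   -- per session (superuser); or set it in postgresql.conf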

Cheers,

Jeff


Re: [GENERAL] Test for char errors

2015-06-18 Thread 夏高
On Wed, Jun 17, 2015 at 5:15 PM, Gao wrote:

 I don't know why the files are not the same but all tests passed. Help is
 appreciated, thanks!
 Some tests have multiple expected outputs. In the case of char, there
is not only char.out, but as well char_1.out and char_2.out. In your
case char_1.out seems to match.
--
Michael

Thanks Michael! Could you tell me which option determines what expected
output is used?

Gao

2015-06-17 16:27 GMT+08:00 Michael Paquier michael.paqu...@gmail.com:

 On Wed, Jun 17, 2015 at 5:15 PM, 夏高 wrote:
  I don't know why the files are not the same but all tests passed. Help is
  appreciated, thanks!

 Some tests have multiple expected outputs. In the case of char, there
 is not only char.out, but as well char_1.out and char_2.out. In your
 case char_1.out seems to match.
 --
 Michael



[GENERAL] Issues setting up BDR with multiple databases

2015-06-18 Thread Jorge Torralba
On NODE 1

create database christian;

create user christian with password 'christian';

alter database christian owner to christian;


create database lizzie;

create user lizzie with password 'lizzie';

alter database lizzie owner to lizzie;


create database jorge;

create user jorge with password 'jorge';

alter database lizzie owner to jorge;


\c christian;

create extension btree_gist;

create extension bdr;

select bdr.bdr_group_create ( local_node_name := 'dbadevpgha03',
node_external_dsn := 'host=10.5.101.176 port=5432 dbname=christian' );

SELECT bdr.bdr_node_join_wait_for_ready();

select * from bdr.bdr_nodes;



\c lizzie;

create extension btree_gist;

create extension bdr;

select bdr.bdr_group_create ( local_node_name := 'dbadevpgha03',
node_external_dsn := 'host=10.5.101.176 port=5432 dbname=lizzie' );

SELECT bdr.bdr_node_join_wait_for_ready();

select * from bdr.bdr_nodes;



\c jorge;

create extension btree_gist;

create extension bdr;

select bdr.bdr_group_create ( local_node_name := 'dbadevpgha03',
node_external_dsn := 'host=10.5.101.176 port=5432 dbname=jorge' );

SELECT bdr.bdr_node_join_wait_for_ready();

select * from bdr.bdr_nodes;


On NODE 2


-bash-4.1$ bdr_init_copy -D /var/lib/pgsql/9.4-bdr/data -n dbadevpgha04 \
  -d "host=10.5.101.176 dbname=jorge port=5432" \
  --local-dbname="host=10.5.101.179 dbname=jorge"

bdr_init_copy: starting ...

Getting remote server identification ...

Detected 3 BDR database(s) on remote server

Updating BDR configuration on the remote node:

 christian: validating BDR configuration ...

 christian: creating replication slot ...

 christian: creating node entry for local node ...

 jorge: validating BDR configuration ...

 jorge: creating replication slot ...

 jorge: creating node entry for local node ...

 lizzie: validating BDR configuration ...

 lizzie: creating replication slot ...

 lizzie: creating node entry for local node ...

Creating base backup of the remote node...

52504/52504 kB (100%), 1/1 tablespace

Creating restore point on remote node ...

Bringing local node to the restore point ...

Transaction log reset

Initializing BDR on the local node:

 christian: adding the database to BDR cluster ...

Could not add local node to cluster, status PGRES_FATAL_ERROR: ERROR:  node
identity for local dsn does not match current node

DETAIL:  The dsn 'dbname=jorge host=10.5.101.179' connects to a node with
identity (6161869759719318325,2,16389) but the local node is
(6161869759719318325,2,16385)

HINT:  The node_local_dsn (or, for bdr, dsn if node_local_dsn is null)
parameter must refer to the node you're running this function from

CONTEXT:  SQL statement SELECT bdr.internal_begin_join(

'bdr_group_join',

local_node_name,

CASE WHEN node_local_dsn IS NULL THEN node_external_dsn ELSE
node_local_dsn END,

join_using_dsn)

PL/pgSQL function bdr_group_join(text,text,text,text,integer,text[]) line
21 at PERFORM




Thanks for looking.


-- 
Thanks,

Jorge Torralba


Note: This communication may contain privileged or other confidential
information. If you are not the intended recipient, please do not print,
copy, retransmit, disseminate or otherwise use the information. Please
indicate to the sender that you have received this email in error and
delete the copy you received. Thank You.


Re: [GENERAL] Test for char errors

2015-06-18 Thread Michael Paquier
On Fri, Jun 19, 2015 at 8:29 AM, 夏高 xiagao1...@gmail.com wrote:
 Thanks Michael! Could you tell me which option determines what expected
 output is used?

Have a look at results_differ() in pg_regress.c ;) The file selected
as the expected output is the one with the fewest lines of diff.
-- 
Michael


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_xlog on a hot_stanby slave

2015-06-18 Thread Xavier 12



On 18/06/2015 04:00, Sameer Kumar wrote:



On Wed, 17 Jun 2015 15:24 Xavier 12 mania...@gmail.com wrote:


On 17/06/2015 03:17, Sameer Kumar wrote:


On Tue, 16 Jun 2015 16:55 Xavier 12 mania...@gmail.com wrote:

Hi everyone,

Questions about pg_xlogs again...
I have two Postgresql 9.1 servers in a master/slave stream
replication
(hot_standby).

Psql01 (master) is backed up with Barman and pg_xlog is
correctly purged (archive_command is used).

However, Psql02 (slave) has a huge pg_xlog (951 files, 15G
for 7 days only), and it keeps growing until the disk is full.
I have found documentation, tutorials and mailing list threads,
but I don't know what is suitable for a slave. Leads I've found:

- checkpoints
- archive_command
- archive_cleanup

Master postgresq.conf :

[...]
wal_level = 'hot_standby'
archive_mode = on
archive_command = 'rsync -az
/var/lib/postgresql/9.1/main/pg_xlog/%f
bar...@nas.lan:/data/pgbarman/psql01/incoming/%f'
max_wal_senders = 5
wal_keep_segments = 64

What's this parameter's value on Slave?

Hm... You have a point.
That autovacuum parameter seems to be useless on a slave.
I'll try to remove it and check pg_xlog.

That was not my point. I was actually asking about wal_keep_segments. 
Never mind, I found that I had missed the info (it's below; please 
see my response).
Besides, I try to keep my master and standby configs as similar as 
possible (so my advice is not to switch off autovacuum). The parameters 
which are ineffective on the slave won't have an effect anyway; the same 
goes for parameters on the master.
This helps me when I swap roles or do a failover: I have fewer 
parameters to worry about.




Okay

Can you check pg_log for log files? They may have some info. I am 
sorry if you have already provided that info (after I finish I will 
try to look at your previous emails on this thread).




Nothing...
/var/log/postgresql/postgresql-2015-06-17_31.log is empty (except 
old messages at the beginning, related to a configuration issue which is 
now solved after rebuilding the cluster yesterday).

/var/log/syslog has nothing but these :

Jun 18 09:10:11 Bdd02 postgres[28400]: [2-1] 2015-06-18 09:10:11 CEST LOG:  paquet de démarrage incomplet
Jun 18 09:10:41 Bdd02 postgres[28523]: [2-1] 2015-06-18 09:10:41 CEST LOG:  paquet de démarrage incomplet
Jun 18 09:11:11 Bdd02 postgres[28557]: [2-1] 2015-06-18 09:11:11 CEST LOG:  paquet de démarrage incomplet
Jun 18 09:11:41 Bdd02 postgres[28652]: [2-1] 2015-06-18 09:11:41 CEST LOG:  paquet de démarrage incomplet
Jun 18 09:12:11 Bdd02 postgres[28752]: [2-1] 2015-06-18 09:12:11 CEST LOG:  paquet de démarrage incomplet
Jun 18 09:12:41 Bdd02 postgres[28862]: [2-1] 2015-06-18 09:12:41 CEST LOG:  paquet de démarrage incomplet
Jun 18 09:13:11 Bdd02 postgres[28891]: [2-1] 2015-06-18 09:13:11 CEST LOG:  paquet de démarrage incomplet
Jun 18 09:13:40 Bdd02 postgres[28987]: [2-1] 2015-06-18 09:13:40 CEST LOG:  paquet de démarrage incomplet


These messages are related to Zabbix (psql port check).



Also can you share the vacuum cost parameters in your environment?



I don't understand that part... is this in postgresql.conf ?
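
They are ordinary settings (postgresql.conf or per session), so one quick way
to list what the running server is actually using (a sketch):

  SELECT name, setting
  FROM pg_settings
  WHERE name LIKE '%vacuum_cost%';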



autovacuum = on

Slave postgresql.conf :

[...]
wal_level = minimal
wal_keep_segments = 32

Sorry I missed this somehow earlier. Any reason why you think you need 
to retain 32 wal files on slave?




No but I get the feeling that the parameter is ignored by my slave... 
should I try another value ?
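
One way to check what the running standby has actually picked up (a minimal
check, run on the slave):

  SELECT pg_is_in_recovery();   -- should return true on the standby
  SHOW wal_keep_segments;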




hot_standby = on

Slave recovery.conf :

standby_mode = 'on'
primary_conninfo = 'host=10.0.0.1 port=5400 user=postgres'
trigger_file = '/var/lib/postgresql/9.1/triggersql'
restore_command='cp /var/lib/postgresql/9.1/wal_archive/%f
%p'
archive_cleanup_command =
'/usr/lib/postgresql/9.1/bin/pg_archivecleanup
/var/lib/postgresql/9.1/wal_archive/ %r'

Also consider setting hot_standby_feedback to on.



I will check that parameter in the documentation,

Thanks


How can I reduce the number of WAL files on the hot_standby
slave?

Thanks

Regards.

Xavier C.

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Select query regarding info

2015-06-18 Thread Yogesh. Sharma
Hi everyone,

The DB query below is showing the following error on PostgreSQL 9.3.
SELECT '\'' || t2.name || '\'', '\'' || t1.phone_number || '\'', '\'' || 
t1.details || '\'', '\'' || t1.description || '\'', '\'' || (CASE WHEN t1.s_id 
IS NULL THEN 'N/A' ELSE t3.s_type END) || '\'', '\'' || t1.s_id || '\'' FROM 
abc_tble AS t1 LEFT JOIN pqrtable AS t2 ON t1.s_id = nid LEFT JOIN te AS t3 ON 
t1.s_id = t3.s_id;
Invalid command \''. Try \? for help.
But the above query works fine on PostgreSQL 8.3.
An explanation was provided by someone:
The SQL standard defines two single quotes to escape one inside a literal: 
Postgres 8.3 defaulted to a non-standard behavior where it was allowed to 
escape a single quote using a backslash: '\''
This deviation from the SQL standard was always discouraged and can be 
controlled through the configuration parameter standard_conforming_strings 
(http://www.postgresql.org/docs/current/static/runtime-config-compatible.html#GUC-STANDARD-CONFORMING-STRINGS).
With version 9.1 the default for this parameter was changed from off to on. 
Version 8.1 and later would emit a warning when you used the non-standard way 
of escaping single quotes (unless you explicitly turned that off)
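
A small illustration of the two styles (assuming the current default of
standard_conforming_strings = on):

  SELECT 'it''s standard';   -- doubled quote: always valid
  SELECT E'it\'s escaped';   -- backslash escape: needs an E'...' string (or the old non-standard mode)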


Could you please provide the following information: how do I change the 
standard_conforming_strings value in postgresql.conf? I have checked, but 
this option is not present in postgresql.conf.
Because of this option, the query above fails.

Regards,
Yogesh

From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Xavier 12
Sent: Thursday, June 18, 2015 12:47 PM
To: Sameer Kumar; pgsql-general@postgresql.org
Subject: Re: [GENERAL] pg_xlog on a hot_stanby slave


[GENERAL] My postgres is not logging anymore

2015-06-18 Thread Maila Fatticcioni
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Hello.
I have a postgres cluster made by two nodes with OS Ubuntu 14.04.
About two weeks ago I did a minor update of postgres to the version
9.4.1-1.pgdg70+1. After I had to shutdown the machines and I could
start them just last week.
Since then postgresql has been stopping logging on both nodes. If I
destroy the log file and I do a restart, a new file is created but it
remains empty. I have tried to change some configuration parameters
with no success. I have tried to restart postgres on both nodes and
relocate the service as well - nothing. Apart from this everything is
working fine and my applications don't show any errors during the
connection to the database. Any ideas about how to have back the log?

Here my configuration file postgresql.conf:

- -
listen_addresses = '*'

work_mem = 32MB
wal_level = 'hot_standby'
synchronous_commit = on

checkpoint_segments = 128
checkpoint_completion_target = 0.9

archive_mode = on
archive_command = 'rsync -e ssh -a %p
postgres@172.31.7.1:/var/lib/postgresql/pg_archive/%f'

max_wal_senders = 5
wal_keep_segments = 32

hot_standby = on
max_standby_streaming_delay = 30s
wal_receiver_status_interval = 2s
hot_standby_feedback = on

log_min_messages = error
log_min_error_statement = error
log_min_duration_statement = 1

log_connections = on
log_disconnections = on
log_duration = on
log_error_verbosity = terse
log_statement = 'all'
log_timezone = 'localtime'

datestyle = 'iso, mdy'
timezone = 'localtime'
default_text_search_config = 'pg_catalog.english'
restart_after_crash = off
- -
-BEGIN PGP SIGNATURE-
Version: GnuPG v2.0.22 (GNU/Linux)

iEYEARECAAYFAlWCkK0ACgkQi2q3wPb3FcM84ACfQVc4mMWLx/c9cSEBPFsmvDia
vLEAoMj+nWBOueJwM3eafcx8AfMMYcUl
=Gkgk
-END PGP SIGNATURE-


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] My postgres is not logging anymore

2015-06-18 Thread Raymond O'Donnell
On 18/06/2015 10:34, Maila Fatticcioni wrote:
 Hello.
 I have a postgres cluster made by two nodes with OS Ubuntu 14.04.
 About two weeks ago I did a minor update of postgres to the version
 9.4.1-1.pgdg70+1. After I had to shutdown the machines and I could
 start them just last week.
 Since then postgresql has been stopping logging on both nodes. If I
 destroy the log file and I do a restart, a new file is created but it
 remains empty. I have tried to change some configuration parameters
 with no success. I have tried to restart postgres on both nodes and
 relocate the service as well - nothing. Apart from this everything is
 working fine and my applications don't show any errors during the
 connection to the database. Any ideas about how to have back the log?
 
 Here my configuration file postgresql.conf:

What have you got set for the following? - Here's what they're set to on
my laptop (Windows 7, PG 9.4):

log_destination = 'stderr'
logging_collector = on
log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'
log_file_mode = 0600
log_truncate_on_rotation = off
log_rotation_age = 1d
log_rotation_size = 10MB

The comments in postgresql.conf indicate that log_destintion and
logging_collector, at least, need to be set as above to generate log files.

Ray.

-- 
Raymond O'Donnell :: Galway :: Ireland
r...@iol.ie


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] serialization failure why?

2015-06-18 Thread Filipe Pina
Kevin, assuming you will have some time next week to confirm whether it has 
been fixed in some version, I've compiled the test steps in 
http://pastebin.com/4Uqc2kPv

Thanks once again

 On 17/06/2015, at 14:40, Filipe Pina filipe.p...@impactzero.pt wrote:
 
 Hi Kevin,
 
 I have installed:
 PostgreSQL 9.3.7 on x86_64-unknown-linux-gnu, compiled by gcc (Ubuntu 
 4.8.2-19ubuntu1) 4.8.2, 64-bit
 
 But as you mentioned it should have been fixed in later versions, I've 
 upgraded to 9.4:
 PostgreSQL 9.4.4 on x86_64-unknown-linux-gnu, compiled by gcc (Ubuntu 
 4.8.2-19ubuntu1) 4.8.2, 64-bit
 
 Though I still get the failures on 9.4.4..
 
 On Wed, Jun 17, 2015 at 12:52, Kevin Grittner kgri...@ymail.com wrote:
 Filipe Pina filipe.p...@impactzero.pt wrote:
 
  if I drop the foreign key constraint on the stuff_ext table there are
  no failures at all…
 
 It is my recollection that we were excluding the queries used to
 enforce referential integrity constraints from the conflict
 tracking, so I am surprised you are seeing this.  What is the exact
 version you are using (as reported by the version() function)?
 
 I am at a conference this week, away from my normal development
 environment; but I will take a look next week.
 
 Kevin Grittner



Re: [GENERAL] Select query regarding info

2015-06-18 Thread Bill Moran
On Thu, 18 Jun 2015 07:29:37 +
Yogesh. Sharma yogesh1.sha...@nectechnologies.in wrote:

 Hi everyone,
 
 The DB query below is showing the following error on PostgreSQL 9.3.
 SELECT '\'' || t2.name || '\'', '\'' || t1.phone_number || '\'', '\'' || 
 t1.details || '\'', '\'' || t1.description || '\'', '\'' || (CASE WHEN 
 t1.s_id IS NULL THEN 'N/A' ELSE t3.s_type END) || '\'', '\'' || t1.s_id || 
 '\'' FROM abc_tble AS t1 LEFT JOIN pqrtable AS t2 ON t1.s_id = nid LEFT JOIN 
 te AS t3 ON t1.s_id = t3.s_id;
 Invalid command \''. Try \? for help.
 But the above query works fine on PostgreSQL 8.3.
 An explanation was provided by someone:
 The SQL standard defines two single quotes to escape one inside a literal: 
 
 Postgres 8.3 defaulted to a non-standard behavior where it was allowed to 
 escape a single quote using a backslash: '\''
 This deviation from the SQL standard was always discouraged and can be 
 controlled through the configuration parameter standard_conforming_strings 
 (http://www.postgresql.org/docs/current/static/runtime-config-compatible.html#GUC-STANDARD-CONFORMING-STRINGS).
 With version 9.1 the default for this parameter was changed from off to on. 
 Version 8.1 and later would emit a warning when you used the non-standard way 
 of escaping single quotes (unless you explicitly turned that off)
 
 
 Could you please provide the following information: how do I change the 
 standard_conforming_strings value in postgresql.conf? I have checked, but 
 this option is not present in postgresql.conf.

Add it to the file.
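
For example (a sketch; either form restores the old behaviour, though fixing
the queries to use standard quoting, as suggested elsewhere in this thread,
is the better long-term fix):

  -- per session:
  SET standard_conforming_strings = off;
  -- or add this line to postgresql.conf and reload:
  --   standard_conforming_strings = off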

Also, don't reply to unrelated threads with new questions, a lot of
people won't see your question if you do that, and if nobody sees
your question you won't get an answer.

-- 
Bill Moran


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] serialization failure why?

2015-06-18 Thread Simon Riggs
On 17 June 2015 at 13:52, Kevin Grittner kgri...@ymail.com wrote:

 Filipe Pina filipe.p...@impactzero.pt wrote:

  if I drop the foreign key constraint on the stuff_ext table there are
  no failures at all…

 It is my recollection that we were excluding the queries used to
 enforce referential integrity constraints from the conflict
 tracking, so I am surprised you are seeing this.  What is the exact
 version you are using (as reported by the version() function)?


I don't see any mechanism for excluding anything from serializable checks,
so I can't see how that would work.

I can't find any mention of serializability concerns in the RI code itself.

AFAIK it would be strange to exclude FK checks from serializability checks,
since they represent a valid observation of an intermediate state.

Mat Views are excluded but I don't understand why that should be the case.
There is no documented explanation.

-- 
Simon Riggs   http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


Re: [GENERAL] Select query regarding info

2015-06-18 Thread Thomas Kellerer
 Could you please provide the following information:
 
 How do I change the standard_conforming_strings value in postgresql.conf? 

I would not change that option. You should rather stick to standard-conforming 
strings and fix your query. 
That can be done with a simple (and automated) search & replace.
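
For instance, each '\'' fragment in the original query could become a
doubled-quote literal (a sketch of just the first output column; the rest
follows the same pattern):

  SELECT '''' || t2.name || '''' ...

Or, arguably cleaner, let quote_literal() do the quoting, e.g. quote_literal(t2.name).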

Thomas



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general