Re: [HACKERS] proposal: schema variables

2017-11-01 Thread Gilles Darold
Le 01/11/2017 à 05:15, Pavel Stehule a écrit :
>
>
> 2017-10-31 22:28 GMT+01:00 srielau <se...@rielau.com
> <mailto:se...@rielau.com>>:
>
> Pavel,
>
> There is no
> DECLARE TEMP CURSOR
> or
> DECLARE TEMP variable in PLpgSQL
> and
>
>
> sure .. DECLARE TEMP makes no sense; I was talking about the similarity
> between DECLARE and CREATE TEMP
>
>
> CREATE TEMP TABLE has a different meaning from what I understand you
> envision for variables.
>
> But maybe I'm mistaken. Your original post did not describe the entire
> syntax:
> CREATE [TEMP] VARIABLE [IF NOT EXISTS] name AS type
>   [ DEFAULT expression ] [[NOT] NULL]
>   [ ON TRANSACTION END { RESET | DROP } ]
>   [ { VOLATILE | STABLE } ];
>
> In particular, TEMP is not spelled out, nor how its presence affects (or
> doesn't affect) ON TRANSACTION END.
> So maybe if you elaborate I'll understand where you are coming from.
>
>
> TEMP has the same functionality (and implementation) as our temp tables
> - the temp variables are destroyed at session end, but they can also be
> tied to the transaction.

Oh OK, I understand. Thanks for the clarification.

-- 
Gilles Darold
Consultant PostgreSQL
http://dalibo.com - http://dalibo.org



Re: [HACKERS] proposal: schema variables

2017-10-31 Thread Gilles Darold
Le 31/10/2017 à 23:36, Gilles Darold a écrit :
> Le 31/10/2017 à 22:28, srielau a écrit :
>> Pavel,
>>
>> There is no
>> DECLARE TEMP CURSOR
>> or
>> DECLARE TEMP variable in PLpgSQL
>> and
>> CREATE TEMP TABLE has a different meaning from what I understand you
>> envision for variables.
>>
>> But maybe I'm mistaken. Your original post did not describe the entire
>> syntax:
>> CREATE [TEMP] VARIABLE [IF NOT EXISTS] name AS type 
>>   [ DEFAULT expression ] [[NOT] NULL]
>>   [ ON TRANSACTION END { RESET | DROP } ]
>>   [ { VOLATILE | STABLE } ];
>>
>> In particular, TEMP is not spelled out, nor how its presence affects (or
>> doesn't affect) ON TRANSACTION END.
>> So maybe if you elaborate I'll understand where you are coming from.
> I think the TEMP keyword can be removed. If I understand correctly, the
> default scope for a variable is the session, so every transaction in a
> session will see the same value. For transaction-level scope - probably
> the reason for the TEMP keyword - I think [ ON TRANSACTION END { RESET |
> DROP } ] already allows restricting the scope to the transaction without
> needing the TEMP keyword. When a variable is created in a transaction,
> it is temporary if "ON TRANSACTION END DROP" is used, otherwise it
> persists after the transaction ends. I guess this is the same as using
> the TEMP keyword?

I forgot to say that in the latter case the DECLARE statement can be used,
so I don't see the need for this kind of "temporary" variable.

Maybe a variable object like the one used in DB2, as defined in this
document:
https://www.ibm.com/support/knowledgecenter/en/SSEPEK_11.0.0/sqlref/src/tpc/db2z_sql_createvariable.html
would be enough to cover our needs.
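
For reference, a DB2-style global variable looks roughly like this (a
minimal sketch based on that documentation; the names are only
illustrative):

-- DB2-style global variable (illustrative)
CREATE VARIABLE myschema.myvar INTEGER DEFAULT 0;
SET myschema.myvar = 42;
SELECT myschema.myvar FROM sysibm.sysdummy1;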


-- 
Gilles Darold
Consultant PostgreSQL
http://dalibo.com - http://dalibo.org



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] proposal: schema variables

2017-10-31 Thread Gilles Darold
Le 31/10/2017 à 22:28, srielau a écrit :
> Pavel,
>
> There is no
> DECLARE TEMP CURSOR
> or
> DECLARE TEMP variable in PLpgSQL
> and
> CREATE TEMP TABLE has a different meaning from what I understand you
> envision for variables.
>
> But maybe I'm mistaken. Your original post did not describe the entire
> syntax:
> CREATE [TEMP] VARIABLE [IF NOT EXISTS] name AS type 
>   [ DEFAULT expression ] [[NOT] NULL]
>   [ ON TRANSACTION END { RESET | DROP } ]
>   [ { VOLATILE | STABLE } ];
>
> In particular, TEMP is not spelled out, nor how its presence affects (or
> doesn't affect) ON TRANSACTION END.
> So maybe if you elaborate I'll understand where you are coming from.

I think the TEMP keyword can be removed. If I understand correctly, the
default scope for a variable is the session, so every transaction in a
session will see the same value. For transaction-level scope - probably
the reason for the TEMP keyword - I think [ ON TRANSACTION END { RESET |
DROP } ] already allows restricting the scope to the transaction without
needing the TEMP keyword. When a variable is created in a transaction,
it is temporary if "ON TRANSACTION END DROP" is used, otherwise it
persists after the transaction ends. I guess this is the same as using
the TEMP keyword?
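
To illustrate, using the proposed syntax from the original post (not
implemented yet, names purely illustrative), the two forms I have in mind
would be something like:

-- session-scoped, dropped at session end
CREATE TEMP VARIABLE tmpvar AS integer;

-- session variable dropped automatically at transaction end
CREATE VARIABLE txnvar AS integer ON TRANSACTION END DROP;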


-- 
Gilles Darold
Consultant PostgreSQL
http://dalibo.com - http://dalibo.org



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] proposal: schema variables

2017-10-27 Thread Gilles Darold
Le 26/10/2017 à 09:21, Pavel Stehule a écrit :
> Hi,
>
> I propose a new database object - a variable. The variable is a
> persistent object that holds an unshared, session-based, non-transactional,
> in-memory value of any type, like variables in other languages.
> The persistence is required to make static checks possible, but it can
> be limited to the session - the variables can be temporary.
>
> My proposal is related to session variables from Sybase, MSSQL or
> MySQL (based on the @ or @@ prefix), package variables from Oracle
> (where access is controlled by scope), and schema variables from DB2.
> Each design comes from different sources and traditions and has its
> advantages and disadvantages. The basis of my proposal is using schema
> variables as session variables for stored procedures. It should help
> people who try to port complex projects to PostgreSQL from other
> databases.
>
> The Sybase (T-SQL) design is good for interactive work, but it is
> weak for use in stored procedures - static checks are not possible,
> and it is not possible to set access rights on variables.
>
> The Ada design (used by Oracle), based on scope, is great, but our
> environment is not nested. And we should support PLs other than
> PL/pgSQL more strongly.
>
> There are not many other possibilities - a variable that should be
> accessible from different PLs and different procedures (over time) has
> to live somewhere above the PL level, and the only such place is the schema.
>
> The variable can be created by CREATE statement:
>
> CREATE VARIABLE public.myvar AS integer;
> CREATE VARIABLE myschema.myvar AS mytype;
>
> CREATE [TEMP] VARIABLE [IF NOT EXISTS] name AS type
>   [ DEFAULT expression ] [[NOT] NULL]
>   [ ON TRANSACTION END { RESET | DROP } ]
>   [ { VOLATILE | STABLE } ];
>
> It is dropped by the command DROP VARIABLE [ IF EXISTS ] varname.
>
> Access is controlled by the usual access rights - by the GRANT/REVOKE
> commands. The possible rights are: READ, WRITE.
>
> The variables can be modified by the SQL command SET (this is taken
> from the standard, and it is natural):
>
> SET varname = expression;
>
> Unfortunately we already use the SET command for a different purpose.
> But I think we can solve this with a few tricks. The first is moving our
> GUCs to the pg_catalog schema. We can then control the strictness of the
> SET command: in one variant, we can detect a custom GUC and allow it; in
> another, we can disallow custom GUCs and allow only schema variables.
> A new LET command could be an alternative.
>
> The variables can be used in queries implicitly (without a JOIN):
>
> SELECT varname;
>
> The search_path is used when varname is looked up. The variables can be
> used everywhere query parameters are allowed.
>
> I hope this proposal is good enough and simple.
>
> Comments, notes?
>
> regards
>
> Pavel
>
>

Great feature that will help with migration. How will you handle CONSTANT
declarations? With Oracle it is possible to declare a constant as follows:


  varname     CONSTANT INTEGER    := 500;


for a variable that can't be changed. Do you plan to add a CONSTANT or
READONLY keyword, or do you want to use GRANT on the object to deal with
this case?
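
For example, the two options might look roughly like this (purely
hypothetical syntax, only to illustrate the question; READ/WRITE are the
rights from your proposal):

-- hypothetical CONSTANT keyword
CREATE VARIABLE public.max_items AS integer DEFAULT 500 CONSTANT;

-- or restricting writes through the proposed access rights
CREATE VARIABLE public.max_items AS integer DEFAULT 500;
REVOKE WRITE ON VARIABLE public.max_items FROM PUBLIC;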


Regards

-- 
Gilles Darold
Consultant PostgreSQL
http://dalibo.com - http://dalibo.org




-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Issue with circular references in VIEW

2017-07-24 Thread Gilles Darold
Le 24/07/2017 à 21:18, Tom Lane a écrit :
> Gilles Darold <gilles.dar...@dalibo.com> writes:
>> Le 24/07/2017 à 19:19, Tom Lane a écrit :
>>> ... I'm inclined to think in terms of fixing it at that level
>>> rather than in pg_dump.  It doesn't look like it would be hard to fix:
>>> both functions ultimately call get_query_def(), it's just that one passes
>>> down a tuple descriptor for the view while the other currently doesn't.
>> I thought it was intentional that pg_get_ruledef() returns the raw code
>> typed by the user. I will fix it and send a patch following your
>> explanation.
> Oh, I just committed a patch.

That's fine, I'm sure it is better than the one I could produce :-)
Thanks for fixing this issue.

-- 
Gilles Darold
Consultant PostgreSQL
http://dalibo.com - http://dalibo.org



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Issue with circular references in VIEW

2017-07-24 Thread Gilles Darold
Le 24/07/2017 à 19:19, Tom Lane a écrit :
> Gilles Darold <gilles.dar...@dalibo.com> writes:
>> There is an issue with versions prior to 10 when dumping views with circular
>> references. I know that these views are now exported as views in 10, but they
>> are still exported as TABLE + RULE in prior versions. This leads to the
>> following error when the columns of the sub-queries don't have the same alias
>> names:
> The core of this issue, I think, is that pg_get_viewdef() knows that it
> should make what it prints have output column names that match the view,
> whereas pg_get_ruledef() does not, even when it is printing an ON SELECT
> rule.  This is a little bit surprising --- you'd really expect those
> functions to produce identical SELECT statements --- and I think it's
> likely to break other tools even if pg_dump has managed to skirt the
> issue.  So I'm inclined to think in terms of fixing it at that level
> rather than in pg_dump.  It doesn't look like it would be hard to fix:
> both functions ultimately call get_query_def(), it's just that one passes
> down a tuple descriptor for the view while the other currently doesn't.

I thought it was intentional that pg_get_ruledef() returns the raw code
typed by the user. I will fix it and send a patch following your
explanation.

Thanks.

-- 
Gilles Darold
Consultant PostgreSQL
http://dalibo.com - http://dalibo.org



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Issue with circular references in VIEW

2017-07-24 Thread Gilles Darold
Hi,

There is an issue with versions prior to 10 when dumping views with circular
references. I know that these views are now exported as views in 10, but they
are still exported as TABLE + RULE in prior versions. This leads to the
following error when the columns of the sub-queries don't have the same alias
names:

ERROR:  SELECT rule's target entry 1 has different column name from
column "col_a"
DETAIL:  SELECT target entry is named "other_name1".

Here are the steps to reproduce:

CREATE TABLE t1 (f1 INT PRIMARY KEY, f2 text);
 
CREATE VIEW v_t1 (col_a, col_b) AS
 WITH win_query AS (
   SELECT
 1::INTEGER AS col1,
 'b' ::text AS col2
)
SELECT
   imp.col1 AS other_name1,
   imp.col2 AS other_name2
FROM win_query imp
UNION
SELECT
   2::INTEGER AS col1,
   'z'::text AS col2
UNION
SELECT * FROM t1 GROUP BY f1 ;

This is translated into the following code by pg_dump with PostgreSQL 9.x:

CREATE TABLE t1 (
f1 integer NOT NULL,
f2 text
);

CREATE TABLE v_t1 (
col_a integer,
col_b text
);

COPY t1 (f1, f2) FROM stdin;
\.

CREATE RULE "_RETURN" AS
ON SELECT TO v_t1 DO INSTEAD  WITH win_query AS (
 SELECT 1 AS col1,
'b'::text AS col2
)
 SELECT imp.col1 AS other_name1,
imp.col2 AS other_name2
   FROM win_query imp
UNION
 SELECT 2 AS col1,
'z'::text AS col2
UNION
 SELECT t1.f1,
t1.f2
   FROM t1
  GROUP BY t1.f1;

and this dump can't be restored because of the error reported above.

It is clear that the user is responsible for using wrong aliases, but
this doesn't generate an error at creation time, and when looking at the
view through pg_get_viewdef(), the aliases are correctly rewritten:

test_view=# \d+ v_t1
                    View "public.v_t1"
 Column |  Type   | Modifiers | Storage  | Description
--------+---------+-----------+----------+-------------
 col_a  | integer |           | plain    |
 col_b  | text    |           | extended |
View definition:
 WITH win_query AS (
 SELECT 1 AS col1,
'b'::text AS col2
)
 SELECT imp.col1 AS col_a,
imp.col2 AS col_b
   FROM win_query imp
UNION
 SELECT 2 AS col_a,
'z'::text AS col_b
UNION
 SELECT t1.f1 AS col_a,
t1.f2 AS col_b
   FROM t1
  GROUP BY t1.f1;


The rule code retrieved using pg_get_ruledef() still uses the original,
incorrect column aliases:

CREATE RULE "_RETURN" AS
ON SELECT TO v_t1 DO INSTEAD  WITH win_query AS (
 SELECT 1 AS col1,
'b'::text AS col2
)
 SELECT imp.col1 AS other_name1,
imp.col2 AS other_name2
   FROM win_query imp
UNION
 SELECT 2 AS col1,
'z'::text AS col2
UNION
 SELECT t1.f1,
t1.f2
   FROM t1
  GROUP BY t1.f1;

PostgreSQL 10 now uses views instead of table+rule, so the call to
pg_get_viewdef() fixes this issue by itself. My question is: will this
method of exporting views be back-ported to prior versions, or should we
fix it another way?

In the latter case, could using pg_get_viewdef() to reconstruct the
_RETURN rule be a simple fix? For example:

'CREATE RULE "_RETURN" AS
ON SELECT TO v_t1 DO INSTEAD %s;', pg_get_viewdef(...)
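
A minimal sketch of that idea in plain SQL (only to show the shape of the
generated statement; the aliases come out corrected because
pg_get_viewdef() uses the view's column names):

SELECT format('CREATE RULE "_RETURN" AS ON SELECT TO v_t1 DO INSTEAD %s',
              pg_get_viewdef('v_t1'::regclass));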

Of course, manually rewriting the view and replacing it fixes the issue,
but I think that generating a dump that can't be restored easily can
confuse users.

-- 
Gilles Darold
Consultant PostgreSQL
http://dalibo.com - http://dalibo.org



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Patch to implement pg_current_logfile() function

2017-02-19 Thread Gilles Darold
Le 16/02/2017 à 16:13, Robert Haas a écrit :
> On Wed, Feb 15, 2017 at 4:03 PM, Alvaro Herrera
> <alvhe...@2ndquadrant.com> wrote:
>> So what is going on here is that SysLogger_Start() wants to unlink the
>> current-logfile file if the collector is not enabled.  This should
>> probably be split out into a separate new function, for two reasons:
>> first, it doesn't seem good idea to have SysLogger_Start do something
>> other than start the logger; and second, so that we don't have a syscall
>> on each ServerLoop iteration.  That new function should be called from
>> some other place -- perhaps reaper() and just before entering
>> ServerLoop, so that the file is deleted if the syslogger goes away or is
>> not started.
> I think it's sufficient to just remove the file once on postmaster
> startup before trying to launch the syslogger for the first time.
> logging_collector is PGC_POSTMASTER, so if it's not turned on when the
> cluster first starts, it can't be activated later.  If it dies, that
> doesn't seem like a reason to remove the file.  We're going to restart
> the syslogger, and when we do, it can update the file.
>

I've attached a new full v30 patch that includes the last patch from Karl.
Now the current_logfiles file is removed only at postmaster startup, just
before launching the syslogger for the first time.


-- 
Gilles Darold
Consultant PostgreSQL
http://dalibo.com - http://dalibo.org

diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 95afc2c..a668456 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -4280,6 +4280,11 @@ SELECT * FROM parent WHERE key = 2400;
   where to log
  
 
+ 
+   current_logfiles
+   and the log_destination configuration parameter
+ 
+
  
 
  
@@ -4310,6 +4315,27 @@ SELECT * FROM parent WHERE key = 2400;
  must be enabled to generate
 CSV-format log output.

+   
+When either stderr or
+csvlog are included, the file
+current_logfiles is created to record the location
+of the log file(s) currently in use by the logging collector and the
+associated logging destination. This provides a convenient way to
+find the logs currently in use by the instance. Here is an example of
+this file's content:
+
+stderr pg_log/postgresql.log
+csvlog pg_log/postgresql.csv
+
+
+current_logfiles is recreated when a new log file
+is created as an effect of rotation, and
+when log_destination is reloaded.  It is removed when
+neither stderr
+nor csvlog are included
+in log_destination, and when the logging collector is
+disabled.
+   
 

 
diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index d7738b1..cfa6349 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -15468,6 +15468,13 @@ SELECT * FROM pg_ls_dir('.') WITH ORDINALITY AS t(ls,n);
   
 
   
+   pg_current_logfile(text)
+   text
+   Primary log file name, or log in the requested format,
+   currently in use by the logging collector
+  
+
+  
pg_my_temp_schema()
oid
OID of session's temporary schema, or 0 if none
@@ -15686,6 +15693,45 @@ SET search_path TO schema , schema, ..

 

+pg_current_logfile
+   
+
+   
+Logging
+pg_current_logfile function
+   
+
+   
+ current_logfiles
+ and the pg_current_logfile function
+   
+
+   
+Logging
+current_logfiles file and the pg_current_logfile
+function
+   
+
+   
+pg_current_logfile returns, as text,
+the path of the log file(s) currently in use by the logging collector.
+The path includes the  directory
+and the log file name.  Log collection must be enabled or the return value
+is NULL.  When multiple log files exist, each in a
+different format, pg_current_logfile called
+without arguments returns the path of the file having the first format
+found in the ordered list: stderr, csvlog.
+NULL is returned when no log file has any of these
+formats.  To request a specific file format supply, as text,
+either csvlog or stderr as the value of the
+optional parameter. The return value is NULL when the
+log format requested is not a configured
+.  The
+pg_current_logfiles reflects the contents of the
+current_logfiles file.
+   
+
+   
 pg_my_temp_schema

 
diff --git a/doc/src/sgml/storage.sgml b/doc/src/sgml/storage.sgml
index 127b759..262adea 100644
--- a/doc/src/sgml/storage.sgml
+++ b/doc/src/sgml/storage.sgml
@@ -36,6 +36,10 @@ these required items, the cluster configuration files
 PGDATA, although it is possible to place them elsewhere.
 
 
+
+  current_logfiles
+
+
 
 Contents of PGDATA
 
@@ -61,6 +65,12 @@ Item
 
 
 
+ current_logfiles
+ File recording the log file(s) currently writt

Re: [HACKERS] Patch to implement pg_current_logfile() function

2017-01-17 Thread Gilles Darold
Le 17/01/2017 à 19:58, Karl O. Pinc a écrit :
> On Tue, 17 Jan 2017 19:06:22 +0100
> Gilles Darold <gilles.dar...@dalibo.com> wrote:
>
>> Le 17/01/2017 à 03:22, Michael Paquier a écrit :
>>> On Tue, Jan 17, 2017 at 2:21 AM, Karl O. Pinc <k...@meme.com>
>>> wrote:  
>>>> On January 15, 2017 11:47:51 PM CST, Michael Paquier
>>>> <michael.paqu...@gmail.com> wrote:  
>
>>>>> Also, I would rather see an ERROR returned to the user in case of
>>>>> bad data in current_logfiles, I did not change that either as
>>>>> that's the original intention of Gilles.  
>> I'm not against adding a warning or error message here even if it may
>> never occur, but we need a new error code, as it seems to me that no
>> existing error code can be used.
> Seems to me the XX001 "data_corrupted" sub-category of
> XX000 "internal_error" is appropriate.

I don't think so; this file doesn't contain any data, so we must not
report such an error in this case. Something like "bad/unknown file
format" would be better.

 

-- 
Gilles Darold
Consultant PostgreSQL
http://dalibo.com - http://dalibo.org



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Patch to implement pg_current_logfile() function

2017-01-17 Thread Gilles Darold
Le 17/01/2017 à 03:22, Michael Paquier a écrit :
> On Tue, Jan 17, 2017 at 2:21 AM, Karl O. Pinc <k...@meme.com> wrote:
>> On January 15, 2017 11:47:51 PM CST, Michael Paquier 
>> <michael.paqu...@gmail.com> wrote:
>>> On Mon, Jan 16, 2017 at 1:14 PM, Karl O. Pinc <k...@meme.com> wrote:
>>> With all those issues fixed, I finish with the attached, that I am
>>> fine to pass down to a committer. I still think that this should be
>>> only one function using a SRF with rows shaped as (type, path) for
>>> simplicity, but as I am visibly outnumbered I won't insist further.
>> That also makes a certain amount of sense to me but I can't say I have 
>> thought deeply about it. Haven't paid any attention to this issue and am 
>> fine letting things move forward as is.
> Gilles, what's your opinion here? As the author that's your call at
> the end. What the point here is would be to change
> pg_current_logfiles() to something like that:
> =# SELECT * FROM pg_current_logfiles();
>  method | file
> --------+-----------------------
>  stderr | pg_log/postgresql.log
>  csvlog | pg_log/postgresql.csv
> And using this SRF users can filter the method with a WHERE clause.
> And as a result the 1-arg version is removed. No rows are returned if
> current_logfiles does not exist. I don't think there is much need for
> a system view either.

This has already been discussed previously in this thread; one of my
previous patch versions implemented this behavior, but we decided that
what we really want is to be able to use the function with the
following simple query:

SELECT pg_read_file(pg_current_logfiles());

and not something like

SELECT pg_read_file(SELECT file FROM pg_current_logfiles() LIMIT 1);
or
SELECT pg_read_file(SELECT file FROM pg_current_logfiles() WHERE
method='stderr');

You can also obtain the same kind of output as the SRF would produce by using:

SELECT pg_read_file('current_logfiles');


>>> Also, I would rather see an ERROR returned to the user in case of bad
>>> data in current_logfiles, I did not change that either as that's the
>>> original intention of Gilles.
>> I could be wrong but I seem to recall that Robert recommended against an 
>> error message. If there is bad data then something is really wrong, up to 
>> some sort of an attack on the back end. Because this sort of thing simply 
>> shouldn't happen it's enough in my opinion to guard against buffer overruns 
>> etc and just get on with life. If something goes unexpectedly and badly 
>> wrong with internal data structures in general there would have to be all 
>> kinds of additional code to cover every possible problem and that doesn't 
>> seem to make sense.
> Hm... OK. At the same time not throwing at least a WARNING is
> confusing, because a NULL result is returned as well even if the input
> method is incorrect and even if the method is correct but it is not
> present in current_logfiles. As the user is considered a trusted user
> since they have access to this function, I would think that being verbose on
> the error handling, or at least warnings, would make things easier to
> analyze.

I'm not against adding a warning or error message here even if it may
never occur, but we need a new error code, as it seems to me that no
existing error code can be used.


-- 
Gilles Darold
Consultant PostgreSQL
http://dalibo.com - http://dalibo.org



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Patch to implement pg_current_logfile() function

2017-01-15 Thread Gilles Darold
Le 15/01/2017 à 06:54, Michael Paquier a écrit :
> On Sat, Jan 14, 2017 at 10:02 PM, Gilles Darold
> <gilles.dar...@dalibo.com> wrote:
>> Le 13/01/2017 à 14:09, Michael Paquier a écrit :
>>> On Fri, Jan 13, 2017 at 5:48 PM, Gilles Darold <gilles.dar...@dalibo.com> 
>>> wrote:
>>>> Le 13/01/2017 à 05:26, Michael Paquier a écrit :
>>>>> Surely the temporary file of current_logfiles should not be included
>>>>> in base backups (look at basebackup.c). Documentation needs to reflect
>>>>> that as well. Also, should we have current_logfiles in a base backup?
>>>>> I would think no.
>>>> Done but can't find any documentation about the file exclusion, do you
>>>> have a pointer?
>>> protocol.sgml, in the protocol-replication part. You could search for
>>> the paragraph that contains postmaster.opts. There is no real harm in
>>> including current_logfiles in base backups, but that's really in the
>>> same bag as postmaster.opts or postmaster.pid, particularly if the log
>>> file name has a timestamp.
>> Thanks for the pointer. After reading this part I think it only concerns
>> files that are critical at restore time. I still think that the file
>> need not be removed automatically from the backup. If it is restored it
>> has strictly no impact at all; it will be removed or overwritten at
>> startup. We can let the user choose whether to remove it from the backup
>> or not; the file can be a help in finding the latest log file related to
>> a backup.
> OK, no problem for me. I can see that your patch does the right thing
> to ignore the current_logfiles.tmp. Could you please update
> t/010_pg_basebackup.pl and add this file to the list of files filled
> with DONOTCOPY?
Applied in attached patch v22.

>>>>> pg_current_logfile() can be run by *any* user. We had better revoke
>>>>> its access in system_views.sql!
>>>> Why? There is no special information returned by this function other
>>>> than the path, and it can be retrieved using show log_directory.
>>> log_directory could be an absolute path, and we surely don't want to
>>> let normal users have a look at it. For example, "show log_directory"
>>> can only be seen by superusers. As Stephen Frost is for a couple of
>>> months (years?) on a holly war path against super-user checks in
>>> system functions, I think that revoking the access to this function is
>>> the best thing to do. It is as well easier to restrict first and
>>> relax, the reverse is harder to justify from a compatibility point of
>>> view.
>> Right, I've appended a REVOKE to the function in
>> backend/catalog/system_views.sql; patch v21 reflects this change.
> Thanks. That looks good.
>
> +   {
> +   /* Logging collector is not enabled. We don't know where messages are
> +* logged.  Remove outdated file holding the current log filenames.
> +*/
> +   unlink(LOG_METAINFO_DATAFILE);
> return 0
> This comment format is not postgres-like.

Fixed too.


-- 
Gilles Darold
Consultant PostgreSQL
http://dalibo.com - http://dalibo.org

diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 07afa3c..c44984d 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -4211,6 +4211,17 @@ SELECT * FROM parent WHERE key = 2400;
  server log
 
 
+
+ When logs are written to the file system, the  file includes the type,
+ the location and the name of the logs currently in use. This provides a
+ convenient way to find the logs currently in used by the instance.
+
+$ cat $PGDATA/current_logfiles
+stderr pg_log/postgresql-2017-01-13_085812.log
+
+
+
 
  Where To Log
 
diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index 10e3186..693669b 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -15441,6 +15441,13 @@ SELECT * FROM pg_ls_dir('.') WITH ORDINALITY AS t(ls,n);
   
 
   
+   pg_current_logfile(text)
+   text
+   Primary log file name, or log in the requested format,
+   currently in use by the logging collector
+  
+
+  
pg_my_temp_schema()
oid
OID of session's temporary schema, or 0 if none
@@ -15658,6 +15665,39 @@ SET search_path TO schema , schema, ..
 the time when the postmaster process re-read the configuration files.)

 
+
+pg_current_logile
+   
+
+   
+Logging
+pg_current_logfile function
+   
+
+   
+pg_current_logfile returns, as text,
+the path of either the csv or stderr log file currently in use by the
+logging collector.  This is a path including the
+ directory and the log

Re: [HACKERS] Patch to implement pg_current_logfile() function

2017-01-14 Thread Gilles Darold
Le 13/01/2017 à 14:09, Michael Paquier a écrit :
> On Fri, Jan 13, 2017 at 5:48 PM, Gilles Darold <gilles.dar...@dalibo.com> 
> wrote:
>> Le 13/01/2017 à 05:26, Michael Paquier a écrit :
>>> Surely the temporary file of current_logfiles should not be included
>>> in base backups (look at basebackup.c). Documentation needs to reflect
>>> that as well. Also, should we have current_logfiles in a base backup?
>>> I would think no.
>> Done but can't find any documentation about the file exclusion, do you
>> have a pointer?
> protocol.sgml, in the protocol-replication part. You could search for
> the paragraph that contains postmaster.opts. There is no real harm in
> including current_logfiles in base backups, but that's really in the
> same bag as postmaster.opts or postmaster.pid, particularly if the log
> file name has a timestamp.
Thanks for the pointer. After reading this part I think it only concerns
files that are critical at restore time. I still think that the file
need not be removed automatically from the backup. If it is restored it
has strictly no impact at all; it will be removed or overwritten at
startup. We can let the user choose whether to remove it from the backup
or not; the file can be a help in finding the latest log file related to
a backup.

>
>>> pg_current_logfile() can be run by *any* user. We had better revoke
>>> its access in system_views.sql!
>> Why? There is no special information returned by this function other
>> than the path, and it can be retrieved using show log_directory.
> log_directory could be an absolute path, and we surely don't want to
> let normal users have a look at it. For example, "show log_directory"
> can only be seen by superusers. As Stephen Frost is for a couple of
> months (years?) on a holy war path against super-user checks in
> system functions, I think that revoking the access to this function is
> the best thing to do. It is as well easier to restrict first and
> relax, the reverse is harder to justify from a compatibility point of
> view.

Right, I've appended a REVOKE to the function in
backend/catalog/system_views.sql; patch v21 reflects this change.

Thanks.

-- 
Gilles Darold
Consultant PostgreSQL
http://dalibo.com - http://dalibo.org

diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 30dd54c..393195f 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -4211,6 +4211,17 @@ SELECT * FROM parent WHERE key = 2400;
  server log
 
 
+
+ When logs are written to the file system, the  file includes the type,
+ the location and the name of the logs currently in use. This provides a
+ convenient way to find the logs currently in used by the instance.
+
+$ cat $PGDATA/current_logfiles
+stderr pg_log/postgresql-2017-01-13_085812.log
+
+
+
 
  Where To Log
 
diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index 10e3186..693669b 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -15441,6 +15441,13 @@ SELECT * FROM pg_ls_dir('.') WITH ORDINALITY AS t(ls,n);
   
 
   
+   pg_current_logfile(text)
+   text
+   Primary log file name, or log in the requested format,
+   currently in use by the logging collector
+  
+
+  
pg_my_temp_schema()
oid
OID of session's temporary schema, or 0 if none
@@ -15658,6 +15665,39 @@ SET search_path TO schema , schema, ..
 the time when the postmaster process re-read the configuration files.)

 
+
+pg_current_logile
+   
+
+   
+Logging
+pg_current_logfile function
+   
+
+   
+pg_current_logfile returns, as text,
+the path of either the csv or stderr log file currently in use by the
+logging collector.  This is a path including the
+ directory and the log file name.
+Log collection must be active or the return value
+is NULL.  When multiple log files exist, each in a
+different format, pg_current_logfile called
+without arguments returns the path of the file having the first format
+found in the ordered
+list: stderr, csvlog.
+NULL is returned when no log file has any of these
+formats.  To request a specific file format supply,
+as text, either csvlog
+or stderr as the value of the optional parameter. The
+return value is NULL when the log format requested
+is not a configured .
+
+pg_current_logfiles reflects the content of the
+ file.  All caveats
+regards current_logfiles content are applicable
+to pg_current_logfiles' return value.
+   
+

 pg_my_temp_schema

diff --git a/doc/src/sgml/storage.sgml b/doc/src/sgml/storage.sgml
index 5c52824..3ce2a7e 100644
--- a/doc/src/sgml/storage.sgml
+++ b/doc/src/sgml/storage.sgml
@@ -60,6 +60,38 @@ Item
  Subdirectory containing per-database subdirectories
 
 
+
+ 
+  
+   cu

Re: [HACKERS] Patch to implement pg_current_logfile() function

2017-01-13 Thread Gilles Darold
Le 13/01/2017 à 05:26, Michael Paquier a écrit :
> OK. I have been looking at this patch.
>
> git diff --check is very unhappy, particularly because of
> update_metainfo_datafile().
Sorry, fixed.

> +
> +When logs are written to the file-system their paths, names, and
> +types are recorded in
> +the  file.  This provides
> +a convenient way to find and access log content without establishing a
> +database connection.
> +
> File system is used as a name here. In short, "file-system" -> "file
> system". I think that it would be a good idea to show up the output of
> this file. This could be reworded a bit:
> "When logs are written to the file system, the  linkend="storage-pgdata-current-logfiles"> file includes the location,
> the name and the type of the logs currently in use. This provides a
> convenient way to find the logs currently in used by the instance."
Changes applied. Output example added:

 
$ cat $PGDATA/current_logfiles
stderr pg_log/postgresql-2017-01-13_085812.log



> @@ -15441,6 +15441,19 @@ SELECT * FROM pg_ls_dir('.') WITH ORDINALITY
> AS t(ls,n);
>
>
>
> +   
> pg_current_logfile()
> +   text
> +   primary log file name in use by the logging collector
> +  
> +
> +  
> +   
> pg_current_logfile(text)
> +   text
> +   log file name, of log in the requested format, in use by the
> +   logging collector
> +  
> You could just use one line, using squared brackets for the additional
> argument. The description looks imprecise, perhaps something like that
> would be better: "Log file name currently in use by the logging
> collector".
OK, back to a single row entry with optional parameter.

> +/* in backend/utils/adt/misc.c and backend/postmaster/syslogger.c */
> +/*
> + * Name of file holding the paths, names, and types of the files where 
> current
> + * log messages are written, when the log collector is enabled.  Useful
> + * outside of Postgres when finding the name of the current log file in the
> + * case of time-based log rotation.
> + */
> Not mentioning the file names here would be better. If this spreads in
> the future updates would likely be forgotten. This paragraph could be
> reworked: "configuration file saving meta-data information about the
> log files currently in use by the system logging process".
Removed.

> --- a/src/include/miscadmin.h
> +++ b/src/include/miscadmin.h
> @@ -466,5 +466,4 @@ extern bool has_rolreplication(Oid roleid);
>  /* in access/transam/xlog.c */
>  extern bool BackupInProgress(void);
>  extern void CancelBackup(void);
> -
>  #endif   /* MISCADMIN_H */
> Unrelated diff.
Removed

> +   /*
> +* Force rewriting last log filename when reloading configuration,
> +* even if rotation_requested is false, log_destination may have
> +* been changed and we don't want to wait the next file rotation.
> +*/
> +   update_metainfo_datafile();
> +
> }
> I know I am famous for being noisy on such things, but please be
> careful with newline use..
That's ok for me, unnecessary newline removed.

> +   else
> +   {
> +   /* Unknown format, no space. Return NULL to caller. */
> +   lbuffer[0] = '\0';
> +   break;
> +   }
> Hm. I would think that an ERROR is more useful to the user here.
The problem addressed here should never happen unless there is file
corruption, but if you know an error code that can be used in this case I
will add the error message. I cannot find any usable error code for this
case.

> Perhaps pg_current_logfile() should be marked as STRICT?
I don't think so; the log format parameter is optional, and passing NULL
should be treated like passing no parameter.

> Would it make sense to have the 0-argument version be a SRF returning
> rows of '(log_destination,location)'? We could just live with this
> function I think, and let the caller filter by logging method.
No, this case has already been eliminated during the previous review.

> +is NULL.  When multiple logfiles exist, each in a
> +different format, pg_current_logfile called
> s/logfiles/log files/.
Applied.

> Surely the temporary file of current_logfiles should not be included
> in base backups (look at basebackup.c). Documentation needs to reflect
> that as well. Also, should we have current_logfiles in a base backup?
> I would think no.
Done, but I can't find any documentation about the file exclusion; do you
have a pointer?

> pg_current_logfile() can be run by *any* user. We had better revoke
> its access in system_views.sql!
Why? There is no special information returned by this function u

Re: [HACKERS] Patch to implement pg_current_logfile() function

2017-01-12 Thread Gilles Darold
Le 11/01/2017 à 08:39, Michael Paquier a écrit :
> On Wed, Dec 14, 2016 at 12:19 AM, Robert Haas <robertmh...@gmail.com> wrote:
>> On Mon, Dec 12, 2016 at 4:31 AM, Gilles Darold <gilles.dar...@dalibo.com> 
>> wrote:
>>> Applied in last version of the patch v18 as well as removing of an
>>> unused variable.
>> OK, this looks a lot closer to being committable.  But:
>>
>> [long review]
>  
> Gilles, could you update the patch based on this review from Robert? I
> am marking the patch as "waiting on author" for the time being. I
> looked a bit at the patch but did not notice anything wrong with it,
> but let's see with a fresh version...
Hi,

My bad, I thought that Karl had planned an update of the patch in his
last post; sorry for my misunderstanding.

Attached is v19 of the patch, which should address all the comments from
Robert's last review.

Best regards,

-- 
Gilles Darold
Consultant PostgreSQL
http://dalibo.com - http://dalibo.org

diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 30dd54c..dd23932 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -4211,6 +4211,14 @@ SELECT * FROM parent WHERE key = 2400;
  server log
 
 
+
+When logs are written to the file-system their paths, names, and
+types are recorded in
+the  file.  This provides
+a convenient way to find and access log content without establishing a
+database connection.
+
+
 
  Where To Log
 
diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index 10e3186..e4723f4 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -15441,6 +15441,19 @@ SELECT * FROM pg_ls_dir('.') WITH ORDINALITY AS t(ls,n);
   
 
   
+   pg_current_logfile()
+   text
+   primary log file name in use by the logging collector
+  
+
+  
+   pg_current_logfile(text)
+   text
+   log file name, of log in the requested format, in use by the
+   logging collector
+  
+
+  
pg_my_temp_schema()
oid
OID of session's temporary schema, or 0 if none
@@ -15658,6 +15671,39 @@ SET search_path TO schema , schema, ..
 the time when the postmaster process re-read the configuration files.)

 
+
+pg_current_logile
+   
+
+   
+Logging
+pg_current_logfile function
+   
+
+   
+pg_current_logfile returns, as text,
+the path of either the csv or stderr log file currently in use by the
+logging collector.  This is a path including the
+ directory and the log file name.
+Log collection must be active or the return value
+is NULL.  When multiple logfiles exist, each in a
+different format, pg_current_logfile called
+without arguments returns the path of the file having the first format
+found in the ordered
+list: stderr, csvlog.
+NULL is returned when no log file has any of these
+formats.  To request a specific file format supply,
+as text, either csvlog
+or stderr as the value of the optional parameter. The
+return value is NULL when the log format requested
+is not a configured .
+
+pg_current_logfiles reflects the content of the
+ file.  All caveats
+regards current_logfiles content are applicable
+to pg_current_logfiles' return value.
+   
+

 pg_my_temp_schema

diff --git a/doc/src/sgml/storage.sgml b/doc/src/sgml/storage.sgml
index 5c52824..3ce2a7e 100644
--- a/doc/src/sgml/storage.sgml
+++ b/doc/src/sgml/storage.sgml
@@ -60,6 +60,38 @@ Item
  Subdirectory containing per-database subdirectories
 
 
+
+ 
+  
+   current_logfiles
+  
+  
+   Logging
+   current_logfiles file
+  
+  current_logfiles
+ 
+ 
+  
+  A file recording the log file(s) currently written to by the syslogger
+  and the file's log formats, stderr
+  or csvlog.  Each line of the file is a space separated list of
+  two elements: the log format and the full path to the log file including the
+  value of .  The log format must be present
+  in  to be listed in
+  current_logfiles.
+  
+
+  
+  The current_logfiles file
+  is present only when  is
+  activated and when at least one of stderr or
+  csvlog value is present in
+  .
+  
+ 
+
+
 
  global
  Subdirectory containing cluster-wide tables, such as
diff --git a/src/backend/postmaster/syslogger.c b/src/backend/postmaster/syslogger.c
index 13a0301..ecaeeef 100644
--- a/src/backend/postmaster/syslogger.c
+++ b/src/backend/postmaster/syslogger.c
@@ -146,7 +146,7 @@ static char *logfile_getname(pg_time_t timestamp, const char *suffix);
 static void set_next_rotation_time(void);
 static void sigHupHandler(SIGNAL_ARGS);
 static void sigUsr1Handler(SIGNAL_ARGS);
-
+static void update_metainfo_datafile(void);
 
 /*
  * Main entry point for syslogger process
@@ -348,6 +348,13 @@ SysLoggerMain(int argc, char *argv[])
 rotation_disabled = false;
 rotation_requested = true;
 			}
+

Re: [HACKERS] Packages: Again

2017-01-11 Thread Gilles Darold
 my
best to get out more releases; your reports/feedback help me a lot to
add more automatic migration code. I don't think it is possible to have
a 100% automatic migration, because there will always be some things that
need a manual rewrite, like point 3 - I don't think we want behavior like
NULL equals EMPTY. There are also tons of external modules like DBMS_*
that can be compared to extensions, but if everyone shares their work on
migration, perhaps we can save more time.


Regards,



-- 
Gilles Darold
Consultant PostgreSQL
http://dalibo.com - http://dalibo.org



Re: [HACKERS] Patch to implement pg_current_logfile() function

2016-12-12 Thread Gilles Darold
Le 11/12/2016 à 04:38, Karl O. Pinc a écrit :
> On Sat, 10 Dec 2016 19:41:21 -0600
> "Karl O. Pinc" <k...@meme.com> wrote:
>
>> On Fri, 9 Dec 2016 23:36:12 -0600
>> "Karl O. Pinc" <k...@meme.com> wrote:
>>
>>> Instead I propose (code I have not actually executed):
>>> ...
>>> charlbuffer[MAXPGPATH];
>>> char*log_format = lbuffer;
>>> ...
>>>
>>> /* extract log format and log file path from the line */
>>> log_filepath = strchr(lbuffer, ' ');  /* lbuffer == log_format
>>> */ *log_filepath = '\0'; /* terminate log_format */
>>> log_filepath++;   /* start of file path */
>>> log_filepath[strcspn(log_filepath, "\n")] = '\0';  
>> Er, I guess I prefer the more paranoid, just because who knows
>> what might have manged to somehow write the file that's read
>> into lbuffer:
>>
>> ...
>> charlbuffer[MAXPGPATH];
>> char*log_format = lbuffer;
>> ...
>>
>> /* extract log format and log file path from the line */
>> if (log_filepath = strchr(lbuffer, ' ')) /* lbuffer == log_format
>> */ *log_filepath = '\0';/* terminate log_format */
>> log_filepath++;  /* start of file path */
>> log_filepath[strcspn(log_filepath, "\n")] = '\0';
> *sigh*
>
>
> ...
> charlbuffer[MAXPGPATH];
> char*log_format = lbuffer;
> ...
>
> /* extract log format and log file path from the line */
> /* lbuffer == log_format, they share storage */
> if (log_filepath = strchr(lbuffer, ' '))
> *log_filepath = '\0';   /* terminate log_format */
> else
> {
> /* Unknown format, no space. Return NULL to caller. */
> lbuffer[0] = '\0';
> break;
> }
> log_filepath++;  /* start of file path   */
> log_filepath[strcspn(log_filepath, "\n")] = '\0';
>

Applied in the last version of the patch, v18, along with the removal of an
unused variable.


-- 
Gilles Darold
Consultant PostgreSQL
http://dalibo.com - http://dalibo.org

diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 0fc4e57..871efec 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -4169,6 +4169,14 @@ SELECT * FROM parent WHERE key = 2400;
  server log
 
 
+
+When logs are written to the file-system their paths, names, and
+types are recorded in
+the  file.  This provides
+a convenient way to find and access log content without establishing a
+database connection.
+
+
 
  Where To Log
 
diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index eca98df..20de903 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -15444,6 +15444,19 @@ SELECT * FROM pg_ls_dir('.') WITH ORDINALITY AS t(ls,n);
   
 
   
+   pg_current_logfile()
+   text
+   primary log file name in use by the logging collector
+  
+
+  
+   pg_current_logfile(text)
+   text
+   log file name, of log in the requested format, in use by the
+   logging collector
+  
+
+  
pg_my_temp_schema()
oid
OID of session's temporary schema, or 0 if none
@@ -15661,6 +15674,39 @@ SET search_path TO schema , schema, ..
 the time when the postmaster process re-read the configuration files.)

 
+
+pg_current_logile
+   
+
+   
+Logging
+pg_current_logfile function
+   
+
+   
+pg_current_logfile returns, as text,
+the path of either the csv or stderr log file currently in use by the
+logging collector.  This is a path including the
+ directory and the log file name.
+Log collection must be active or the return value
+is NULL.  When multiple logfiles exist, each in a
+different format, pg_current_logfile called
+without arguments returns the path of the file having the first format
+found in the ordered
+list: stderr, csvlog.
+NULL is returned when no log file has any of these
+formats.  To request a specific file format supply,
+as text, either csvlog
+or stderr as the value of the optional parameter. The
+return value is NULL when the log format requested
+is not a configured .
+
+pg_current_logfiles reflects the content of the
+ file.  All caveats
+regards current_logfiles content are applicable
+to pg_current_logfiles' return value.
+   
+

 pg_my_temp_schema

diff --git a/doc/src/sgml/storage.sgml b/doc/src/sgml/storage.sgml
index 5c52824..3b003f5 100644
--- a/doc/src/sgml/storage.sgml
+++ b/doc/src/sgml/storage.sgml
@@ -60,6 +60,81 @@ Item
  Subdirectory containing per-database subdirectories
 
 
+
+ 
+  
+   current_logfile

Re: [HACKERS] Patch to implement pg_current_logfile() function

2016-12-09 Thread Gilles Darold
Le 08/12/2016 à 00:52, Robert Haas a écrit :
> On Tue, Dec 6, 2016 at 6:11 PM, Gilles Darold <gilles.dar...@dalibo.com> 
> wrote:
>> It seems that all fixes have been included in this patch.
> Yikes.  This patch has certainly had a lot of review, but it has lots
> of basic inconsistencies with PostgreSQL coding style, which one would
> think somebody would have noticed by now, and there are multiple
> places where the logic seems to do in a complicated way something that
> could be done significantly more simply.  I don't have any objection
> to the basic idea of this patch, but it's got to look and feel like
> the surrounding PostgreSQL code.  And not be unnecessarily
> complicated.
>
> Detailed comments below:
>
> The SGML doesn't match the surrounding style - for example, 
> typically appears on a line by itself.
Fixed.

> +if (!Logging_collector) {
>
> Formatting.
Fixed.

> rm_log_metainfo() could be merged into logfile_writename(), since
> they're always called in conjunction and in the same pattern.  The
> function is poorly named; it should be something like
> update_metainfo_datafile().
Merged, and the logfile_writename() function renamed as suggested.

> +if (errno == ENFILE || errno == EMFILE)
> +ereport(LOG,
> +(errmsg("system is too busy to write logfile meta
> info, %s will be updated on next rotation (or use SIGHUP to retry)",
> LOG_METAINFO_DATAFILE)));
>
> This seems like a totally unprincipled reaction to a purely arbitrary
> subset of errors.  EMFILE or ENFILE doesn't represent a general "too
> busy" condition, and logfile_open() has already logged the real error.
Removed.

> +snprintf(tempfn, sizeof(tempfn), "%s.tmp", LOG_METAINFO_DATAFILE);
>
> You don't really need to use snprintf() here.  You could #define
> LOG_METAINFO_DATAFILE_TMP LOG_METAINFO_DATAFILE ".tmp" and compute
> this at compile time instead of runtime.
Done and added to syslogger.h

> +if (PG_NARGS() == 1) {
> +fmt = PG_ARGISNULL(0) ? NULL : PG_GETARG_TEXT_PP(0);
> +if (fmt != NULL) {
> +logfmt = text_to_cstring(fmt);
> +if ( (strcmp(logfmt, "stderr") != 0) &&
> +(strcmp(logfmt, "csvlog") != 0) ) {
> +ereport(ERROR,
> +(errmsg("log format %s not supported, possible values are
> stderr or csvlog", logfmt)));
> +PG_RETURN_NULL();
> +}
> +}
> +} else {
> +logfmt = NULL;
> +}
>
> Formatting.  This is unlike PostgreSQL style in multiple ways,
> including cuddled braces, extra parentheses and spaces, and use of
> braces around a single-line block.  Also, this could be written in a
> much less contorted way, like:
>
> if (PG_NARGS() == 0 || PG_ARGISNULL(0))
> logfmt = NULL;
> else
> {
> logfmt = text_to_cstring(PG_GETARG_TEXT_PP(0));
> if (strcmp(logfmt, "stderr") != 0 && strcmp(logfmt, "csvlog") != 0)
>  ereport(...);
> }
Applied.

> Also, the ereport() needs an errcode(), not just an errmsg().
> Otherwise it defaults to ERRCODE_INTERNAL_ERROR, which is not correct.
Added errcode(ERRCODE_INVALID_PARAMETER_VALUE) to the ereport call.

> +/* Check if current log file is present */
> +if (stat(LOG_METAINFO_DATAFILE, _buf) != 0)
> +PG_RETURN_NULL();
>
> Useless test.  The code that follows catches this case anyway and
> handles it the same way.  Which is good, because this has an inherent
> race condition. The previous if (!Logging_collector) test sems fairly
> useless too; unless there's a bug in your syslogger.c code, the file
> won't exist anyway, so we'll return NULL for that reason with no extra
> code needed here.
That's right, both tests have been removed.

> +/*
> +* Read the file to gather current log filename(s) registered
> +* by the syslogger.
> +*/
>
> Whitespace isn't right.
>
> +while (fgets(lbuffer, sizeof(lbuffer), fd) != NULL) {
> +charlog_format[10];
> +int i = 0, space_pos = 0;
> +
> +/* Check for a read error. */
> +if (ferror(fd)) {
>
> More coding style issues.
>
> +ereport(ERROR,
> +(errcode_for_file_access(),
> +errmsg("could not read file \"%s\": %m",
> LOG_METAINFO_DATAFILE)));
> +FreeFile(fd);
> +break;
>
> ereport(ERROR) doesn't return, so the code that follows is pointless.
All issues above are fixed.

> +if (strchr(lbuffer, '\n') != NULL)
> +*strchr(lbuffer, '\n') = '\0';
>
> Probably worth adding 

Re: [HACKERS] Patch to implement pg_current_logfile() function

2016-12-06 Thread Gilles Darold
Le 02/12/2016 à 02:08, Karl O. Pinc a écrit :
> On Sun, 27 Nov 2016 21:54:46 +0100
> Gilles Darold <gilles.dar...@dalibo.com> wrote:
>
>> I've attached the v15 of the patch 
>> I've not applied patch patch_pg_current_logfile-v14.diff.backoff to
>> prevent constant call of logfile_writename() on a busy system (errno =
>> ENFILE | EMFILE).
> I don't think it should be applied and included in the basic
> functionality patch in any case. I think it needs to be submitted as a
> separate patch along with the basic functionality patch.  Backing off
> the retry of the current_logfiles write could be overly fancy and
> simply not needed.
>
>> I think this can be done quite simply by testing if
>> log rotate is still enabled. This is possible because function
>> logfile_rotate() is already testing if  errno = ENFILE | EMFILE and in
>> this case rotation_disabled is set to true. So the following test
>> should do the work:
>>
>>if (log_metainfo_stale && !rotation_disabled)
>>logfile_writename();
>>
>> This is included in v15 patch.
> I don't see this helping much, if at all.
>
> First, it's not clear just when rotation_disabled can be false
> when log_metainfo_stale is true.  The typical execution
> path is for logfile_writename() to be called after rotate_logfile()
> has already set rotation_disabled to true.   logfile_writename()
> is the only place setting log_metainfo_stale to true and
> rotate_logfile() the only place setting rotation_disabled to false.
>
> While it's possible
> that log_metainfo_stale might be set to true when logfile_writename()
> is called from within open_csvlogfile(), this does not help with the
> stderr case.  IMO better to just test for log_metainfo_stale at the
> code snippet above.
>
> Second, the whole point of retrying the logfile_writename() call is
> to be sure that the current_logfiles file is updated before the logs
> rotate.  Waiting until logfile rotation is enabled defeats the purpose.

OK, sorry, I misunderstood your previous post. The attached v16 of the
patch removes your change about log_metainfo_stale and fixes the use of
sscanf to read the file.

It seems that all fixes have been included in this patch.

Regards

-- 
Gilles Darold
Consultant PostgreSQL
http://dalibo.com - http://dalibo.org

diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 0fc4e57..41144cb 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -4169,6 +4169,12 @@ SELECT * FROM parent WHERE key = 2400;
  server log
 
 
+When logs are written to the file-system their paths, names, and
+types are recorded in
+the  file.  This provides
+a convenient way to find and access log content without establishing a
+database connection.
+
 
  Where To Log
 
diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index eca98df..47ca846 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -15444,6 +15444,19 @@ SELECT * FROM pg_ls_dir('.') WITH ORDINALITY AS t(ls,n);
   
 
   
+   pg_current_logfile()
+   text
+   primary log file name in use by the logging collector
+  
+
+  
+   pg_current_logfile(text)
+   text
+   log file name, of log in the requested format, in use by the
+   logging collector
+  
+
+  
pg_my_temp_schema()
oid
OID of session's temporary schema, or 0 if none
@@ -15661,6 +15674,39 @@ SET search_path TO schema , schema, ..
 the time when the postmaster process re-read the configuration files.)

 
+
+pg_current_logile
+   
+
+   
+Logging
+pg_current_logfile function
+   
+
+   
+pg_current_logfile returns, as text,
+the path of either the csv or stderr log file currently in use by the
+logging collector.  This is a path including the
+ directory and the log file name.
+Log collection must be active or the return value
+is NULL.  When multiple logfiles exist, each in a
+different format, pg_current_logfile called
+without arguments returns the path of the file having the first format
+found in the ordered
+list: stderr, csvlog.
+NULL is returned when no log file has any of these
+formats.  To request a specific file format supply,
+as text, either csvlog
+or stderr as the value of the optional parameter. The
+return value is NULL when the log format requested
+is not a configured .
+
+   pg_current_logfiles reflects the content of the
+ file.  All caveats
+regards current_logfiles content are applicable
+to pg_current_logfiles' return value.
+   
+

 pg_my_temp_schema

diff --git a/doc/src/sgml/storage.sgml b/doc/src/sgml/storage.sgml
index 5c52824..928133c 100644
--- a/doc/src/sgml/storage.sgml
+++ b/doc/

Re: [HACKERS] Patch to implement pg_current_logfile() function

2016-11-28 Thread Gilles Darold
Le 28/11/2016 à 18:54, Karl O. Pinc a écrit :
> Hi Gilles,
>
> On Sun, 27 Nov 2016 21:54:46 +0100
> Gilles Darold <gilles.dar...@dalibo.com> wrote:
>
>> I've attached the v15 of the patch that applies your changes/fixes and
>> remove the call to strtok().
> I like the idea of replacing the strtok() call with sscanf()
> but I see problems.
>
> It won't work if log_filename begins with a space because
> (the docs say) that a space in the sscanf() format represents
> any amount of whitespace.
Right

> As written, there's a potential buffer overrun in log_format.
> You could fix this by making log_format as large as lbuffer,
> but this seems clunky.

Sorry, I see that I forgot to apply the limit on the number of
characters read by sscanf().

The whitespace problem makes me think that sscanf() is not the right
solution. I will rewrite this part, but I cannot do that before the
end of the week.
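For illustration, here is a minimal sketch of one possible rewrite
(assumed helper name, illustrative error handling only; not the actual
patch): split each current_logfiles line on the first space with
strchr(), which avoids both sscanf()'s whitespace collapsing and
strtok()'s process-wide static state.

#include <stdbool.h>
#include <string.h>

/*
 * Split one "format path" line read from current_logfiles into its
 * two fields at the first space.  Returns false on a malformed line.
 */
static bool
split_logfile_line(char *lbuffer, char **log_format, char **log_filepath)
{
	char	   *sep = strchr(lbuffer, ' ');

	if (sep == NULL)
		return false;

	*sep = '\0';				/* terminate the format in place */
	*log_format = lbuffer;		/* "stderr" or "csvlog" */
	*log_filepath = sep + 1;	/* rest of the line is the full path */

	/* drop the trailing newline left by fgets(), if any */
	(*log_filepath)[strcspn(*log_filepath, "\n")] = '\0';

	return true;
}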


-- 
Gilles Darold
Consultant PostgreSQL
http://dalibo.com - http://dalibo.org



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [patch] psql tab completion for ALTER DEFAULT PRIVILEGES

2016-11-28 Thread Gilles Darold
Le 20/11/2016 à 15:46, Gilles Darold a écrit :
> Hi,
>
> When tab-completing after ALTER DEFAULT PRIVILEGES ... GRANT|REVOKE,
> currently psql injects completion from the GRANT|REVOKE order, rather
> than the one expected.
>
> A patch is attached. It adds the right completion to GRANT|REVOKE after
> ALTER DEFAULT PRIVILEGES and after FOR ROLE|USER + IN SCHEMA.
>
> If there's no objection I will add it to next commit fest.
>
> Best regards,


Added to the next commitfest. To explain this patch a bit more:
completing the SQL command

ALTER DEFAULT PRIVILEGES FOR ROLE xxx [tab]

currently proposes:

GRANT   REVOKE

but it should also propose IN SCHEMA. Likewise, after ALTER DEFAULT
PRIVILEGES IN SCHEMA it should propose FOR ROLE. For example:

gilles=# ALTER DEFAULT PRIVILEGES IN SCHEMA public FOR ROLE user1 GRANT
ALL ON TABLES TO user2;
ALTER DEFAULT PRIVILEGES

is valid, but the current completion doesn't help with it.


The second issue addressed is the completion after GRANT|REVOKE, which
shows the completions for a standalone GRANT|REVOKE command, although
the elements are not the same in the ALTER DEFAULT PRIVILEGES command.


I mean that completing the command

ALTER DEFAULT PRIVILEGES IN SCHEMA public FOR ROLE user1 GRANT ALL [tab]

proposes the following keywords:

ALL FUNCTIONS IN SCHEMA
ALL SEQUENCES IN SCHEMA
DOMAIN
LANGUAGE
LARGE OBJECT
TABLE
TABLESPACE
ALL TABLES IN SCHEMA
FOREIGN DATA WRAPPER
FOREIGN SERVER
SCHEMA
FUNCTION
SEQUENCE
TYPE
DATABASE  

which is wrong. The proposed keywords should only be:

ON TABLES
ON SEQUENCES
ON FUNCTIONS
ON TYPES

This is what the patch is trying to fix.

-- 
Gilles Darold
Consultant PostgreSQL
http://dalibo.com - http://dalibo.org



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Patch to implement pg_current_logfile() function

2016-11-27 Thread Gilles Darold
Hi Karl,

I've attached the v15 of the patch that applies your changes/fixes and
remove the call to strtok().

I've not applied patch patch_pg_current_logfile-v14.diff.backoff, which
prevents constant calls to logfile_writename() on a busy system (errno =
ENFILE | EMFILE). I think this can be done quite simply by testing
whether log rotation is still enabled. This is possible because
logfile_rotate() already tests for errno = ENFILE | EMFILE and, in that
case, sets rotation_disabled to true. So the following test should do
the work:

   if (log_metainfo_stale && !rotation_disabled)
   logfile_writename();

This is included in the v15 patch.

I've also not added patches
patch_pg_current_logfile-v14.diff.conditional,
patch_pg_current_logfile-v14.diff.logdest_change and
patch_pg_current_logfile-v14.diff.logdest_change-part2, because the case
you are trying to fix with a lot of code cannot occur. You said that
when csv logging is turned back on, the stale csv path is written to
current_logfiles. This is not the case: last_csv_file_name is changed
as soon as the log file is opened, see open_csvlogfile(). After running
your use case I was not able to reproduce the bug you are describing.

Again, patch patch_pg_current_logfile-v14.diff.doc_linux_default-v2 has
not been included because I don't see any reason to talk specifically
about systemd. If you talk about systemd you would also have to talk
about the other stderr handlers on all systems. IMO, saying that
current_logfile is present only if logging_collector is enabled and
log_destination includes stderr and/or csvlog is enough; there is no
need to talk about systemd and the behavior of Linux distributions.

Regards

Le 23/11/2016 à 10:21, Karl O. Pinc a écrit :
> Hi Gilles,
>
> On Sat, 19 Nov 2016 12:58:47 +0100
> Gilles Darold <gilles.dar...@dalibo.com> wrote:
>
>> ... attached v14 of the patch. 
> Attached are patches for your consideration and review.
> (Including your latest v14 patch for completeness.)
>
> Some of the attached patches (like the GUC symbol
> patch you've seen before) are marked to be submitted
> as separate patches to the maintainers when we send
> them code for review.  These need looking over by
> somebody, I hope you, before they get sent on so
> please comment on these if you're not going to look
> at them or if you see a problem with them.   (Or if
> you like them.  :)  Thanks.
>
> I also have comments at the bottom regards problems
> I see but haven't patched.
>
> ---
>
> patch_pg_current_logfile-v14.diff
>
> Applies on top of master.
>
> The current patch.
>
> ---
>
> patch_pg_current_logfile-v14.diff.startup_docs
>
> For consideration of inclusion in "main" patch.
>
> Applies on top of patch_pg_current_logfile-v14.diff
>
> A documentation fix.
>
> (Part of) my previous doc patch was wrong; current_logfiles is not
> unconditionally written on startup.
>
> ---
>
> patch_pg_current_logfile-v14.diff.bool_to_int
>
> Do not include in "main" patch, submit to maintainers separately.
>
> Applies on top of patch_pg_current_logfile-v14.diff
>
> The bool types on the stack in logfile_rotate() are
> my work.  Bools on the stack don't make sense as far
> as hardware goes, so the compiler's optimizer should change
> them to int.  This patch changes the bools to ints
> should that be to someone's taste.
>
> ---
>
> patch_pg_current_logfile-v14.diff.logdest_change
>
> For consideration of inclusion in "main" patch.
>
> Applies on top of patch_pg_current_logfile-v14.diff.
>
> Fixes a bug where, when log_destination changes and the config
> file is reloaded, a no-longer-used logfile path may be written
> to the current_logfiles file.  The chain of events would be
> as follows:
>
> 1) PG configured with csvlog in log_destination. Logs are written.
>
> This makes last_csv_file_name non-NULL.
>
>
> 2) PG config changed to remove csvlog from log_destination
> and SIGHUP sent.
>
> This removes the csvlog path from current_logfiles but does not
> make last_csv_file_name NULL.  last_csv_file_name has the old
> path in it.
>
>
> 3) PG configured to add csvlog back to log_destination and
> SIGHUP sent.
>
> When csvlogging is turned back on, the stale csv path is written
> to current_logfiles.  This is overwritten as soon as some csv logs
> are written because the new csv logfile must be opened, but this is
> still a problem.  Even if it happens to be impossible at the moment
> to get past step 2 to step 3 without having some logs written it seems
> a lot more robust to manually "expire" the last*_file_name variable
> content when log_destination changes.
>
>
> So what the patch does is "scrub" the "last

Re: [HACKERS] Patch to implement pg_current_logfile() function

2016-11-23 Thread Gilles Darold
Le 23/11/2016 à 16:26, Tom Lane a écrit :
> "Karl O. Pinc" <k...@meme.com> writes:
>> Maybe on the 2nd call to strtok() you could pass ""
>> as the 2nd argument?  That'd be a little wonky but
>> the man page does not say you can't have an empty
>> set of delimiters.  On the other hand strtok() is
>> not a perfect choice, you don't want to "collapse"
>> adjacent delimiters in the parsed string or ignore
>> leading spaces.  I'd prefer a strstr() solution.
> I'd stay away from strtok() no matter what.  The process-wide static
> state it implies is dangerous: if you use it, you're betting that
> you aren't interrupting some caller's use, nor will any callee decide
> to use it.  In a system as large as Postgres, that's a bad bet, or
> would be if we didn't discourage use of strtok() pretty hard.
>
> As far as I can find, there are exactly two users of strtok() in
> the backend, and they're already playing with fire because one
> calls the other (look in utils/misc/tzparser.c).  I don't want the
> hazard to get any larger.
>
>   regards, tom lane

Understood, I will remove and replace this call from the patch and more
generally from my programming.

-- 
Gilles Darold
Consultant PostgreSQL
http://dalibo.com - http://dalibo.org



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] [patch] psql tab completion for ALTER DEFAULT PRIVILEGES

2016-11-20 Thread Gilles Darold
Hi,

When tab-completing after ALTER DEFAULT PRIVILEGES ... GRANT|REVOKE,
currently psql injects completion from the GRANT|REVOKE order, rather
than the one expected.

A patch is attached. It adds the right completion to GRANT|REVOKE after
ALTER DEFAULT PRIVILEGES and after FOR ROLE|USER + IN SCHEMA.

If there's no objection I will add it to next commit fest.

Best regards,

-- 
Gilles Darold
Consultant PostgreSQL
http://dalibo.com - http://dalibo.org

diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index b556c00..d0c8eda 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -1574,9 +1574,23 @@ psql_completion(const char *text, int start, int end)
 	/* ALTER DEFAULT PRIVILEGES FOR */
 	else if (Matches4("ALTER", "DEFAULT", "PRIVILEGES", "FOR"))
 		COMPLETE_WITH_LIST2("ROLE", "USER");
-	/* ALTER DEFAULT PRIVILEGES { FOR ROLE ... | IN SCHEMA ... } */
-	else if (Matches6("ALTER", "DEFAULT", "PRIVILEGES", "FOR", "ROLE|USER", MatchAny) ||
-		Matches6("ALTER", "DEFAULT", "PRIVILEGES", "IN", "SCHEMA", MatchAny))
+	else if (Matches4("ALTER", "DEFAULT", "PRIVILEGES", "IN"))
+		COMPLETE_WITH_CONST("SCHEMA");
+	/* ALTER DEFAULT PRIVILEGES FOR ROLE ... */
+	else if (Matches6("ALTER", "DEFAULT", "PRIVILEGES", "FOR", "ROLE|USER",
+MatchAny))
+		COMPLETE_WITH_LIST3("GRANT", "REVOKE", "IN SCHEMA");
+	/* ALTER DEFAULT PRIVILEGES IN SCHEMA ... */
+	else if (Matches6("ALTER", "DEFAULT", "PRIVILEGES", "IN", "SCHEMA",
+MatchAny))
+		COMPLETE_WITH_LIST4("GRANT", "REVOKE", "FOR ROLE", "FOR USER");
+	else if (Matches7("ALTER", "DEFAULT", "PRIVILEGES", "IN", "SCHEMA",
+MatchAny, "FOR"))
+		COMPLETE_WITH_LIST2("ROLE", "USER");
+	else if (Matches9("ALTER", "DEFAULT", "PRIVILEGES", "FOR", "ROLE|USER",
+	MatchAny, "IN", "SCHEMA", MatchAny) ||
+		Matches9("ALTER", "DEFAULT", "PRIVILEGES", "IN", "SCHEMA",
+	MatchAny, "FOR", "ROLE|USER", MatchAny))
 		COMPLETE_WITH_LIST2("GRANT", "REVOKE");
 	/* ALTER DOMAIN  */
 	else if (Matches3("ALTER", "DOMAIN", MatchAny))
@@ -2541,10 +2555,18 @@ psql_completion(const char *text, int start, int end)
 	else if (TailMatches2("FOREIGN", "SERVER"))
 		COMPLETE_WITH_QUERY(Query_for_list_of_servers);
 
-/* GRANT && REVOKE --- is allowed inside CREATE SCHEMA, so use TailMatches */
+/* GRANT && REVOKE --- is allowed inside CREATE SCHEMA and
+   ALTER DEFAULT PRIVILEGES, so use TailMatches */
 	/* Complete GRANT/REVOKE with a list of roles and privileges */
 	else if (TailMatches1("GRANT|REVOKE"))
-		COMPLETE_WITH_QUERY(Query_for_list_of_roles
+		/* With ALTER DEFAULT PRIVILEGES, restrict completion
+			to authorized keywords */
+		if (HeadMatches3("ALTER","DEFAULT","PRIVILEGES"))
+			COMPLETE_WITH_LIST10("SELECT", "INSERT", "UPDATE",
+"DELETE", "TRUNCATE", "REFERENCES", "TRIGGER",
+		"EXECUTE", "USAGE", "ALL");
+		else
+			COMPLETE_WITH_QUERY(Query_for_list_of_roles
 			" UNION SELECT 'SELECT'"
 			" UNION SELECT 'INSERT'"
 			" UNION SELECT 'UPDATE'"
@@ -2585,7 +2607,12 @@ psql_completion(const char *text, int start, int end)
 	 * privilege.
 	 */
 	else if (TailMatches3("GRANT|REVOKE", MatchAny, "ON"))
-		COMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_tsvmf,
+		/* With ALTER DEFAULT PRIVILEGES, restrict completion
+			to authorized keywords */
+		if (HeadMatches3("ALTER","DEFAULT","PRIVILEGES"))
+			COMPLETE_WITH_LIST4("TABLES", "SEQUENCES", "FUNCTIONS", "TYPES");
+		else
+			COMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_tsvmf,
    " UNION SELECT 'ALL FUNCTIONS IN SCHEMA'"
    " UNION SELECT 'ALL SEQUENCES IN SCHEMA'"
    " UNION SELECT 'ALL TABLES IN SCHEMA'"
@@ -2648,7 +2675,8 @@ psql_completion(const char *text, int start, int end)
 	else if ((HeadMatches1("GRANT") && TailMatches1("TO")) ||
 			 (HeadMatches1("REVOKE") && TailMatches1("FROM")))
 		COMPLETE_WITH_QUERY(Query_for_list_of_grant_roles);
-
+	else if (HeadMatches3("ALTER","DEFAULT", "PRIVILEGES") && TailMatches1("TO|FROM"))
+		COMPLETE_WITH_QUERY(Query_for_list_of_grant_roles);
 	/* Complete "GRANT/REVOKE ... ON * *" with TO/FROM */
 	else if (HeadMatches1("GRANT") && TailMatches3("ON", MatchAny, MatchAny))
 		COMPLETE_WITH_CONST("TO");

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Patch to implement pg_current_logfile() function

2016-11-19 Thread Gilles Darold
Le 19/11/2016 à 16:22, Karl O. Pinc a écrit :
> Hi Gilles,
>
> On Tue, 15 Nov 2016 15:15:52 -0600
> "Karl O. Pinc" <k...@meme.com> wrote:
>
>>> On Mon, 7 Nov 2016 23:29:28 +0100
>>> Gilles Darold <gilles.dar...@dalibo.com> wrote:  
>>>>   - Do not write current_logfiles when log_collector is activated
>>>> but log_destination doesn't contained stderr or csvlog. This was
>>>> creating an empty file that can confuse the user. 
>> Whether to write current_logfiles only when there's a stderr or csvlog
>> seems dependent up on whether the current_logfiles file is for human
>> or program use.  If for humans, don't write it unless it has content.
>> If for programs, why make the program always have to check to see
>> if the file exists before reading it?  Failing to check is just one
>> more cause of bugs.
> What are your thoughts on this?  I'm leaning toward current_logfiles
> being for programs, not people.  So it should be present whenever
> logging_collector is on.

My thought is that it is better not to have an empty file even if
log_collector is on. Programs cannot be confused, but humans can: if the
file is present but empty, someone may want to check why it is empty.
Also, a file containing two lines with just the log format and no path
is more confusing than an empty file.

In other words, from a program's point of view, whether the file exists
or not makes no difference when gathering the last log filename; it is
the programmer's responsibility to cover all cases. From a non-expert
user's point of view, there will always be the question: why is this
file empty? If the file is not present, the question instead becomes:
how do I get the current log filename?


-- 
Gilles Darold
Consultant PostgreSQL
http://dalibo.com - http://dalibo.org



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Patch to implement pg_current_logfile() function

2016-11-19 Thread Gilles Darold
Le 16/11/2016 à 17:38, Karl O. Pinc a écrit :
> On Mon, 7 Nov 2016 23:29:28 +0100
> Gilles Darold <gilles.dar...@dalibo.com> wrote:
>
>> Here is the v13 of the patch, ...
> Attached is a doc patch to apply on top of v13.
>
> It adds a lot more detail regards just what is
> in the current_logfiles file and when it's
> in current_logfiles.  I'd like review both for
> language and accuracy, but I'd also like to
> confirm that we really want the documented
> behavior regards what's there at backend startup
> v.s. what's there during normal runtime.  Seems
> ok the way it is to me but...
>
> This patch is also starting to put a lot of information
> inside a graphical table.  I'm fine with this but
> it's a bit of a departure from the other cells of
> the table so maybe somebody else has a better
> suggestion.
>
> Regards,
>
> Karl <k...@meme.com>
> Free Software:  "You don't pay back, you pay forward."
>  -- Robert A. Heinlein
>

All the patches you've submitted on top of the v13 patch have been
applied and are present in the attached v14 of the patch. I have not
included the patches about the GUC changes because I'm not sure that
adding a new file (include/utils/guc_values.h) just for that will be
accepted, or that it will not require more global work to add other GUC
values. However, this patch can perhaps be submitted separately if the
decision is not taken here.
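For reference, a purely hypothetical sketch of what such a header could
look like (the file name and the stderr/csvlog values come from this
thread; the macro names and layout are only an assumption, not the
content of the proposed GUC-values patch):

/* src/include/utils/guc_values.h -- hypothetical sketch only */
#ifndef GUC_VALUES_H
#define GUC_VALUES_H

/* user-visible string values accepted by log_destination */
#define GUC_VALUE_STDERR	"stderr"
#define GUC_VALUE_CSVLOG	"csvlog"

#endif							/* GUC_VALUES_H */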

Regards,

-- 
Gilles Darold
Consultant PostgreSQL
http://dalibo.com - http://dalibo.org

diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index adab2f8..645cbb9 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -4143,6 +4143,12 @@ SELECT * FROM parent WHERE key = 2400;
  server log
 
 
+When logs are written to the file-system their paths, names, and
+types are recorded in
+the  file.  This provides
+a convenient way to find and access log content without establishing a
+database connection.
+
 
  Where To Log
 
diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index 2e64cc4..f5bfe59 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -15443,6 +15443,19 @@ SELECT * FROM pg_ls_dir('.') WITH ORDINALITY AS t(ls,n);
   
 
   
+   pg_current_logfile()
+   text
+   primary log file name in use by the logging collector
+  
+
+  
+   pg_current_logfile(text)
+   text
+   log file name, of log in the requested format, in use by the
+   logging collector
+  
+
+  
pg_my_temp_schema()
oid
OID of session's temporary schema, or 0 if none
@@ -15660,6 +15673,34 @@ SET search_path TO schema , schema, ..
 the time when the postmaster process re-read the configuration files.)

 
+
+pg_current_logile
+   
+
+   
+Logging
+pg_current_logfile function
+   
+
+   
+pg_current_logfile returns, as text,
+the path of either the csv or stderr log file currently in use by the
+logging collector.  This is a path including the
+ directory and the log file name.
+Log collection must be active or the return value
+is NULL.  When multiple logfiles exist, each in a
+different format, pg_current_logfile called
+without arguments returns the path of the file having the first format
+found in the ordered
+list: stderr, csvlog.
+NULL is returned when no log file has any of these
+formats.  To request a specific file format supply,
+as text, either csvlog
+or stderr as the value of the optional parameter. The
+return value is NULL when the log format requested
+is not a configured .
+   
+

 pg_my_temp_schema

diff --git a/doc/src/sgml/storage.sgml b/doc/src/sgml/storage.sgml
index 5c52824..256a14f 100644
--- a/doc/src/sgml/storage.sgml
+++ b/doc/src/sgml/storage.sgml
@@ -60,6 +60,48 @@ Item
  Subdirectory containing per-database subdirectories
 
 
+
+ 
+  
+   current_logfiles
+  
+  
+   Logging
+   current_logfiles file
+  
+  current_logfiles
+ 
+ 
+  A file recording the log file(s) currently written to by the syslogger
+  and the file's log formats, stderr
+  or csvlog. Each line of the file is a space separated list of
+  two elements: the log format and the full path to the log file including the
+  value of .
+
+  During server startup current_logfiles reports
+  the default server log destination.  After server startup the log format
+  must be present in  to be listed in
+  current_logfiles.
+
+  
+Non-PostgreSQL log sources, such as 3rd
+party libraries, which deliver error messages directly to stderr are
+always logged by PostgreSQL to stderr.  Stderr
+is also the destination for any incomplete log messages produced by
+PostgreSQL.  When
+stderr is not in
+,
+current_logfiles does not not contain the name of the
+file where these sorts of log messages are wr

Re: [HACKERS] Patch to implement pg_current_logfile() function

2016-11-12 Thread Gilles Darold
Le 09/11/2016 à 02:51, Robert Haas a écrit :
> On Thu, Nov 3, 2016 at 7:34 PM, Karl O. Pinc <k...@meme.com> wrote:
>> To me, the CURRENT_LOG_FILENAME symbol should contain the name
>> of the current log file.  It does not.  The CURRENT_LOG_FILENAME symbol
>> holds the name of the file which itself contains the name of
>> the log file(s) being written, plus the log file
>> structure of each log file.
> I agree that this is confusing.
>
>> IMO, the name of the log files being written, as well as
>> the type of data structure written into each log file,
>> are meta-information about the logging data.  So maybe
>> the right name is LOG_METAINFO_DATAFILE.
> +1 for something like this.
>

Ok, it will be changed in the next patch.

-- 
Gilles Darold
Consultant PostgreSQL
http://dalibo.com - http://dalibo.org



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Patch to implement pg_current_logfile() function

2016-11-07 Thread Gilles Darold
Le 04/11/2016 à 21:07, Karl O. Pinc a écrit :
> On Fri, 4 Nov 2016 16:58:45 +0100
> Gilles Darold <gilles.dar...@dalibo.com> wrote:
>
>> I attached a v12 patch 
> Attached is a comment patch which improves the comment
> describing CURRENT_LOG_FILENAME.  It's been bugging me.
> I should have made this change long ago when I looked
> at all the other code comments but neglected to.
>
> The comment now matches other documentation.
>
> This applies on top of your v12 patch.

Here is the v13 of the patch; it applies your last comment change and
some other changes:

  - I've reverted the patch removing the call to logfile_writename() in
the open_csvlogfile() function; it is required to write the log filename
at startup when log_destination is set to csvlog. I could not find a
better place right now, but will try to see if we can avoid the call to
logfile_writename().
  - Do not write current_logfiles when log_collector is activated but
log_destination doesn't contain stderr or csvlog. This was creating an
empty file that could confuse the user.
  - I've changed the comment in doc/src/sgml/storage.sgml by adding the
following sentence: "This file is present only when logging_collector is
activated and when at least one of stderr or csvlog value is present in
log_destination."


-- 
Gilles Darold
Consultant PostgreSQL
http://dalibo.com - http://dalibo.org

diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index adab2f8..645cbb9 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -4143,6 +4143,12 @@ SELECT * FROM parent WHERE key = 2400;
  server log
 
 
+When logs are written to the file-system their paths, names, and
+types are recorded in
+the  file.  This provides
+a convenient way to find and access log content without establishing a
+database connection.
+
 
  Where To Log
 
diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index 2e64cc4..f5bfe59 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -15443,6 +15443,19 @@ SELECT * FROM pg_ls_dir('.') WITH ORDINALITY AS t(ls,n);
   
 
   
+   pg_current_logfile()
+   text
+   primary log file name in use by the logging collector
+  
+
+  
+   pg_current_logfile(text)
+   text
+   log file name, of log in the requested format, in use by the
+   logging collector
+  
+
+  
pg_my_temp_schema()
oid
OID of session's temporary schema, or 0 if none
@@ -15660,6 +15673,34 @@ SET search_path TO schema , schema, ..
 the time when the postmaster process re-read the configuration files.)

 
+
+pg_current_logile
+   
+
+   
+Logging
+pg_current_logfile function
+   
+
+   
+pg_current_logfile returns, as text,
+the path of either the csv or stderr log file currently in use by the
+logging collector.  This is a path including the
+ directory and the log file name.
+Log collection must be active or the return value
+is NULL.  When multiple logfiles exist, each in a
+different format, pg_current_logfile called
+without arguments returns the path of the file having the first format
+found in the ordered
+list: stderr, csvlog.
+NULL is returned when no log file has any of these
+formats.  To request a specific file format supply,
+as text, either csvlog
+or stderr as the value of the optional parameter. The
+return value is NULL when the log format requested
+is not a configured .
+   
+

 pg_my_temp_schema

diff --git a/doc/src/sgml/storage.sgml b/doc/src/sgml/storage.sgml
index fddb69b..40ac876 100644
--- a/doc/src/sgml/storage.sgml
+++ b/doc/src/sgml/storage.sgml
@@ -60,6 +60,30 @@ Item
  Subdirectory containing per-database subdirectories
 
 
+
+ 
+  
+   current_logfiles
+  
+  
+   Logging
+   current_logfiles file
+  
+  current_logfiles
+ 
+ 
+  A file recording the current log file(s) used by the syslogger and
+  its log format, stderr or csvlog. Each line
+  of the file is a space separated list of two elements: the log format and
+  the full path to the log file including the value
+  of . The log format must be present
+  in  to be listed in the file. This file
+  is present only when  is
+  activated and when at least one of stderr or
+  csvlog value is present in
+.
+
+
 
  global
  Subdirectory containing cluster-wide tables, such as
diff --git a/src/backend/postmaster/syslogger.c b/src/backend/postmaster/syslogger.c
index fd62d66..6e7bdce 100644
--- a/src/backend/postmaster/syslogger.c
+++ b/src/backend/postmaster/syslogger.c
@@ -146,6 +146,7 @@ static char *logfile_getname(pg_time_t timestamp, const char *suffix);
 static void set_next_rotation_time(void);
 static void sigHupHandler(SIGNAL_ARGS);
 static void sigUsr1Handler(SIGNAL_ARGS);
+static void logfile_writename(char *filename, char *csvfilename);
 
 
 /*
@@ -348,6 +349,13 @@ SysLoggerMain

Re: [HACKERS] Patch to implement pg_current_logfile() function

2016-11-04 Thread Gilles Darold
Le 04/11/2016 à 14:17, Karl O. Pinc a écrit :
> On Mon, 31 Oct 2016 10:19:18 +0100
> Gilles Darold <gilles.dar...@dalibo.com> wrote:
>
>> Le 31/10/2016 à 04:35, Karl O. Pinc a écrit :
>>> Attached is a patch to apply on top of the v10 patch.
>>>
>>> It updates current_logfiles only once per log rotation.
>>> I see no reason to open and write the file twice
>>> if both csvlog and stderr logging is happening.  
>> I do not take the effort to do that because log rotation is not
>> something that occurs too often and I don't want to change the
>> conditional "time_based_rotation" code lines in logfile_rotate() for
>> readability. My though was also that on high load, log rotation is
>> automatically disabled so logfile_writename() is not called and there
>> will be no additional I/O. But why not, if commiters want to save this
>> additional I/O, this patch can be applied.
> Ok.  You didn't put this into your v11 patch so it can be submitted to
> the committers as a separate patch.
>
>>> I don't understand why you're calling 
>>> logfile_writename(last_file_name, last_csv_file_name);
>>> in the SIGHUP code.  Presumably you've already
>>> written the old logfile names to current_logfiles.
>>> On SIGHUP you want to write the new names, not
>>> the old ones.  But the point of the SIGHUP code
>>> is to look for changes in logfile writing and then
>>> call logfile_rotate() to make those changes.  And
>>> logfile_rotate() calls logfile_writename() as appropriate.
>>> Shouldn't the logfile_writename call in the SIGHUP
>>> code be eliminated?  
>> No, when you change the log_destination and reload configuration you
>> have to refresh the content of current_logfiles, in this case no new
>> rotation have been done and you have to write last_file_name and/or
>> last_csv_file_name.
> I don't understand.  Please explain what's wrong with the
> picture I have of how logging operates on receipt of SIGHUP.
> Here is my understanding:
>
> The system is running normally, current_logfiles exists and contains
> the log file paths of the logs presently being written into.  These
> paths end with the file names in last_file_name and/or
> last_csv_file_name.  (I'm assuming throughout my description here that
> log_destination is writing to the file system at all.)
Yes. Also, if log_collector is on and log_destination is not stderr or
csvlog, current_logfiles still exists but is empty.
When log_collector is off this file doesn't exist.

 
> A SIGHUP arrives.  The signal handler, sigHupHandler(), sets the
> got_SIGHUP flag.  Nothing else happens.
>
> The logging process wakes up due to the signal.
> Either there's also log data or there's not.  If there's
> not:
>
> The logging process goes through it's processing loop and finds,
> at line 305 of syslogger.c, got_SIGHUP to be true.
> Then it proceeds to do a bunch of assignment statements.
> If it finds that the log directory or logfile name changed
> it requests immediate log file rotation.  It does this by
> setting the request_rotation flag.  If log_destination
> or logging_collector changed request_rotation is not set.
>
> Then, your patch adds a call to logfile_writename().
> But nothing has touched the last_file_name or last_csv_file_name
> variables.  You rewrite into current_logfiles what's
> already in current_logfiles.

Your picture is good; you just miss the case where we only change
log_destination. In that case the syslogger adds or changes the log file
destination, but rotation_requested is false. If the write to
current_logfiles is conditional on this flag, the file will never
reflect the change until the next rotation; see why in the next answer
below.


If log_destination remains unchanged I agree that I am rewriting the
same information, but I don't think this kind of sporadic I/O justifies
adding code to store the old log_destination value and check whether it
changed, as is done with Log_directory. This is my opinion, but maybe
I'm wrong to consider that this isolated, non-critical I/O doesn't
justify the work.
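To make that trade-off concrete, here is a small standalone sketch of
the alternative described above (stand-in names, constants and stubs,
not the actual syslogger code): remember the previous log_destination
and rewrite current_logfiles only when it has changed.

#include <stdio.h>

#define LOG_DESTINATION_STDERR	1	/* stand-in constants */
#define LOG_DESTINATION_CSVLOG	8

static int	Log_destination = LOG_DESTINATION_STDERR;	/* stand-in GUC */
static int	prev_log_destination = LOG_DESTINATION_STDERR;

/* stand-in for logfile_writename(last_file_name, last_csv_file_name) */
static void
logfile_writename_stub(void)
{
	printf("rewriting current_logfiles\n");
}

/* what the SIGHUP branch could look like with the extra check */
static void
on_sighup(int new_destination)
{
	Log_destination = new_destination;
	if (Log_destination != prev_log_destination)
	{
		logfile_writename_stub();
		prev_log_destination = Log_destination;
	}
}

int
main(void)
{
	on_sighup(LOG_DESTINATION_STDERR | LOG_DESTINATION_CSVLOG); /* rewrites */
	on_sighup(LOG_DESTINATION_STDERR | LOG_DESTINATION_CSVLOG); /* skipped */
	return 0;
}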


> If there is log data in the pipe on SIGHUP
> and it's csv log data then if there's a csv
> log file open that's the file that's written to.
> last_csv_file_name does not change so current_logfiles
> does not need to be re-written.  If there is no csv
> log file open then open_csvlogfile() is called
> and it calls logfile_writename().  No need to
> call logfile_writename() again when processing the
> SIGHUP.

Yes, but the problem here is that logfile_writename() doesn't write the
change, because the test (Log_destination & LOG_DESTINATION_CSVLOG)
returns false; Log_destination is set after the file is su

Re: [HACKERS] Patch to implement pg_current_logfile() function

2016-11-04 Thread Gilles Darold
Le 04/11/2016 à 00:34, Karl O. Pinc a écrit :
> On Mon, 31 Oct 2016 09:26:27 +0100
> Gilles Darold <gilles.dar...@dalibo.com> wrote:
>
>> Le 30/10/2016 à 08:04, Karl O. Pinc a écrit :
>>> Have you given any thought to my proposal to change
>>> CURRENT_LOG_FILENAME to LOG_METAINFO_FILE?  
>> Yes, I don't think the information logged in this file are kind of
>> meta information and CURRENT_LOG_FILENAME seems obvious.
> To me, the CURRENT_LOG_FILENAME symbol should contain the name 
> of the current log file.  It does not.  The CURRENT_LOG_FILENAME symbol
> holds the name of the file which itself contains the name of 
> the log file(s) being written, plus the log file
> structure of each log file.
>
> IMO, the name of the log files being written, as well as
> the type of data structure written into each log file,
> are meta-information about the logging data.  So maybe
> the right name is LOG_METAINFO_DATAFILE.
>
> If you're not happy with making this change that's fine.
> If not, I'd like to make mention of the symbol name to
> the committers.

If it needs to be changed I would prefer something like
CURRENT_LOG_INFO, but this is not really important. Please mention it,
and the committer will choose whether to change it.


-- 
Gilles Darold
Consultant PostgreSQL
http://dalibo.com - http://dalibo.org



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Patch to implement pg_current_logfile() function

2016-11-03 Thread Gilles Darold
Le 03/11/2016 à 16:15, Robert Haas a écrit :
> On Wed, Oct 26, 2016 at 11:25 PM, Karl O. Pinc <k...@meme.com> wrote:
>> What it comes down to is I don't buy the adequacy of the
>> ".csv" suffix test and think that "keeping things simple" now
>> is a recipe for future breakage, or at least significant future
>> complication and confusion when it come to processing logfile
>> content.
> Sounds like a plausible argument (although I haven't looked at the code).

Yes, the current v11 patch doesn't rely on any file-name extension
anymore; instead the log format is now stored alongside the log file
name.

>> My thoughts are as follows:  Either you know the log format because
>> you configured the cluster or you don't.  If you don't know the log
>> format having the log file is halfway useless.  You can do something
>> like back it up, but you can't ever look at it's contents (in some
>> sense) without knowing what data structure you're looking at.
>>
>> Therefore pg_current_logfile() without any arguments is, in the sense
>> of any sort of automated processing of the logfile content, useless.
> Yeah, but it's not useless in general.  I've certainly had cases where
> somebody gave me access to their PostgreSQL cluster to try to fix a
> problem, and I'm struggling to find the log files.  Being able to ask
> the PostgreSQL cluster "where are all of the files to which you are
> logging?" sounds helpful.
>

+1

The current implementation always returns the log file in use by the
logging collector when pg_current_logfile() is called without arguments.
If stderr and csvlog are both enabled in log_destination, it returns the
stderr log. If you need to specifically ask for the csvlog file, you can
pass it as the pg_current_logfile() parameter.
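As an illustration of that "where are the logs?" use case, a minimal
libpq client could ask the server for the csvlog path like this
(connection string and error handling are illustrative only; this is
not part of the patch):

#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
	PGconn	   *conn = PQconnectdb("dbname=postgres");
	PGresult   *res;

	if (PQstatus(conn) != CONNECTION_OK)
	{
		fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
		PQfinish(conn);
		return 1;
	}

	res = PQexec(conn, "SELECT pg_current_logfile('csvlog')");
	if (PQresultStatus(res) == PGRES_TUPLES_OK &&
		PQntuples(res) == 1 && !PQgetisnull(res, 0, 0))
		printf("csvlog: %s\n", PQgetvalue(res, 0, 0));
	else
		printf("no csvlog file currently in use\n");

	PQclear(res);
	PQfinish(conn);
	return 0;
}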


-- 
Gilles Darold
Consultant PostgreSQL
http://dalibo.com - http://dalibo.org



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Patch to implement pg_current_logfile() function

2016-11-02 Thread Gilles Darold
Le 02/11/2016 à 03:51, Karl O. Pinc a écrit :
> On Mon, 31 Oct 2016 09:26:27 +0100
> Gilles Darold <gilles.dar...@dalibo.com> wrote:
>
>> Le 30/10/2016 à 08:04, Karl O. Pinc a écrit :
>> Attached patch v11 include your patch.
>>
>>> I have some questions about logfile_writename():
>>>
>>>  Why does the logfile_open() call fail silently?
>> logfile_open() "fail silently" in logfile_writename(), like in other
>> parts of syslogger.c where it is called, because this is a function()
>> that already report a message when an error occurs ("could not open
>> log file..."). I think I have already replied to this question.
> Please apply the attached patch on top of your v11 patch.
> It simulates a logfile_open() failure.  Upon simulated failure you do
> not get a "currrent_logfile" file, and neither do you get any
> indication of any problems anywhere in the logs.  It's failing
> silently.

Please have a look at line 1137 of syslogger.c on HEAD; you will see
that, in case of failure, logfile_open() reports a FATAL or LOG error
with the message:

errmsg("could not open log file \"%s\": %m", filename);

I don't think adding more error messages after this is useful; that is
why logfile_writename() just returns without doing anything. Other calls
to logfile_open() have the exact same behavior.

For a real example, create the temporary file and change its
file-system permissions:

   touch /usr/local/postgresql/data/current_logfiles.tmp
   chmod 400 /usr/local/postgresql/data/current_logfiles.tmp

In this failure case it prints:

2016-11-02 09:56:00.791 CET [6321] LOG:  could not open log file
"current_logfiles.tmp": Permission denied

I don't understand what more you are expecting. I agree that "open log"
can be a little confusing, but this is also a file where we log the
current log file name, so I can live with that.


-- 
Gilles Darold
Consultant PostgreSQL
http://dalibo.com - http://dalibo.org



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Patch to implement pg_current_logfile() function

2016-10-31 Thread Gilles Darold
Le 31/10/2016 à 04:35, Karl O. Pinc a écrit :
> Hi Gilles,
>
> On Sat, 29 Oct 2016 22:00:08 +0200
> Gilles Darold <gilles.dar...@dalibo.com> wrote:
>
>> The attached v10 of the current_logfiles patch 
> Attached is a patch to apply on top of the v10 patch.
>
> It updates current_logfiles only once per log rotation.
> I see no reason to open and write the file twice
> if both csvlog and stderr logging is happening.

I did not take the trouble to do that because log rotation is not
something that occurs very often, and for readability I don't want to
change the conditional "time_based_rotation" code in logfile_rotate().
My thought was also that under high load log rotation is automatically
disabled, so logfile_writename() is not called and there will be no
additional I/O. But why not: if committers want to save this additional
I/O, this patch can be applied.
 
> I have 2 more questions.
>
> I don't understand why you're calling 
> logfile_writename(last_file_name, last_csv_file_name);
> in the SIGHUP code.  Presumably you've already
> written the old logfile names to current_logfiles.
> On SIGHUP you want to write the new names, not
> the old ones.  But the point of the SIGHUP code
> is to look for changes in logfile writing and then
> call logfile_rotate() to make those changes.  And
> logfile_rotate() calls logfile_writename() as appropriate.
> Shouldn't the logfile_writename call in the SIGHUP
> code be eliminated?

No: when you change log_destination and reload the configuration you
have to refresh the content of current_logfiles. In this case no new
rotation has been done, and you have to write last_file_name and/or
last_csv_file_name.

> Second, and I've not paid really close attention here,
> you're calling logfile_writename() with last_file_name
> and last_csv_file_name in a number of places.  Are you
> sure that last_file_name and last_csv_file_name is reset
> to the empty string if stderr/csvlog logging is turned
> off and the config file re-read?  You might be recording
> that logging is going somewhere that's been turned off
> by a config change.  I've not noticed last_file_name and
> (especially) last_csv_file_name getting reset to the empty
> string.
In the patch I do not care whether the values of last_file_name and
last_csv_file_name have been reset or not, because depending on
log_destination they are either written to current_logfiles or not. When
they are written, they always contain the current log filename, because
the call to logfile_writename() always appears after they are assigned
the newly rotated filename.

> FYI, ultimately, in order to re-try writes to current_logfiles
> after ENFILE and EMFILE failure, I'm thinking that I'll probably
> wind up with logfile_writename() taking no arguments.  It will
> always write last_file_name and last_csv_file_name.  Then it's
> a matter of making sure that these variables contain accurate
> values.  It would be helpful to let me know if you have any
> insight regards config file re-read and resetting of these
> variables before I dive into writing code which retries writes to
> current_logfiles.

As explained above, last_file_name and last_csv_file_name always
contain the last log file name, even after repeated logfile_writename()
failures. Those variables may be updated if a log rotation occurs before
a new call to logfile_writename(). In case of ENFILE or EMFILE failure,
log rotation is disabled and logfile_writename() is not called, so
last_file_name and last_csv_file_name will still contain the last log
file name.


-- 
Gilles Darold
Consultant PostgreSQL
http://dalibo.com - http://dalibo.org



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Patch to implement pg_current_logfile() function

2016-10-31 Thread Gilles Darold
Le 30/10/2016 à 08:04, Karl O. Pinc a écrit :
> On Sat, 29 Oct 2016 22:00:08 +0200
> Gilles Darold <gilles.dar...@dalibo.com> wrote:
>
>> The attached v10 of the current_logfiles patch include your last
>> changes on documentation but not the patch on v9 about the
>> user-supplied GUC value. I think the v10 path is ready for committers
>> and that the additional patch to add src/include/utils/guc_values.h
>> to define user GUC values is something that need to be taken outside
>> this one. Imo, thoses GUC values (stderr, csvlog) are not expected to
>> change so often to require a global definition, but why not, if
>> committers think this must be done I can add it to a v11 patch.
> I agree that the guc_values.h patches should be submitted separately
> to the committers, when your patch is submitted.  Creating symbols
> for these values is a matter of coding style they should resolve.
> (IMO it's not whether the values will change, it's whether
> someone reading the code can read the letters "stdout" and know
> to what they refer and where to find related usage elsewhere in 
> the code.)
>
> I'll keep up the guc_values.h patches and have them ready for
> submission along with your patch.
>
> I don't think your patch is quite ready for submission to
> the committers.
>
> Attached is a patch to be applied on top of your v10 patch
> which does basic fixup to logfile_writename().

The attached v11 patch includes your patch.

>
> I have some questions about logfile_writename():
>
>  Why does the logfile_open() call fail silently?
>  Why not use ereport() here?  (With a WARNING level.)

logfile_open() "fail silently" in logfile_writename(), like in other
parts of syslogger.c where it is called, because this is a function()
that already report a message when an error occurs ("could not open log
file..."). I think I have already replied to this question.

>  Why do the ereport() invocations all have a LOG level?
>  You're not recovering and the errors are unexpected
>  so I'd think WARNING would be the right level.
>  (I previously, IIRC, suggested LOG level -- only if
>  you are retrying and recovering from an error.)

If you look deeper into syslogger.c, all messages are reported at LOG
level. I guess the reason is to avoid calling the syslogger repeatedly.
I think I have already replied to this question in the thread too.

> Have you given any thought to my proposal to change
> CURRENT_LOG_FILENAME to LOG_METAINFO_FILE?
Yes. I don't think the information logged in this file is really
meta-information, and CURRENT_LOG_FILENAME seems obvious.



-- 
Gilles Darold
Consultant PostgreSQL
http://dalibo.com - http://dalibo.org

diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index adab2f8..645cbb9 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -4143,6 +4143,12 @@ SELECT * FROM parent WHERE key = 2400;
  server log
 
 
+When logs are written to the file-system their paths, names, and
+types are recorded in
+the  file.  This provides
+a convenient way to find and access log content without establishing a
+database connection.
+
 
  Where To Log
 
diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index 2e64cc4..f5bfe59 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -15443,6 +15443,19 @@ SELECT * FROM pg_ls_dir('.') WITH ORDINALITY AS t(ls,n);
   
 
   
+   pg_current_logfile()
+   text
+   primary log file name in use by the logging collector
+  
+
+  
+   pg_current_logfile(text)
+   text
+   log file name, of log in the requested format, in use by the
+   logging collector
+  
+
+  
pg_my_temp_schema()
oid
OID of session's temporary schema, or 0 if none
@@ -15660,6 +15673,34 @@ SET search_path TO schema , schema, ..
 the time when the postmaster process re-read the configuration files.)

 
+
+pg_current_logile
+   
+
+   
+Logging
+pg_current_logfile function
+   
+
+   
+pg_current_logfile returns, as text,
+the path of either the csv or stderr log file currently in use by the
+logging collector.  This is a path including the
+ directory and the log file name.
+Log collection must be active or the return value
+is NULL.  When multiple logfiles exist, each in a
+different format, pg_current_logfile called
+without arguments returns the path of the file having the first format
+found in the ordered
+list: stderr, csvlog.
+NULL is returned when no log file has any of these
+formats.  To request a specific file format supply,
+as text, either csvlog
+or stderr as the value of the optional parameter. The
+return value is NULL when the log format requested
+is not a configured .
+   
+
   

Re: [HACKERS] Patch to implement pg_current_logfile() function

2016-10-29 Thread Gilles Darold
Le 29/10/2016 à 14:38, Karl O. Pinc a écrit :
> On Fri, 28 Oct 2016 10:03:37 +0200
> Gilles Darold <gilles.dar...@dalibo.com> wrote:
>
>> ...
>> v9 of the patch, attached here.
> Attached are 2 more documentation patchs to apply on
> top of your v9 patch.
>
>
> patch_pg_current_logfile-v9.diff.doc_current_logfiles
>
> Explains the current_logfiles file in the
> narrative documentation.  It's not like I want
> to toot our horn here.  I'm afraid that otherwise
> no one will notice the feature.
>
>
> patch_pg_current_logfile-v9.diff.doc_indexes
>
> Fixes an index entry and add more.
>
> Regards,
>
> Karl <k...@meme.com>
> Free Software:  "You don't pay back, you pay forward."
>  -- Robert A. Heinlein

The attached v10 of the current_logfiles patch includes your last
documentation changes, but not the v9 patch about the user-supplied GUC
values. I think the v10 patch is ready for committers, and that the
additional patch adding src/include/utils/guc_values.h to define user
GUC values is something that needs to be handled outside this one. IMO,
those GUC values (stderr, csvlog) are not expected to change often
enough to require a global definition, but why not: if committers think
this must be done I can add it to a v11 patch.

Best regards,

-- 
Gilles Darold
Consultant PostgreSQL
http://dalibo.com - http://dalibo.org

diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index adab2f8..645cbb9 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -4143,6 +4143,12 @@ SELECT * FROM parent WHERE key = 2400;
  server log
 
 
+When logs are written to the file-system their paths, names, and
+types are recorded in
+the  file.  This provides
+a convenient way to find and access log content without establishing a
+database connection.
+
 
  Where To Log
 
diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index 2e64cc4..f5bfe59 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -15443,6 +15443,19 @@ SELECT * FROM pg_ls_dir('.') WITH ORDINALITY AS t(ls,n);
   
 
   
+   pg_current_logfile()
+   text
+   primary log file name in use by the logging collector
+  
+
+  
+   pg_current_logfile(text)
+   text
+   log file name, of log in the requested format, in use by the
+   logging collector
+  
+
+  
pg_my_temp_schema()
oid
OID of session's temporary schema, or 0 if none
@@ -15660,6 +15673,34 @@ SET search_path TO schema , schema, ..
 the time when the postmaster process re-read the configuration files.)

 
+
+pg_current_logile
+   
+
+   
+Logging
+pg_current_logfile function
+   
+
+   
+pg_current_logfile returns, as text,
+the path of either the csv or stderr log file currently in use by the
+logging collector.  This is a path including the
+ directory and the log file name.
+Log collection must be active or the return value
+is NULL.  When multiple logfiles exist, each in a
+different format, pg_current_logfile called
+without arguments returns the path of the file having the first format
+found in the ordered
+list: stderr, csvlog.
+NULL is returned when no log file has any of these
+formats.  To request a specific file format supply,
+as text, either csvlog
+or stderr as the value of the optional parameter. The
+return value is NULL when the log format requested
+is not a configured .
+   
+

 pg_my_temp_schema

diff --git a/doc/src/sgml/storage.sgml b/doc/src/sgml/storage.sgml
index fddb69b..1cfb9da 100644
--- a/doc/src/sgml/storage.sgml
+++ b/doc/src/sgml/storage.sgml
@@ -60,6 +60,28 @@ Item
  Subdirectory containing per-database subdirectories
 
 
+
+ 
+  
+   current_logfiles
+  
+  
+   Logging
+   current_logfiles file
+  
+  current_logfiles
+ 
+ 
+  A file recording the current log file(s) used by the syslogger and
+  its log format, stderr or csvlog. Each line
+  of the file is a space separated list of two elements: the log format and
+  the full path to the log file including the value
+  of . The log format must be present
+  in  to be listed in the file. This file
+  is present only when  is
+  activated.
+
+
 
  global
  Subdirectory containing cluster-wide tables, such as
diff --git a/src/backend/postmaster/syslogger.c b/src/backend/postmaster/syslogger.c
index fd62d66..83afc34 100644
--- a/src/backend/postmaster/syslogger.c
+++ b/src/backend/postmaster/syslogger.c
@@ -146,6 +146,7 @@ static char *logfile_getname(pg_time_t timestamp, const char *suffix);
 static void set_next_rotation_time(void);
 static void sigHupHandler(SIGNAL_ARGS);
 static void sigUsr1Handler(SIGNAL_ARGS);
+static void logfile_writename(char *filename, char *csvfilename);
 
 
 /*
@@ -348,6 +349,13 @@ SysLoggerMain(int argc, char *argv[])
 			

Re: [HACKERS] Patch to implement pg_current_logfile() function

2016-10-28 Thread Gilles Darold
Le 28/10/2016 à 05:09, Karl O. Pinc a écrit :
> Hi Gilles,
>
> On Thu, 27 Oct 2016 19:03:11 +0200
> Gilles Darold <gilles.dar...@dalibo.com> wrote:
>
>> The current v8 of the patch
> For your consideration.
>
> Attached is a patch to apply to v8 of your patch.
>
> I moved the call to logfile_writename() in write_syslogger_file()
> into the open_csvlogfile().  That's where the new filename
> is generated and it makes sense to record the new name
> upon generation.

Yes, it makes sense. The last documentation change, the comment change
and this move have been included in the v9 of the patch, attached here.

Thanks a lot.

-- 
Gilles Darold
Consultant PostgreSQL
http://dalibo.com - http://dalibo.org

diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index 2e64cc4..d577bac 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -15443,6 +15443,19 @@ SELECT * FROM pg_ls_dir('.') WITH ORDINALITY AS t(ls,n);
   
 
   
+   pg_current_logfile()
+   text
+   primary log file name in use by the logging collector
+  
+
+  
+   pg_current_logfile(text)
+   text
+   log file name, of log in the requested format, in use by the
+   logging collector
+  
+
+  
pg_my_temp_schema()
oid
OID of session's temporary schema, or 0 if none
@@ -15660,6 +15673,29 @@ SET search_path TO schema , schema, ..
 the time when the postmaster process re-read the configuration files.)

 
+
+pg_current_logile
+   
+
+   
+pg_current_logfile returns, as text,
+the path of either the csv or stderr log file currently in use by the
+logging collector.  This is a path including the
+ directory and the log file name.
+Log collection must be active or the return value
+is NULL.  When multiple logfiles exist, each in a
+different format, pg_current_logfile called
+without arguments returns the path of the file having the first format
+found in the ordered
+list: stderr, csvlog.
+NULL is returned when no log file has any of these
+formats.  To request a specific file format supply,
+as text, either csvlog
+or stderr as the value of the optional parameter. The
+return value is NULL when the log format requested
+is not a configured .
+   
+

 pg_my_temp_schema

diff --git a/doc/src/sgml/storage.sgml b/doc/src/sgml/storage.sgml
index fddb69b..5bea196 100644
--- a/doc/src/sgml/storage.sgml
+++ b/doc/src/sgml/storage.sgml
@@ -61,6 +61,18 @@ Item
 
 
 
+ current_logfiles
+ A file recording the current log file(s) used by the syslogger and
+  its log format, stderr or csvlog. Each line
+  of the file is a space separated list of two elements: the log format and
+  the full path to the log file including the value
+  of . The log format must be present
+  in  to be listed in the file. This file
+  is present only when  is
+  activated.
+
+
+
  global
  Subdirectory containing cluster-wide tables, such as
  pg_database
diff --git a/src/backend/postmaster/syslogger.c b/src/backend/postmaster/syslogger.c
index fd62d66..83afc34 100644
--- a/src/backend/postmaster/syslogger.c
+++ b/src/backend/postmaster/syslogger.c
@@ -146,6 +146,7 @@ static char *logfile_getname(pg_time_t timestamp, const char *suffix);
 static void set_next_rotation_time(void);
 static void sigHupHandler(SIGNAL_ARGS);
 static void sigUsr1Handler(SIGNAL_ARGS);
+static void logfile_writename(char *filename, char *csvfilename);
 
 
 /*
@@ -348,6 +349,13 @@ SysLoggerMain(int argc, char *argv[])
 rotation_disabled = false;
 rotation_requested = true;
 			}
+
+			/*
+			 * Force rewriting last log filename when reloading configuration,
+			 * log_destination and logging_collector may have been changed. Do
+			 * it right now to not wait for the next file rotation.
+			 */
+			logfile_writename(last_file_name, last_csv_file_name);
 		}
 
 		if (Log_RotationAge > 0 && !rotation_disabled)
@@ -513,8 +521,13 @@ SysLogger_Start(void)
 	pid_t		sysloggerPid;
 	char	   *filename;
 
-	if (!Logging_collector)
+	if (!Logging_collector) {
+		/* Logging collector is not enabled. We don't know where messages are
+		 * logged.  Remove outdated file holding the current log filenames.
+		 */
+		unlink(CURRENT_LOG_FILENAME);
 		return 0;
+	}
 
 	/*
 	 * If first time through, create the pipe which will receive stderr
@@ -574,6 +587,8 @@ SysLogger_Start(void)
 
 	syslogFile = logfile_open(filename, "a", false);
 
+	logfile_writename(filename, NULL);
+
 	pfree(filename);
 
 #ifdef EXEC_BACKEND
@@ -1098,6 +1113,8 @@ open_csvlogfile(void)
 		pfree(last_csv_file_name);
 
 	last_csv_file_name = filename;
+
+	logfile_writename(last_file_name, last_csv_file_name);
 }
 
 /*
@@ -1209,6 +1226,8 @@ logfile_rotate(bool time_based_rotation, int size_rotation_for)
 			return;
 		}
 
+		logfile_writename(filename, csvfilename);
+
 		fclose(syslogFile);
 		syslogFile = 

Re: [HACKERS] Patch to implement pg_current_logfile() function

2016-10-27 Thread Gilles Darold
Le 27/10/2016 à 14:14, Karl O. Pinc a écrit :
> On Thu, 27 Oct 2016 11:07:43 +0200
> Christoph Berg <m...@debian.org> wrote:
>
>> Re: Karl O. Pinc 2016-10-27 <20161026222513.74cd3...@slate.meme.com>
>>> But what if current_logfile contains only a single line?  What
>>> sort of file format does the logfile have?  If you don't know
>>> you can't process the logfile content.
>>>
>>> When there's multiple lines in current_logfile your script might
>>> be looking for a logfile in a particular format.  How is the
>>> script supposed to know the file format of each logfile listed?  
>> My idea here would be to always write out two lines, the first for
>> stderr, the second for csvlog, and leave the unused one empty. That's
>> easy to parse from shell scripts.
> That'd work.

I don't think we have to store two lines when just one of the log
formats is enabled. If you grep for csvlog in the file and it is not
present, you simply know that csvlog is not enabled. If the log format
is present but without a path, I think it just adds useless information.

The current v8 of the patch only registers a "format path" line for
each enabled log destination; the description of the current_logfiles
file has been changed to:

"
A file recording the current log file(s) used by the syslogger and its
log format, stderr or csvlog. Each line of the file is a space separated
list of two elements: the log format and the full path to the log file
including the value of log_directory. The log format must be present in
log_destination to be listed in the file. This file is present only when
logging_collector is activated.
"

The file has been renamed current_logfile*s*.


> Agreed.  I can't see adding any more meta-information other than
> file format.
>
> I'm partial to "format  path" over just line number, because
> it's more explicit.  Either way works.

This is the format used. Ex:

~$ cat /usr/local/postgresql/data/current_logfile
stderr pg_log/postgresql-2016-10-27_185100.log
csvlog pg_log/postgresql-2016-10-27_185100.csv


> Your comment makes me wonder if pg_current_logfile(), without
> arguments, should instead be "SHOW current_logfile;".
>
> I think not, but have no rationale.  Could this be a good idea?
>

-1. SHOW is used to display run-time parameter values; in our case those
are log_destination + log_directory + log_filename. current_logfile is a
filename, not a parameter name.
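
For comparison, the only things SHOW or current_setting() can report here are
the raw parameters, where log_filename typically still contains strftime
escapes rather than a resolved file name, e.g.:

SELECT current_setting('log_destination') AS log_destination,
       current_setting('log_directory')   AS log_directory,
       current_setting('log_filename')    AS log_filename;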

Thanks again for your reviews.

-- 
Gilles Darold
Consultant PostgreSQL
http://dalibo.com - http://dalibo.org

diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index 2e64cc4..bdae4c6 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -15442,6 +15442,18 @@ SELECT * FROM pg_ls_dir('.') WITH ORDINALITY AS t(ls,n);
configuration load time
   
 
+   pg_current_logfile()
+   text
+   primary log file name in use by the logging collector
+  
+
+  
+   pg_current_logfile(text)
+   text
+   log file name, of log in the requested format, in use by the
+   logging collector
+  
+
   
pg_my_temp_schema()
oid
@@ -15660,6 +15672,29 @@ SET search_path TO schema , schema, ..
 the time when the postmaster process re-read the configuration files.)

 
+
+pg_current_logile
+   
+
+   
+pg_current_logfile returns, as text,
+the path of either the csv or stderr log file currently in use by the
+logging collector.  This is a path including the
+ directory and the log file name.
+Log collection must be active or the return value
+is NULL.  When multiple logfiles exist, each in a
+different format, pg_current_logfile called
+without arguments returns the path of the file having the first format
+found in the ordered
+list: stderr, csvlog.
+NULL is returned when no log file has any of these
+formats.  To request a specific file format supply,
+as text, either csvlog
+or stderr as the value of the optional parameter. The
+return value is NULL when the log format requested
+is not a configured .
+   
+

 pg_my_temp_schema

diff --git a/doc/src/sgml/storage.sgml b/doc/src/sgml/storage.sgml
index fddb69b..6e8ae05 100644
--- a/doc/src/sgml/storage.sgml
+++ b/doc/src/sgml/storage.sgml
@@ -61,6 +61,17 @@ Item
 
 
 
+ current_logfiles
+ A file recording the current log file(s) used by the syslogger and
+  its log format, stderr or csvlog. Each line of the file is a space separated
+  list of two elements: the log format and the full path to the log file
+  including the value of . The log format
+  must be present in  to be listed in the
+  file. This file is present only when 
+  is activated.
+
+
+
  global
  Subdirectory containing cluster-wide tables, such as
  pg_database
diff --git a/src/backend/postmaster/syslogger.c

Re: [HACKERS] Patch to implement pg_current_logfile() function

2016-10-26 Thread Gilles Darold
On 26/10/2016 at 05:30, Karl O. Pinc wrote:
> On Tue, 18 Oct 2016 14:18:36 +0200
> Gilles Darold <gilles.dar...@dalibo.com> wrote:
>
>> Here is the v6 of the patch, here is the description of the
>> pg_current_logfile() function, I have tried to keep thing as simple as
>> possible:
>>
>> pg_current_logfile( [ destination text ] )
>> -
> Attached is a doc patch to apply on top of your v6 patch.

Thanks a lot for the documentation fixes; I've also adjusted some of your
changes, see the v7 of the patch and the explanations below.

> Changes are, roughly:
>
> Put pg_log_file in alphabetical order in the file name listing
> and section body.

This file is now named current_logfile; I have changed the name in the
documentation, especially in storage.sgml. Sorry for missing that in my
v6 patch.

Another change in the documentation is the explanation of the values stored
in this file. Each value is now a path, log_directory/log_filename, and no
longer the log file name alone. This helps to get the full path of the log
at the system command level. This is also the value returned by the
pg_current_logfile() function, so the file can be read directly with a simple:

 SELECT pg_read_file(pg_current_logfile());

It can also be useful when the file is included in a backup, to find the
last log file corresponding to the backup.


> I also have a couple of preliminary comments.
>
> It seems like the code is testing for whether the
> logfile is a CSV file by looking for a '.csv' suffix
> on the end of the file name.  This seems a little fragile.
> Shouldn't it be looking at something set internally when the
> log_destination config declaration is parsed?

Right; even though syslogger.c always adds the .csv suffix to the log file,
this method would fail if a stderr log file happened to use that extension.
It has been fixed in the v7 patch, which no longer relies on the extension
suffix.

> Since pg_log_file may contain only one line, and that
> line may be either the filename of the csv log file or
> the file name of the stderr file name it's impossible
> to tell whether that single file is in csv or stderr
> format.  I suppose it might be possible based on file
> name suffix, but Unix expressly ignores file name
> extensions and it seems unwise to force dependence
> on them.  Perhaps each line could begin with
> the "type" of the file, either 'csv' or 'stderr'
> followed by a space and the file name? 

The current_logfile file may contain two lines, one for csvlog and another
for stderr, when both are defined in log_destination. As said above, the
csvlog file will always have the .csv suffix; I guess that is enough to
know the format, but I agree it will not be true in all cases. To keep
things simple I prefer to register only the path to the log file; external
tools can easily detect the format, or can ask for the path of a specific
log format using, for example, SELECT pg_current_logfile('stderr');. This
is my point of view, but if there is a majority for adding the log format
to current_logfile, why not.

I have copied your other comments here to limit the number of messages:

> You're writing Unix eol characters into pg_log_file.  (I think.)
> Does this matter on MS Windows?  (I'm not up on MS Windows,
> and haven't put any thought into this at all.  But didn't
> want to forget about the potential issue.)

This doesn't matter, as the file is opened in text mode (see
logfile_open() in syslogger.c) and the LF written by fprintf() will be
automatically converted to CRLF on MS Windows.

> While you're at it, it wouldn't hurt to provide another
> function that tells you the format of the file returned
> by pg_current_logfile(), since pg_current_logfile()
> called without arguments could return either a stderr
> formatted file or a csvlog formatted file.

The log format can be checked using something like:

postgres=# SELECT pg_current_logfile(), pg_current_logfile('csvlog');
           pg_current_logfile            |           pg_current_logfile
------------------------------------------+-----------------------------------------
 pg_log/postgresql-2016-10-26_231700.log | pg_log/postgresql-2016-10-26_231700.csv
(1 row)

postgres=# SELECT CASE WHEN (pg_current_logfile() =
pg_current_logfile('stderr')) THEN 'stderr' ELSE 'csvlog' END AS log_format;
 log_format
------------
 stderr
(1 row)

but if we want, we can add another internal function called like:

SELECT pg_log_format(pg_current_logfile());

that would return stderr or csvlog, but I'm not sure this is necessary.


Thanks for your review; let me know if there's anything to adapt.

-- 
Gilles Darold
Consultant PostgreSQL
http://dalibo.com - http://dalibo.org

diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index 2e64cc4..bdae4c6 100644
--- a/doc/src/sgm

Re: [HACKERS] Patch to implement pg_current_logfile() function

2016-10-18 Thread Gilles Darold
On 14/10/2016 at 20:45, Christoph Berg wrote:
> Re: Gilles Darold 2016-10-14 <be9571dc-ae7c-63d4-6750-d7243cb5f...@dalibo.com>
>> Agree, the usability of the current version is really a pain. What I've
>> thought is that the function could return the csv log in all case when
>> csvlog is set in log_destination and the stderr log file when csvlog is
>> not defined. We can also have a parametrer to ask for a specific log
>> format, like pg_current_logfile('csv') or pg_current_logfile('stderr').
> Good idea to add an optional parameter.
>
> pg_current_logfile(type text DEFAULT 'auto')
>
> pg_current_logfile()
> pg_current_logfile('stderr')
> pg_current_logfile('csvlog')
> pg_current_logfile('auto')
>
> I think I'd prefer the stderr variant by default when both variants
> are configured.
>
> Christoph
>
>

Here is the v6 of the patch, along with the description of the
pg_current_logfile() function; I have tried to keep things as simple as
possible:

pg_current_logfile( [ destination text ] )
-

Returns the name of the current log file used by the logging collector,
as text. Log collection must be active or the return value is undefined.
When csvlog is used as the log destination, the csv filename is returned;
when it is set to stderr, the stderr filename is returned. When both are
used, the stderr filename is returned.

There is an optional parameter of type text that determines which log
filename to return according to the log format; values can be 'csvlog'
or 'stderr'. When the requested log format is not used as a log
destination, the return value is undefined.

Example of use when log_destination = 'csvlog,stderr'

postgres=# select pg_current_logfile();
   pg_current_logfile   
-
 pg_log/postgresql-2016-10-18_121326.log
(1 row)

postgres=# select pg_current_logfile('csvlog');

   pg_current_logfile   
-
 pg_log/postgresql-2016-10-18_121326.csv
(1 row)

postgres=# select pg_current_logfile('stderr');
   pg_current_logfile   
-
 pg_log/postgresql-2016-10-18_121326.log
(1 row)

postgres=# select pg_current_logfile('csv');
ERROR:  log format csv not supported, possible values are stderr or csvlog

postgres=# select pg_read_file(pg_current_logfile());
                                            pg_read_file
------------------------------------------------------------------------------------------------------
 2016-10-18 14:06:30.576 CEST [8113] ERROR:  invalid input syntax for integer: "A" at character 10    +
 2016-10-18 14:06:30.576 CEST [8113] STATEMENT:  select 1='A';                                        +
 
(1 row)

I've also fixed the crash when current_logfile is removed by hand.


-- 
Gilles Darold
Consultant PostgreSQL
http://dalibo.com - http://dalibo.org

diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index a588350..3cd9535 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -15480,6 +15480,12 @@ SELECT * FROM pg_ls_dir('.') WITH ORDINALITY AS t(ls,n);
   
 
   
+   pg_current_logfile( destination text)
+   text
+   current log file used by the logging collector
+  
+
+  
session_user
name
session user name
@@ -15686,6 +15692,23 @@ SET search_path TO schema , schema, ..
 pg_notification_queue_usage

 
+   
+pg_current_logfile
+   
+
+   
+pg_current_logfile returns the name of the
+current log file used by the logging collector, as text.
+Log collection must be active or the return value is undefined. When
+csvlog is used as log destination, the csv filename is returned, when
+it is set to stderr, the stderr filename is returned. When both are
+used, it returns the stderr filename.
+There is an optional parameter of type text to determines
+the log filename to return following the log destination, values can
+be 'csvlog' or 'stderr'. When the log format asked is not used as log
+destination then the return value is undefined.
+   
+

 pg_listening_channels returns a set of names of
 asynchronous notification channels that the current session is listening
diff --git a/doc/src/sgml/storage.sgml b/doc/src/sgml/storage.sgml
index 1b812bd..8f909a7 100644
--- a/doc/src/sgml/storage.sgml
+++ b/doc/src/sgml/storage.sgml
@@ -170,6 +170,13 @@ last started with
   (this file is not present after server shutdown)
 
 
+
+ pg_log_file
+ A file recording the current log file(s) used by the syslogger
+  when log collection is active (this file is not present when logging_collector is not activated)
+
+
+
 
 
 
diff --git a/src/backend/postmaster/syslogger.c b/src/backend/postmaster/syslogger.c
index fd62d66..7b09349 100644
--- a/src/backend/p

Re: [HACKERS] Patch to implement pg_current_logfile() function

2016-10-14 Thread Gilles Darold
On 14/10/2016 at 18:50, Christoph Berg wrote:
> Re: To Gilles Darold 2016-10-14 <20161014125406.albrfj3qldiwj...@msg.df7cb.de>
>> A better design might be to return two columns instead:
>>
>> postgres=# select * from pg_current_logfile();
>>  stderr|  csvlog
>> ---+---
>>  pg_log/postgresql-2016-10-07_1646.log | 
>> pg_log/postgresql-2016-10-07_1646.csv
>> (The alternative could be to return an extra column:
>>
>> postgres=# select * from pg_current_logfile();
>>   type  | logfile
>> ---
>>  stderr | pg_log/postgresql-2016-10-07_1646.log
>>  csvlog | pg_log/postgresql-2016-10-07_1646.csv
> Usability-wise it might be better to have pg_current_logfile() just
> return the name of the text log (and possibly a HINT that there's a
> csv log if stderr is disabled), and have a second function
> pg_current_csvlog() return the csv log name.
>
> The choice which design is better will probably depend on if we think
> these functions are meant for interactive use (-> 2 functions), or for
> automated use (-> 2 columns). My guess would be that interactive use
> is more important here.

Agreed, the usability of the current version is really a pain. What I've
thought is that the function could always return the csv log file when
csvlog is set in log_destination, and the stderr log file when csvlog is
not defined. We could also have a parameter to ask for a specific log
format, like pg_current_logfile('csv') or pg_current_logfile('stderr').
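
To make this concrete, the call forms sketched above would be used like this
(only a sketch of the proposed interface at this point in the thread, with
the argument spellings as written above):

SELECT pg_current_logfile();          -- csv log file when csvlog is in log_destination, otherwise the stderr file
SELECT pg_current_logfile('csv');     -- explicitly request the csv log file
SELECT pg_current_logfile('stderr');  -- explicitly request the stderr log file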

-- 
Gilles Darold
Consultant PostgreSQL
http://dalibo.com - http://dalibo.org



Re: [HACKERS] proposal: psql \setfileref

2016-10-11 Thread Gilles Darold
On 11/10/2016 at 07:53, Pavel Stehule wrote:
>
>
> 2016-10-10 19:50 GMT+02:00 Pavel Stehule <pavel.steh...@gmail.com
> <mailto:pavel.steh...@gmail.com>>:
>
>
>
>     2016-10-10 15:17 GMT+02:00 Gilles Darold <gilles.dar...@dalibo.com
> <mailto:gilles.dar...@dalibo.com>>:
>
> Le 10/10/2016 à 14:38, Daniel Verite a écrit :
> >   Gilles Darold wrote:
> >
> >>postgres=# \setfileref b /dev/random
> >>postgres=# insert into test (:b);
> >>
> >> Process need to be killed using SIGTERM.
> > This can be fixed by setting sigint_interrupt_enabled to true
> > before operating on the file. Then ctrl-C would be able to
> cancel
> > the command.
>
> If we do not prevent the user to be able to load special files
> that
> would be useful yes.
>
>
> I don't think so special files has some sense, just I had not time
> fix this issue. I will do it early - I hope.
>
>
> fresh patch - only regular files are allowed
>
> Regards
>
> Pavel
>

Hi,

The patch works as expected; the last two remaining issues have been
fixed. I've changed the status to "Ready for committers".

Thanks Pavel.

-- 
Gilles Darold
Consultant PostgreSQL
http://dalibo.com - http://dalibo.org



Re: [HACKERS] proposal: psql \setfileref

2016-10-10 Thread Gilles Darold
On 10/10/2016 at 14:38, Daniel Verite wrote:
>   Gilles Darold wrote:
>
>>postgres=# \setfileref b /dev/random
>>postgres=# insert into test (:b);
>>
>> Process need to be killed using SIGTERM.
> This can be fixed by setting sigint_interrupt_enabled to true
> before operating on the file. Then ctrl-C would be able to cancel
> the command.

If we do not prevent the user from loading special files, that
would be useful, yes.

-- 
Gilles Darold
Consultant PostgreSQL
http://dalibo.com - http://dalibo.org





Re: [HACKERS] proposal: psql \setfileref

2016-10-10 Thread Gilles Darold
On 09/10/2016 at 11:48, Pavel Stehule wrote:
> hi
>
> 2016-10-04 9:18 GMT+02:00 Gilles Darold <gilles.dar...@dalibo.com
> <mailto:gilles.dar...@dalibo.com>>:
>
> Le 03/10/2016 à 23:23, Gilles Darold a écrit :
> > Le 03/10/2016 à 23:03, Robert Haas a écrit :
> >> On Mon, Oct 3, 2016 at 3:54 PM, Gilles Darold
> <gil...@darold.net <mailto:gil...@darold.net>> wrote:
> >>> 4) An other problem is that like this this patch will allow
> anyone to upload into a
> >>> column the content of any system file that can be read by
> postgres system user
> >>> and then allow non system user to read its content.
> >> I thought this was a client-side feature, so that it would let a
> >> client upload any file that the client can read, but not things
> that
> >> can only be read by the postgres system user.
> >>
> > Yes that's right, sorry for the noise, forget this fourth report.
> >
>
> After some more though there is still a security issue here. For a
> PostgreSQL user who also have login acces to the server, it is
> possible
> to read any file that the postgres system user can read, especially a
> .pgpass or a recovery.conf containing password.
>
>
> here is new update - some mentioned issues are fixed + regress tests
> and docs

Looks very good to me, minus the following two points:

1) I think \setfileref must use the same syntax as the \set command

postgres=# \setfileref a testfile.txt
postgres=# \setfileref
a = 'testfile.txt'

postgres=# \setfileref
...
a = ^'testfile.txt'

I think it would be better to prefix the variable value with the ^ here
too, like in the \set report, even if we already know that the variables
reported by this command are file references.


2) You still allow special files to be used. I understand that this is
the user's responsibility, but I think preventing it would be a wise precaution.

postgres=# \setfileref b /dev/random
postgres=# insert into test (:b);

The process needs to be killed using SIGTERM.

However, if this last point is not critical and should be handled by the
user, I think this patch is ready to be reviewed by a committer once the
first point is fixed.

Regards,

-- 
Gilles Darold
Consultant PostgreSQL
http://dalibo.com - http://dalibo.org



Re: [HACKERS] Patch to implement pg_current_logfile() function

2016-10-07 Thread Gilles Darold
On 03/10/2016 at 16:09, Christoph Berg wrote:
> Hi Gilles,
>
> I've just tried v4 of the patch. The OID you picked for
> pg_current_logfile doesn't work anymore, but after increasing it
> randomly by 1, it compiles. I like the added functionality,
> especially that "select pg_read_file(pg_current_logfile());" just
> works.

I've changed the OID and some other things in this v5 of the patch; see
below.

> What bugs me is the new file "pg_log_file" in PGDATA. It clutters the
> directory listing. I wouldn't know where else to put it, but you might
> want to cross-check that with the thread that is trying to reshuffle
> the directory layout to make it easier to exclude files from backups.
> (Should this file be part of backups?)
>
> It's probably correct to leave the file around on shutdown (given it's
> still a correct pointer). But there might be a case for removing it on
> startup if logging_collector isn't active anymore.

The file has been renamed to current_logfile and is now removed at
startup if logging_collector is not active. The file can be excluded
from a backup; otherwise, if it is restored, it will be removed or
overwritten at startup. Perhaps the file can provide useful information
in a backup, namely the last log file active at backup time, but I'm not
sure that is of any interest.

I'm not sure which thread is talking about reshuffling the directory
layout; please give me a pointer if it is not the thread about renaming
pg_xlog and pg_clog. In the future, if we get a directory to store status
files or files that must be excluded from backups, it will be easy to move
this file there. I will follow such a change.

> Also, pg_log_file is tab-completion-unfriendly, it conflicts with
> pg_log/. Maybe name it current_logfile?
Right, done.

> Another thing that might possibly be improved is csv logging:
>
> # select pg_read_file(pg_current_logfile());
>  pg_read_file  
> ───
>  LOG:  ending log output to stderr↵
>  HINT:  Future log output will go to log destination "csvlog".↵
>
> -rw---  1 cbe staff 1011 Okt  3 15:06 postgresql-2016-10-03_150602.csv
> -rw---  1 cbe staff   96 Okt  3 15:06 postgresql-2016-10-03_150602.log
>
> ... though it's unclear what to do if both stderr and csvlog are
> selected.
>
> Possibly NULL should be returned if only "syslog" is selected.
> (Maybe remove pg_log_file once 'HINT:  Future log output will go to
> log destination "syslog".' is logged?)

I had totally missed that log_destination can be set to stderr and
csvlog at the same time, so pg_current_logfile() might return two
filenames in this case. I've changed the function to return a set of
records reporting the last stderr or csv log file, or both. Another
major change is that the current log filename list is also updated after
a configuration reload, not just after a startup or a log rotation.
So if you switch from stderr to csvlog, or to both, you will see the
change in current_logfile immediately instead of waiting for the next
log rotation.

  * log_destination set to csvlog only:

postgres=# select * from pg_current_logfile();
  pg_current_logfile  
---
 pg_log/postgresql-2016-10-07_1646.csv
(1 row)

* log_destination set to stderr only:

postgres=# select pg_reload_conf();
 pg_reload_conf

 t
(1 row)

postgres=# select * from pg_current_logfile();
  pg_current_logfile  
---
 pg_log/postgresql-2016-10-07_1647.log
(1 row)

* log_destination set to both stderr,csvlog:

postgres=# select pg_reload_conf();
 pg_reload_conf

 t
(1 row)

postgres=# select * from pg_current_logfile();
  pg_current_logfile  
---
 pg_log/postgresql-2016-10-07_1648.log
 pg_log/postgresql-2016-10-07_1648.csv
(2 rows)

When logging_collector is disabled, this function returns null.

As the return type has changed to a set, the query to read the file
needs to change too:

postgres=# SELECT pg_read_file(log) FROM pg_current_logfile() b(log);
                                             pg_read_file
-------------------------------------------------------------------------------------------------------
 LOG:  duration: 0.182 ms  statement: select  pg_read_file(log) from pg_current_logfile() file(log);  +
 
(1 row)

I can change the return type to a single text[] if that looks better.
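
With a text[] return type, the equivalent of the query above would be
something like this (purely hypothetical, since the function as posted
returns a set):

SELECT pg_read_file(f) FROM unnest(pg_current_logfile()) AS t(f);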

Thanks

-- 
Gilles Darold
Consultant PostgreSQL
http://dalibo.com - http://dalibo.org

diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index a58835

Re: [HACKERS] proposal: psql \setfileref

2016-10-04 Thread Gilles Darold
On 04/10/2016 at 17:29, Pavel Stehule wrote:
>
>
> 2016-10-04 9:18 GMT+02:00 Gilles Darold <gilles.dar...@dalibo.com
> <mailto:gilles.dar...@dalibo.com>>:
>
> Le 03/10/2016 à 23:23, Gilles Darold a écrit :
> > Le 03/10/2016 à 23:03, Robert Haas a écrit :
> >> On Mon, Oct 3, 2016 at 3:54 PM, Gilles Darold
> <gil...@darold.net <mailto:gil...@darold.net>> wrote:
> >>> 4) An other problem is that like this this patch will allow
> anyone to upload into a
> >>> column the content of any system file that can be read by
> postgres system user
> >>> and then allow non system user to read its content.
> >> I thought this was a client-side feature, so that it would let a
> >> client upload any file that the client can read, but not things
> that
> >> can only be read by the postgres system user.
> >>
> > Yes that's right, sorry for the noise, forget this fourth report.
> >
>
> After some more though there is still a security issue here. For a
> PostgreSQL user who also have login acces to the server, it is
> possible
> to read any file that the postgres system user can read, especially a
> .pgpass or a recovery.conf containing password.
>
>
> This patch doesn't introduce any new server side functionality, so if
> there is some vulnerability, then it is exists now too.
>

It doesn't exist; that was my system user, which has extended
privileges. You can definitely forget the fourth point.

-- 
Gilles Darold
Consultant PostgreSQL
http://dalibo.com - http://dalibo.org



Re: [HACKERS] proposal: psql \setfileref

2016-10-04 Thread Gilles Darold
On 03/10/2016 at 23:23, Gilles Darold wrote:
> Le 03/10/2016 à 23:03, Robert Haas a écrit :
>> On Mon, Oct 3, 2016 at 3:54 PM, Gilles Darold <gil...@darold.net> wrote:
>>> 4) An other problem is that like this this patch will allow anyone to 
>>> upload into a
>>> column the content of any system file that can be read by postgres system 
>>> user
>>> and then allow non system user to read its content.
>> I thought this was a client-side feature, so that it would let a
>> client upload any file that the client can read, but not things that
>> can only be read by the postgres system user.
>>
> Yes that's right, sorry for the noise, forget this fourth report.
>

After some more thought, there is still a security issue here. For a
PostgreSQL user who also has login access to the server, it is possible
to read any file that the postgres system user can read, especially a
.pgpass or a recovery.conf containing a password.


-- 
Gilles Darold
Consultant PostgreSQL
http://dalibo.com - http://dalibo.org





Re: [HACKERS] proposal: psql \setfileref

2016-10-03 Thread Gilles Darold
On 03/10/2016 at 23:03, Robert Haas wrote:
> On Mon, Oct 3, 2016 at 3:54 PM, Gilles Darold <gil...@darold.net> wrote:
>> 4) An other problem is that like this this patch will allow anyone to upload 
>> into a
>> column the content of any system file that can be read by postgres system 
>> user
>> and then allow non system user to read its content.
> I thought this was a client-side feature, so that it would let a
> client upload any file that the client can read, but not things that
> can only be read by the postgres system user.
>

Yes that's right, sorry for the noise, forget this fourth report.

-- 
Gilles Darold
Consultant PostgreSQL
http://dalibo.com - http://dalibo.org





Re: [HACKERS] proposal: psql \setfileref

2016-10-03 Thread Gilles Darold
The following review has been posted through the commitfest application:
make installcheck-world:  tested, passed
Implements feature:   tested, failed
Spec compliant:   tested, failed
Documentation:not tested

Contents & Purpose
==

This patch adds a new type of psql variable: file references, which can be
set using the \setfileref command. The value of the named variable is the
path to the referenced file. It allows simple insertion of large data
without the need for manual escaping or the LO API. Use:

postgres=# create table test (col1 bytea);
postgres=# \setfileref a ~/avatar.gif
postgres=# insert into test values(:a);

The content of the file is returned as bytea; text output can be used when
escaping is not required, in which case use convert_from() to obtain a
text value.

postgres=# create table test (col1 text);
postgres=# \setfileref a ~/README.txt
postgres=# insert into test values(convert_from(:a, 'utf8'));

The content of file reference variables is not persistent in memory.

File reference variables can be listed using \set:

postgres=# \set
...
b = ^'~/README.txt'

An empty file produces an empty bytea (\x).

Initial Run
===

The patch applies cleanly to HEAD and doesn't break anything, at least the
regression tests all pass successfully.

Feedback on testing
===

I see several problems.

1) An error reading the referenced file returns the system error plus a
syntax error on the variable:

postgres=# \setfileref a /etc/sudoers
postgres=# insert into test values (:a);
/etc/sudoers: Permission denied
ERROR:  syntax error at or near ":"
LINE 1: insert into t1 values (:a);

2) Trying to load a file larger than the 1GB limit gives the following
behavior (file size 2254110720 bytes):

postgres=# \setfileref b /home/VMs/ISOs/sol-10-u11-ga-x86-dvd.iso
postgres=# insert into test values (:b);
INSERT 0 1
postgres=# ANALYZE test;
postgres=# SELECT relname, relkind, relpages, 
pg_size_pretty(relpages::bigint*8192) FROM pg_class WHERE relname='test';
 relname | relkind | relpages | pg_size_pretty 
---------+---------+----------+----------------
 test    | r       |        1 | 8192 bytes
(1 row)

postgres=# select * from test;
 col1 
--
 \x
(1 row)

This should not insert an empty bytea but might raise an error instead.

Trying to load a smaller file whose total size after bytea escaping exceeds
the 1GB limit (file size 666894336 bytes) reports:

postgres=# \setfileref a /var/ISOs/CentOS-7-x86_64-Minimal-1503-01.iso
postgres=# insert into t1 values (1, :a, 'tr');
ERROR:  out of memory
DETAIL:  Cannot enlarge string buffer containing 0 bytes by 1333788688 
more bytes.
Time: 1428.657 ms (00:01.429)

This raises an error because bytea escaping increases the content size,
which is the expected behavior.

3) The path does not allow piping from a system command, which is secure
behavior, but it is quite easy to perform a DoS by using special files like:

postgres=# \setfileref b /dev/random
postgres=# insert into t1 (:b);

This will never end until we kill the psql session. I think we must also
prevent non-regular files from being referenced, using S_ISREG.

I think all three of these errors can be caught and prevented at variable
declaration time using some tests on the file, such as:

is it readable?
is it a regular file?
is its size small enough?

so that an error is reported directly when the file reference is set.

4) Another problem is that, as it stands, this patch will allow anyone to
upload into a column the content of any system file that can be read by
the postgres system user, and then allow non-system users to read its
content. For example, connected as a basic PostgreSQL-only user:

testdb=> select current_user;
 current_user 
--
 user1
(1 row)
testdb=> \setfileref b /etc/passwd
testdb=> insert into test values (:b);
INSERT 0 1

then to read the file:

testdb=> select convert_from(col1, 'utf8') from t1; 

Maybe the referenced files should be limited to some part of the filesystem,
or the feature limited to superusers only.

5) There is no documentation at all on this feature. Here is a proposal
for inclusion in doc/src/sgml/ref/psql-ref.sgml:

\setfileref name value
Sets the internal variable name as a reference to the file path
set as value. To unset a variable, use the \unset command.

File references are shown as the file path prefixed with the ^ character
when using the \set command alone.

Valid variable names can contain letters, digits, and underscores.
See the section Variables below for details. Variable names are
 

Re: [HACKERS] Patch to implement pg_current_logfile() function

2016-06-28 Thread Gilles Darold
On 07/04/2016 at 08:30, Karl O. Pinc wrote:
> On Thu, 7 Apr 2016 01:13:51 -0500
> "Karl O. Pinc" <k...@meme.com> wrote:
>
>> On Wed, 6 Apr 2016 23:37:09 -0500
>> "Karl O. Pinc" <k...@meme.com> wrote:
>>
>>> On Wed, 6 Apr 2016 22:26:13 -0500
>>> "Karl O. Pinc" <k...@meme.com> wrote:  
>>>> On Wed, 23 Mar 2016 23:22:26 +0100
>>>> Gilles Darold <gilles.dar...@dalibo.com> wrote:
>>>> 
>>>>> Thanks for the reminder, here is the v3 of the patch after a
>>>>> deeper review and testing. It is now registered to the next
>>>>> commit fest under the System Administration topic.
>> This is what I see at the moment.  I'll wait for replies
>> before looking further and writing more.
> Er, one more thing.  Isn't it true that in logfile_rotate()
> you only need to call store_current_log_filename() when
> logfile_open() is called with a "w" mode, and never need to
> call it when logfile_open() is called with an "a" mode?
>
>
> Karl <k...@meme.com>
> Free Software:  "You don't pay back, you pay forward."
>  -- Robert A. Heinlein
>
>

Hi Karl,

Thank you very much for the patch review, and please accept my apologies
for the long response delay. I have been traveling since the end of April
and had totally forgotten this patch. I have applied all your useful
feedback to the patch and attached a new one (v4) to this email:

- Fix the missing  in doc/src/sgml/func.sgml
- Rewrite the pg_current_logfile description as suggested
- Remove comment in src/backend/postmaster/syslogger.c
- Use pgrename to first write the filename to a temporary file
before overwriting the last log file.
- Rename store_current_log_filename() into logfile_writename() -
logfile_getname() is used to retrieve the filename.
- Use logfile_open() to enable CRLF line endings on Windows
- Change the log level for when it can't create pg_log_file from
WARNING to LOG
- Remove the NOTICE message when the syslogger is not enabled; the
function returns null.

On the other questions:

> "src/backend/postmaster/syslogger.c expects to see fopen() fail with
ENFILE and EMFILE.  What will you do if you get these?"

- Nothing. If the problem occurs during the log rotation call, then
log file rotation is disabled, so logfile_writename() will not be called.
The case where the problem occurs between the rotation call and the
logfile_writename() call is possible, but I don't think it would be
useful to try again; in that case the log filename will be updated
during the next rotation.

> "Have you given any thought as to when logfile_rotate() is called? 
Since logfile_rotate() itself logs with ereport(), it would _seem_ safe
to ereport() from within your store_current_log_filename(), called from
within logfile_rotate()."

- Other parts of logfile_rotate() use ereport(), so I guess it is safe
to use it here.

> "The indentation of the ereport(), in the part that continues over
multiple lines"

- This was my first thought, but seeing what is done in the other
calls to ereport(), I think I have done it the right way.

> "Isn't it true that in logfile_rotate() you only need to call
store_current_log_filename() when logfile_open() is called with a "w"
mode, and never need to call it when logfile_open() is called with an
"a" mode?"

- No, that is not true: append mode is used when logfile_rotate() reuses
an existing file, but the filename still changes.


Best regards,

-- 
Gilles Darold
Consultant PostgreSQL
http://dalibo.com - http://dalibo.org

diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index 3f627dc..7100881 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -15293,6 +15293,12 @@ SELECT * FROM pg_ls_dir('.') WITH ORDINALITY AS t(ls,n);
   
 
   
+   pg_current_logfile()
+   text
+   current log file used by the logging collector
+  
+
+  
session_user
name
session user name
@@ -15499,6 +15505,17 @@ SET search_path TO schema , schema, ..
 pg_notification_queue_usage

 
+   
+pg_current_logfile
+   
+
+   
+pg_current_logfile returns the name of the
+current log file used by the logging collector, as
+text. Log collection must be active or the
+return value is undefined.
+   
+

 pg_listening_channels returns a set of names of
 asynchronous notification channels that the current session is listening
diff --git a/doc/src/sgml/storage.sgml b/doc/src/sgml/storage.sgml
index 1b812bd..7dae000 100644
--- a/doc/src/sgml/storage.sgml
+++ b/doc/src/sgml/storage.sgml
@@ -170,6 +170,13 @@ last started with
   (this file is not present after server shutdown)
 
 
+
+ pg_log_file
+ A file recording the current log file used by the syslogg

Re: [HACKERS] Patch to implement pg_current_logfile() function

2016-03-23 Thread Gilles Darold
On 22/03/2016 at 14:44, Michael Paquier wrote:
> On Sat, Mar 12, 2016 at 12:46 AM, Gilles Darold
> <gilles.dar...@dalibo.com> wrote:
>> Here is the patch rewritten to use alternate file
>> $PGDATA/pg_log_filename to store the current log filename used by
>> syslogger. All examples used in the first mail of this thread work the
>> exact same way. If there's no other remarks, I will add the patch to the
>> next commit fest.
> Please be sure to register this patch to the next CF:
> https://commitfest.postgresql.org/10/
> Things are hot enough with 9.6, so this will only be considered in 9.7.

Thanks for the reminder, here is the v3 of the patch after a deeper
review and testing. It is now registered to the next commit fest under
the System Administration topic.

Fixes in this patch are:

- The output file has been renamed to PGDATA/pg_log_file

- The log level of the warning emitted when the logging collector is not
active has been changed to NOTICE

postgres=# select pg_current_logfile();
NOTICE:  current log can not be reported, log collection is not active
 pg_current_logfile
---------------------
 
(1 row)

- The log level for file access errors in store_current_log_filename() in
src/backend/postmaster/syslogger.c has been set to WARNING; using the
ERROR level forced the backend to stop with a FATAL error.

- Information about the PGDATA/pg_log_file file has been added to the
storage file layout in doc/src/sgml/storage.sgml


-- 
Gilles Darold
Consultant PostgreSQL
http://dalibo.com - http://dalibo.org

diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index ae93e69..155e76b 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -15158,6 +15158,11 @@ SELECT * FROM pg_ls_dir('.') WITH ORDINALITY AS t(ls,n);
(0 if not called, directly or indirectly, from inside a trigger)
   
 
+  pg_current_logfile()
+   text
+   current log file used by the logging collector
+  
+
   
session_user
name
@@ -15365,6 +15370,16 @@ SET search_path TO schema , schema, ..
 pg_notification_queue_usage

 
+   
+pg_current_logfile
+   
+
+   
+pg_current_logfile returns the name of the current log
+file used by the logging collector, as a text. Log collection
+must be active.
+   
+

 pg_listening_channels returns a set of names of
 asynchronous notification channels that the current session is listening
diff --git a/doc/src/sgml/storage.sgml b/doc/src/sgml/storage.sgml
index 9b2e09e..6284d54 100644
--- a/doc/src/sgml/storage.sgml
+++ b/doc/src/sgml/storage.sgml
@@ -170,6 +170,13 @@ last started with
   (this file is not present after server shutdown)
 
 
+
+ pg_log_file
+ A file recording the current log file used by the syslogger
+  when log collection is active
+
+
+
 
 
 
diff --git a/src/backend/postmaster/syslogger.c b/src/backend/postmaster/syslogger.c
index e7e488a..c3bf672 100644
--- a/src/backend/postmaster/syslogger.c
+++ b/src/backend/postmaster/syslogger.c
@@ -145,6 +145,7 @@ static char *logfile_getname(pg_time_t timestamp, const char *suffix);
 static void set_next_rotation_time(void);
 static void sigHupHandler(SIGNAL_ARGS);
 static void sigUsr1Handler(SIGNAL_ARGS);
+static void store_current_log_filename(char *filename);
 
 
 /*
@@ -571,6 +572,9 @@ SysLogger_Start(void)
 
 	syslogFile = logfile_open(filename, "a", false);
 
+	/* store information about current log filename used by log collection */
+	store_current_log_filename(filename);
+
 	pfree(filename);
 
 #ifdef EXEC_BACKEND
@@ -1209,6 +1213,9 @@ logfile_rotate(bool time_based_rotation, int size_rotation_for)
 		fclose(syslogFile);
 		syslogFile = fh;
 
+		/* store information about current log filename used by log collection */
+		store_current_log_filename(filename);
+
 		/* instead of pfree'ing filename, remember it for next time */
 		if (last_file_name != NULL)
 			pfree(last_file_name);
@@ -1253,6 +1260,9 @@ logfile_rotate(bool time_based_rotation, int size_rotation_for)
 		fclose(csvlogFile);
 		csvlogFile = fh;
 
+		/* store information about current log filename used by log collection */
+		store_current_log_filename(csvfilename);
+
 		/* instead of pfree'ing filename, remember it for next time */
 		if (last_csv_file_name != NULL)
 			pfree(last_csv_file_name);
@@ -1362,3 +1372,35 @@ sigUsr1Handler(SIGNAL_ARGS)
 
 	errno = save_errno;
 }
+
+
+/*
+ * Store the name of the file where current log messages are written when
+ * log collector is enabled. Useful to find the name of the current log file
+ * when a time-based rotation is defined. 
+ */
+static void
+store_current_log_filename(char *filename)
+{
+	FILE   *fh;
+	charlogpathfilename[MAXPGPATH];
+
+	snprintf(logpathfilename, sizeof(logpathfilename), "%s",
+		CURRENT_LOG_FILENAME);
+	if ((fh = fopen(logpathfilename, "w")) == NULL)
+	{
+	   ereport(WARNING,
+			   (errcode_for_file_access(),
+		

Re: [HACKERS] Patch to implement pg_current_logfile() function

2016-03-11 Thread Gilles Darold
On 11/03/2016 at 15:22, Tom Lane wrote:
> Gilles Darold <gilles.dar...@dalibo.com> writes:
>> Le 11/03/2016 10:49, Shulgin, Oleksandr a écrit :
>>> Would it make sense to have it as a symlink instead?
>> The only cons I see is that it can be more "difficult" with some
>> language to gather the real path, but do we really need it? There is
>> also little time where the symlink doesn't exist, this is when it needs
>> to be removed before being recreated to point to the new log file.
> Yeah, a symlink is impossible to update atomically (on most platforms
> anyway).  Plain file containing the pathname seems better.
>
> Another point is that we might not necessarily want *only* the pathname in
> there.  postmaster.pid has accreted more stuff over time, and this file
> might too.  I can see wanting the syslogger PID in there, for example,
> so that onlookers can detect a totally stale file.  (Not proposing this
> right now, just pointing out that it's a conceivable future feature.)
>
>   regards, tom lane

Here is the patch rewritten to use an alternate file, $PGDATA/pg_log_filename,
to store the current log filename used by the syslogger. All examples used in
the first mail of this thread work exactly the same way. If there are no
other remarks, I will add the patch to the next commit fest.


Here are some additional examples with this feature.

To obtain the filling percentage of the log file when log_rotation_size
is used:

postgres=# SELECT pg_current_logfile(), (select setting::int*1000 from
pg_settings where name='log_rotation_size'), a.size size, 
((a.size*100)/(select setting::int*1000 from pg_settings where
name='log_rotation_size')) percent_used FROM
pg_stat_file(pg_current_logfile())
a(size,access,modification,change,creation,isdir);

-[ RECORD 1 ]--+
pg_current_logfile | pg_log/postgresql-2016-03-11_160817.log
log_rotation_size  | 1024
size   | 1246000
percent_used   | 12

This can help to know whether the file is about to be rotated. Or, if you
use time-based rotation:

postgres=# select pg_current_logfile(), (select setting::int*60 from
pg_settings where name='log_rotation_age')
log_rotation_age,a.access,a.modification, (((extract(epoch from
a.modification) - extract(epoch from a.access)) * 100) / (select
setting::int*60 from pg_settings where name='log_rotation_age'))
percent_used FROM pg_stat_file(pg_current_logfile())
a(size,access,modification,change,creation,isdir);
-[ RECORD 1 ]--+
pg_current_logfile | pg_log/postgresql-2016-03-11_162143.log
log_rotation_age   | 3600
access | 2016-03-11 16:21:43+01
modification   | 2016-03-11 16:33:12+01
percent_used   | 19.13889

Best regards,

-- 
Gilles Darold
Consultant PostgreSQL
http://dalibo.com - http://dalibo.org

diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index 4b5ee81..e6a18fc 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -15026,6 +15026,11 @@ SELECT * FROM pg_ls_dir('.') WITH ORDINALITY AS t(ls,n);
channel names that the session is currently listening on
   
 
+  pg_current_logfile()
+   text
+   current log file used by the logging collector
+  
+
   
pg_notification_queue_usage()
double
@@ -15264,6 +15269,16 @@ SET search_path TO schema , schema, ..

 

+pg_current_logfile
+   
+
+   
+pg_current_logfile returns the name of the current log
+file used by the logging collector, as a text. Log collection
+must be active.
+   
+
+   
 pg_postmaster_start_time

 
diff --git a/src/backend/postmaster/syslogger.c b/src/backend/postmaster/syslogger.c
index e7e488a..4769dc5 100644
--- a/src/backend/postmaster/syslogger.c
+++ b/src/backend/postmaster/syslogger.c
@@ -145,6 +145,7 @@ static char *logfile_getname(pg_time_t timestamp, const char *suffix);
 static void set_next_rotation_time(void);
 static void sigHupHandler(SIGNAL_ARGS);
 static void sigUsr1Handler(SIGNAL_ARGS);
+static void store_current_log_filename(char *filename);
 
 
 /*
@@ -571,6 +572,9 @@ SysLogger_Start(void)
 
 	syslogFile = logfile_open(filename, "a", false);
 
+	/* store information about current log filename used by log collection */
+	store_current_log_filename(filename);
+
 	pfree(filename);
 
 #ifdef EXEC_BACKEND
@@ -1209,6 +1213,9 @@ logfile_rotate(bool time_based_rotation, int size_rotation_for)
 		fclose(syslogFile);
 		syslogFile = fh;
 
+		/* store information about current log filename used by log collection */
+		store_current_log_filename(filename);
+
 		/* instead of pfree'ing filename, remember it for next time */
 		if (last_file_name != NULL)
 			pfree(last_file_name);
@@ -1253,6 +1260,9 @@ logfile_rotate(bool time_based_rotation, int size_rotation_for)
 		fclose(csvlogFile);
 		csvlogFile = fh;
 
+		/* store informat

Re: [HACKERS] Patch to implement pg_current_logfile() function

2016-03-11 Thread Gilles Darold
On 11/03/2016 at 10:49, Shulgin, Oleksandr wrote:
> On Thu, Mar 10, 2016 at 9:05 PM, Tom Lane <t...@sss.pgh.pa.us
> <mailto:t...@sss.pgh.pa.us>> wrote:
>
> Gilles Darold <gilles.dar...@dalibo.com
> <mailto:gilles.dar...@dalibo.com>> writes:
> > Then, should I have to use an alternate file to store the
> information or
> > implement a bidirectional communication with the syslogger?
>
> I'd just define a new single-purpose file $PGDATA/log_file_name
> or some such.
>
>
> Would it make sense to have it as a symlink instead?

The only con I see is that it can be more "difficult" in some languages to
resolve the real path, but do we really need it? There is also a small
window during which the symlink doesn't exist, when it has to be removed
before being recreated to point to the new log file. If an external script
tries to reopen the log file at that moment it will complain that the file
doesn't exist, and looping until the file exists is probably a bad idea.

-- 
Gilles Darold
Consultant PostgreSQL
http://dalibo.com - http://dalibo.org



Re: [HACKERS] Patch to implement pg_current_logfile() function

2016-03-10 Thread Gilles Darold
On 10/03/2016 at 16:26, Tom Lane wrote:
> Robert Haas <robertmh...@gmail.com> writes:
>> On Wed, Mar 9, 2016 at 12:32 PM, Gilles Darold <gilles.dar...@dalibo.com> 
>> wrote:
>>> I choose to allow the log collector to write his current log file name
>>> into the lock file 'postmaster.pid'.
>> Gosh, why?  Piggybacking this on a file written for a specific purpose
>> by a different process seems like making life very hard for yourself,
>> and almost certainly a recipe for bugs.
> That's a *complete* nonstarter.  postmaster.pid has to be written by the
> postmaster process and nobody else.
>
> It's a particularly bad choice for the syslogger, which will exist
> fractionally longer than the postmaster, and thus might be trying to write
> into the file after the postmaster has removed it.

I was thinking that the lock file was removed last of all and that the
postmaster waited for all children to die before removing it, but OK, I
see why this was not my first choice: it was a very bad idea :-)
Then, should I use an alternate file to store the information, or implement
bidirectional communication with the syslogger? The latter really looks
like too much for this very simple feature. With an alternate file, what is
the best place to write it? All locations have their special use; the
global/ directory?
 

-- 
Gilles Darold
Consultant PostgreSQL
http://dalibo.com - http://dalibo.org





[HACKERS] Patch to implement pg_current_logfile() function

2016-03-09 Thread Gilles Darold
Hi,

Here is a patch that is supposed to solve the remaining problem of finding
the current log file used by the log collector after a rotation.
There are lots of external commands that try to find this information, but
it seems useful to have an internal function to retrieve the name
of the current log file from the log collector.

There is a corresponding item in the TODO list, in the "Administration"
section. The original thread can be reached at the following link:
http://archives.postgresql.org/pgsql-general/2008-11/msg00418.php
The goal is to provide a way to query the log collector subprocess
to determine the name of the currently active log file.

This patch implements the pg_current_logfile() function, which can be
used as follows. The function returns NULL when logging_collector
is not active and outputs a warning.

postgres=# \pset null *
postgres=# SELECT pg_current_logfile();
WARNING:  current log can not be reported because log collection is not
active
 pg_current_logfile
---------------------
 *
(1 line)

So a better query should be:

postgres=# SELECT CASE WHEN current_setting('logging_collector')='on'
THEN pg_current_logfile()
ELSE current_setting('log_destination')
END;
 current_setting
-
 syslog
(1 line)

Same query with log collection active and, for example,
log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'

postgres=# SELECT CASE WHEN current_setting('logging_collector')='on'
THEN pg_current_logfile()
ELSE current_setting('log_destination')
END;
 current_setting
-
 pg_log/postgresql-2016-03-09_152827.log
(1 line)

Then after a log rotation:

postgres=# SELECT pg_rotate_logfile();
 pg_rotate_logfile
---
 t
(1 line)

postgres=# select pg_current_logfile();
   pg_current_logfile   
-
 pg_log/postgresql-2016-03-09_152908.log
(1 line)

I chose to allow the log collector to write its current log file name
into the lock file 'postmaster.pid'. This allows simple access to this
information through system commands, for example:

postgres@W230ST:~$ tail -n1 /usr/local/pgql-devel/data/postmaster.pid
pg_log/postgresql-2016-03-09_152908.log

The log filename is written at the 8th line position when log collection
is active and all other information has been written to the lock file.

The pg_current_logfile() function, when used at the SQL level, reads the
lock file to report this information.
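
Reading the same information by hand from SQL could look like this (a rough
sketch, assuming the filename really sits on the 8th line of postmaster.pid
as described above and that the caller is allowed to use pg_read_file):

SELECT (string_to_array(pg_read_file('postmaster.pid'), E'\n'))[8]
       AS current_logfile;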

I don't know if there is any limitation on using the postmaster.pid file
for that, but it seems to me a bit weird to log this information to
another file. My first attempt was to use a dedicated file, saved as
global/pg_current_logfile or pg_stat_tmp/pg_current_logfile, but I
think it is better to use the postmaster.pid file for this. I also
thought about a communication protocol or notification with the log
collector subprocess to query and retrieve the name of the currently
active log file, but obviously that would be too much work for just this
simple function, and I can't see any other feature that would need such
work.

Any thoughts? Should I add this patch to the commit fest? If the use
of the postmaster.pid file is a problem, I can easily modify the patch
to use an alternate file.

Best regards,

-- 
Gilles Darold
Consultant PostgreSQL
http://dalibo.com - http://dalibo.org

diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index 4b5ee81..313403e 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -15027,6 +15027,12 @@ SELECT * FROM pg_ls_dir('.') WITH ORDINALITY AS t(ls,n);
   
 
   
+   pg_current_logfile()
+   text
+   current log file used by the logging collector
+  
+
+  
pg_notification_queue_usage()
double
fraction of the asynchronous notification queue currently occupied (0-1)
@@ -15264,6 +15270,16 @@ SET search_path TO schema , schema, ..

 

+pg_current_logfile
+   
+
+   
+pg_current_logfile returns the name of the current log
+file used by the logging collector, as a text. Log collection
+must be active.
+   
+
+   
 pg_postmaster_start_time

 
diff --git a/doc/src/sgml/storage.sgml b/doc/src/sgml/storage.sgml
index 9b2e09e..41aaf5d 100644
--- a/doc/src/sgml/storage.sgml
+++ b/doc/src/sgml/storage.sgml
@@ -166,7 +166,8 @@ last started with
   Unix-domain socket directory path (empty on Windows),
   first valid listen_address (IP address or *, or empty if
   not listening on TCP),
-  and shared memory segment ID
+  the shared memory segment ID,
+  and the path to the current log file used by the syslogger
   (this file is not present after server shutdown)
 
 
diff --git a/src/backend/postmaster/syslogger.c b/src/backend/postmaster/syslogger.c
index e7e488a..40fec7a 100644
--- a/src/backend/postmaster/syslogger.c
+++ b/src/backend/postmaster/syslogger.c
@@ -54,7 +54,6 @@
  */
 #define READ_BUF_SIZE (2 * PIPE_

Re: [HACKERS] Optimizer questions

2016-01-18 Thread Gilles Darold
On 18/01/2016 at 03:47, Bruce Momjian wrote:
> On Tue, Jan  5, 2016 at 05:55:28PM +0300, konstantin knizhnik wrote:
>> Hi hackers, 
>>
>> I want to ask two questions about PostgreSQL optimizer.
>> I have the following query:
>>
>> SELECT o.id as id,s.id as sid,o.owner,o.creator,o.parent_id
>> as dir_id,s.mime_id,m.c_type,s.p_file,s.mtime,o.ctime,o.name
>> ,o.title,o.size,o.deleted,la.otime,la.etime,uo.login as owner_login,uc.login 
>> as
>> creator_login,(CASE WHEN f.user_id IS NULL THEN 0 ELSE 1 END) AS flagged,
>> (select 'userid\\:'||string_agg(user_id,' userid\\:') from 
>> get_authorized_users
>> (o.id)) as acl FROM objects s JOIN objects o ON s.original_file=o.id LEFT 
>> JOIN
>> flags f ON o.id=f.obj_id AND o.owner=f.user_id LEFT JOIN 
>> objects_last_activity
>> la ON o.id = la.obj_id AND o.owner = la.user_id, mime m, users uc , users uo
>> WHERE (s.mime_id=904 or s.mime_id=908) AND m.mime_id = o.mime_id AND o.owner 
>> =
>> uo.user_id AND o.creator = uc.user_id ORDER BY s.mtime LIMIT 9;
> FYI, I could not make any sense out of this query, and I frankly can't
> figure out how others can udnerstand it.  :-O   Anyway, I ran it through
> pgFormatter (https://github.com/darold/pgFormatter), which I am showing
> here because I was impressed with the results:
>
>   SELECT
>   o.id AS id,
>   s.id AS sid,
>   o.owner,
>   o.creator,
>   o.parent_id AS dir_id,
>   s.mime_id,
>   m.c_type,
>   s.p_file,
>   s.mtime,
>   o.ctime,
>   o.name,
>   o.title,
>   o.size,
>   o.deleted,
>   la.otime,
>   la.etime,
>   uo.login AS owner_login,
>   uc.login AS creator_login,
>   (
>   CASE
>   WHEN f.user_id IS NULL THEN 0
>   ELSE 1
>   END ) AS flagged,
>   (
>   SELECT
>   'userid\\:' || string_agg (
>   user_id,
>   ' userid\\:' )
>   FROM
>   get_authorized_users (
>   o.id ) ) AS acl
>   FROM
>   objects s
>   JOIN objects o ON s.original_file = o.id
>   LEFT JOIN flags f ON o.id = f.obj_id
>   AND o.owner = f.user_id
>   LEFT JOIN objects_last_activity la ON o.id = la.obj_id
>   AND o.owner = la.user_id,
>   mime m,
>   users uc,
>   users uo
>   WHERE (
>   s.mime_id = 904
>   OR s.mime_id = 908 )
>   AND m.mime_id = o.mime_id
>   AND o.owner = uo.user_id
>   AND o.creator = uc.user_id
>   ORDER BY
>   s.mtime
>   LIMIT 9;
>


Thanks Bruce for the pointer to this tool; even if it is not perfect, I
think it can be useful. There is also an online version at
http://sqlformat.darold.net/ that everyone can use without having to
install it, to format queries up to 20Kb with options to control the
output format. It can also anonymize SQL queries.

Actually, this is the SQL formatter/beautifier used in pgBadger; it has
been extracted as an independent project so that this part of pgBadger
can be improved without having to run the whole tool each time.

-- 
Gilles Darold
Consultant PostgreSQL
http://dalibo.com - http://dalibo.org





Re: [HACKERS] Bug in pg_dump

2015-02-27 Thread Gilles Darold
On 26/02/2015 at 12:41, Michael Paquier wrote:
 On Wed, Feb 25, 2015 at 7:46 AM, Gilles Darold gilles.dar...@dalibo.com 
 wrote:
 This is a far better patch and the test to export/import of the
 postgis_topology extension works great for me.

 Thanks for the work.
 Attached is a patch that uses an even better approach by querying only
 once all the FK dependencies of tables in extensions whose data is
 registered as dumpable by getExtensionMembership(). Could you check
 that it works correctly? My test case passes but an extra check would
 be a good nice. The patch footprint is pretty low so we may be able to
 backport this patch easily.

Works great too. I agree that this patch should be backported, but I
think we need to warn admins in the release notes that their
import/restore scripts may be broken.

One of the common workarounds for this issue was to ignore the errors
during data import and then reload the data of the tables with FK errors
at the end of the import. If admins are not warned, this can lead to
duplicate entries or errors.

-- 
Gilles Darold
Consultant PostgreSQL
http://dalibo.com - http://dalibo.org





Re: [HACKERS] Bug in pg_dump

2015-02-24 Thread Gilles Darold
On 24/02/2015 at 05:40, Michael Paquier wrote:


 On Tue, Feb 24, 2015 at 2:17 AM, Gilles Darold
 gilles.dar...@dalibo.com mailto:gilles.dar...@dalibo.com wrote:

 Looks great to me, I have tested with the postgis_topology extension
 everything works fine.


 Actually, after looking more in depth at the internals of pg_dump I
 think that both patches are wrong (did that yesterday night for
 another patch). First this patch marks a table in an extension as
 always dumpable:
 +  /* Mark member table as dumpable */
 +  configtbl-dobj.dump = true;
 And then many checks on ext_member are added in many code paths to
 ensure that objects in extensions have their definition never dumped.
 But actually this assumption is not true all the time per this code in
 getExtensionMemberShip:
 if (!dopt-binary_upgrade)
 dobj-dump = false;
 else
 dobj-dump = refdobj-dump;
 So this patch would break binary upgrade where some extension objects
 should be dumped (one reason why I haven't noticed that before is that
 pg_upgrade tests do not include extensions).

 Hence, one idea coming to my mind to fix the problem would be to add
 some extra processing directly in getExtensionMembership() after
 building the objects DO_TABLE_DATA with makeTableDataInfo() by
 checking the FK dependencies and add a dependency link with
 addObjectDependency. The good part with that is that even user tables
 that reference extension tables with a FK can be restored correctly
 because their constraint is added *after* loading the data. I noticed
 as well that with this patch the --data-only mode was dumping tables
 in the correct order.

 Speaking of which, patches implementing this idea are attached. The
 module test case has been improved as well with a check using a table
 not in an extension linked with a FK to a table in an extension.
 -- 
 Michael

This is a far better patch, and the export/import test with the
postgis_topology extension works great for me.

Thanks for the work.
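
For the record, the extra check Michael describes, a plain table
referencing an extension table through a FK, boils down to something like
this (table and column names are only illustrative, not the ones used in
the test module):

-- Hypothetical shape of the extra test case: ext_table belongs to the
-- extension, plain_table does not, and plain_table references ext_table.
CREATE TABLE plain_table (
    id     integer PRIMARY KEY,
    ext_id integer REFERENCES ext_table (id)
);
-- plain_table's FK constraint is created after its data is loaded, so the
-- restore works even though ext_table's data comes from the extension path.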

-- 
Gilles Darold
Consultant PostgreSQL
http://dalibo.com - http://dalibo.org



Re: [HACKERS] Bug in pg_dump

2015-02-23 Thread Gilles Darold
Le 17/02/2015 14:44, Michael Paquier a écrit :
 On Tue, Jan 20, 2015 at 8:06 PM, Gilles Darold gilles.dar...@dalibo.com 
 wrote:
 Le 19/01/2015 14:41, Robert Haas a écrit :
 On Thu, Jan 15, 2015 at 6:26 AM, Gilles Darold gilles.dar...@dalibo.com 
 wrote:
 I attach a patch that solves the issue in pg_dump, let me know if it might
 be included in Commit Fest or if the three other solutions are a better
 choice.
 I think a fix in pg_dump is appropriate, so I'd encourage you to add
 this to the next CommitFest.

 Ok, thanks Robert. The patch have been added to next CommitFest under
 the Bug Fixes section.

 I've also made some review of the patch and more test. I've rewritten
 some comments and added a test when TableInfo is NULL to avoid potential
 pg_dump crash.

 New patch attached: pg_dump.c-extension-FK-v2.patch
 So, I looked at your patch and I have some comments...

 The approach taken by the patch looks correct to me as we cannot
 create FK constraints after loading the data in the case of an
 extension, something that can be done with a data-only dump.

 Now, I think that the routine hasExtensionMember may impact
 performance unnecessarily in the case of databases with many tables,
 say thousands or more. And we can easily avoid this performance
 problem by checking the content of each object dumped by doing the
 legwork directly in getTableData. Also, most of the NULL pointer
 checks are not needed as most of those code paths would crash if
 tbinfo is NULL, and actually only keeping the one in dumpConstraint is
 fine.

Yes, that's exactly what we discussed in Moscow, thanks for removing the
hasExtensionMember() routine. About the NULL pointer, I was not sure that
it could not crash in some other parts, so I decided to check it
everywhere. If it is ok to only check in dumpConstraint(), that's fine.
 
 On top of those things, I have written a small extension able to
 reproduce the problem that I have included as a test module in
 src/test/modules. The dump/restore check is done using the TAP tests,
 and I have hacked prove_check to install as well the contents of the
 current folder to be able to use the TAP routines with an extension
 easily. IMO, as we have no coverage of pg_dump with extensions I think
 that it would be useful to ensure that this does not break again in
 the future.

Great! I did not have time to do that, thank you so much.

 All the hacking I have done during the review results in the set of
 patch attached:
 - 0001 is your patch, Gilles, with some comment fixes as well as the
 performance issue with hasExtensionMember fix
 - 0002 is the modification of prove_check that makes TAP tests usable
 with extensions
 - 0003 is the test module covering the tests needed for pg_dump, at
 least for the problem of this thread.

 Gilles, how does that look to you?

Looks great to me; I have tested with the postgis_topology extension and
everything works fine.

 (Btw, be sure to generate your patches directly with git next time. :) )

Sorry, an old habit :-)

Thanks for the review.
 
Best regards,

-- 
Gilles Darold
Consultant PostgreSQL
http://dalibo.com - http://dalibo.org





Re: [HACKERS] Bug in pg_dump

2015-01-20 Thread Gilles Darold
Le 19/01/2015 14:41, Robert Haas a écrit :
 On Thu, Jan 15, 2015 at 6:26 AM, Gilles Darold gilles.dar...@dalibo.com 
 wrote:
 I attach a patch that solves the issue in pg_dump, let me know if it might
 be included in Commit Fest or if the three other solutions are a better
 choice.
 I think a fix in pg_dump is appropriate, so I'd encourage you to add
 this to the next CommitFest.

Ok, thanks Robert. The patch has been added to the next CommitFest under
the Bug Fixes section.

I've also done some review of the patch and more testing. I've rewritten
some comments and added a test for when TableInfo is NULL to avoid a
potential pg_dump crash.

New patch attached: pg_dump.c-extension-FK-v2.patch

Regards

-- 
Gilles Darold
Consultant PostgreSQL
http://dalibo.com - http://dalibo.org

--- ../postgresql/src/bin/pg_dump/pg_dump.c	2015-01-19 19:03:45.897706879 +0100
+++ src/bin/pg_dump/pg_dump.c	2015-01-20 11:22:01.144794489 +0100
@@ -209,6 +209,7 @@
 
 static void getDomainConstraints(Archive *fout, TypeInfo *tyinfo);
 static void getTableData(DumpOptions *dopt, TableInfo *tblinfo, int numTables, bool oids);
+static bool hasExtensionMember(TableInfo *tblinfo, int numTables);
 static void makeTableDataInfo(DumpOptions *dopt, TableInfo *tbinfo, bool oids);
 static void buildMatViewRefreshDependencies(Archive *fout);
 static void getTableDataFKConstraints(void);
@@ -730,9 +731,20 @@
 
 	if (!dopt.schemaOnly)
 	{
+		bool has_ext_member;
+
 		getTableData(dopt, tblinfo, numTables, dopt.oids);
+
+		/* Search if there's dumpable table's members in an extension */
+		has_ext_member = hasExtensionMember(tblinfo, numTables);
+
 		buildMatViewRefreshDependencies(fout);
-		if (dopt.dataOnly)
+		/*
+		 * Get FK constraints even with schema+data if extension's
+		 * members have FK because in this case tables need to be
+		 * dump-ordered too.
+		 */
+		if (dopt.dataOnly || has_ext_member)
 			getTableDataFKConstraints();
 	}
 
@@ -1852,6 +1864,26 @@
 }
 
 /*
+ * hasExtensionMember -
+ *	  return true when one of the dumpable objects
+ *	  is an extension member
+ */
+static bool
+hasExtensionMember(TableInfo *tblinfo, int numTables)
+{
+	int			i;
+
+	for (i = 0; i < numTables; i++)
+	{
+		if (tblinfo[i].dobj.ext_member)
+			return true;
+	}
+
+	return false;
+}
+
+
+/*
  * Make a dumpable object for the data of this specific table
  *
  * Note: we make a TableDataInfo if and only if we are going to dump the
@@ -2026,10 +2058,12 @@
  * getTableDataFKConstraints -
  *	  add dump-order dependencies reflecting foreign key constraints
  *
- * This code is executed only in a data-only dump --- in schema+data dumps
- * we handle foreign key issues by not creating the FK constraints until
- * after the data is loaded.  In a data-only dump, however, we want to
- * order the table data objects in such a way that a table's referenced
+ * This code is executed only in a data-only dump or when there are extension
+ * members -- in schema+data dumps we handle foreign key issues by not creating
+ * the FK constraints until after the data is loaded. In a data-only dump or
+ * when there is an extension member to dump (schema dumps do not concern
+ * extension's objects, they are created during the CREATE EXTENSION), we want
+ * to order the table data objects in such a way that a table's referenced
  * tables are restored first.  (In the presence of circular references or
  * self-references this may be impossible; we'll detect and complain about
  * that during the dependency sorting step.)
@@ -2930,9 +2964,14 @@
 	PQExpBuffer delqry;
 	const char *cmd;
 
+	/* Policy is SCHEMA only */
 	if (dopt->dataOnly)
 		return;
 
+	/* CREATE EXTENSION should take care of that */
+	if ((tbinfo != NULL) && tbinfo->dobj.ext_member)
+		return;
+
 	/*
 	 * If polname is NULL, then this record is just indicating that ROW
 	 * LEVEL SECURITY is enabled for the table. Dump as ALTER TABLE table
@@ -7884,6 +7923,10 @@
 	if (dopt->dataOnly)
 		return;
 
+	/* CREATE EXTENSION should take care of that */
+	if ((tbinfo != NULL) && tbinfo->dobj.ext_member)
+		return;
+
 	/* Search for comments associated with relation, using table */
 	ncomments = findComments(fout,
 			 tbinfo->dobj.catId.tableoid,
@@ -13138,6 +13181,10 @@
 	if (dopt->dataOnly)
 		return;
 
+	/* CREATE EXTENSION should take care of that */
+	if ((tbinfo != NULL) && tbinfo->dobj.ext_member)
+		return;
+
 	/* Search for comments associated with relation, using table */
 	nlabels = findSecLabels(fout,
 			tbinfo->dobj.catId.tableoid,
@@ -13345,7 +13392,7 @@
 static void
 dumpTable(Archive *fout, DumpOptions *dopt, TableInfo *tbinfo)
 {
-	if (tbinfo->dobj.dump && !dopt->dataOnly)
+	if (tbinfo->dobj.dump && !dopt->dataOnly && !tbinfo->dobj.ext_member)
 	{
 		char	   *namecopy;
 
@@ -13483,6 +13530,10 @@
 	int			j,
 k;
 
+	/* CREATE EXTENSION should take care of that */
+	if ((tbinfo != NULL) && tbinfo->dobj.ext_member)
+		return;
+
 	/* Make sure we are in proper schema */
 	selectSourceSchema(fout, tbinfo->dobj.namespace->dobj.name

Re: [HACKERS] Bug in pg_dump

2015-01-16 Thread Gilles Darold
On 16/01/2015 01:06, Jim Nasby wrote:
 On 1/15/15 5:26 AM, Gilles Darold wrote:
 Hello,

 There's a long pending issue with pg_dump and extensions that have
 table members with foreign keys. This was previously reported in this
 thread
 http://www.postgresql.org/message-id/ca+tgmoyvzkadmgh_8el7uvm472geru0b4pnnfjqye6ss1k9...@mail.gmail.com
 and discuss by Robert. All PostgreSQL users that use the PostGis
 extension postgis_topology are facing the issue because the two
 members tables (topology and layer) are linked by foreign keys.

 If you dump a database with this extension and try to import it you
 will experience this error:

 pg_restore: [archiver (db)] Error while PROCESSING TOC:
 pg_restore: [archiver (db)] Error from TOC entry 3345; 0
 157059176 TABLE DATA layer gilles
 pg_restore: [archiver (db)] COPY failed for table layer: ERROR:
 insert or update on table layer violates foreign key constraint
 layer_topology_id_fkey
 DETAIL:  Key (topology_id)=(1) is not present in table topology.
 WARNING: errors ignored on restore: 1


 The problem is that, whatever export type you choose (plain/custom
 and full-export/data-only) the data of tables topology and layer
 are always exported in alphabetic order. I think this is a bug
 because outside extension, in data-only export, pg_dump is able to
 find foreign keys dependency and dump table's data in the right order
 but not with extension's members. Default is alphabetic order but
 that should not be the case with extension's members because
 constraints are recreated during the CREATE EXTENSION order. I hope I
 am clear enough.

 Here we have three solutions:

  1/ Inform developers of extensions to take care to alphabetical
 order when they have member tables using foreign keys.
  2/ Inform DBAs that they have to restore the failing table
 independently. The use case above can be resumed using the following
 command:

   pg_restore -h localhost -n topology -t layer -Fc -d
 testdb_empty testdump.dump

  3/ Inform DBAs that they have to restore the schema first then
 the data only using --disable-triggers

 I don't like 1-3, and I doubt anyone else does...

  4/ Patch pg_dump to solve this issue.

 5. Disable FK's during load.
 This is really a bigger item than just extensions. It would have the
 nice benefit of doing a wholesale FK validation instead of firing
 per-row triggers, but it would leave the database in a weird state if
 a restore failed...

I think this is another problem. Here we just need to apply the same
treatment to extension member tables as to normal tables. I guess this
is what this patch tries to solve.


 I attach a patch that solves the issue in pg_dump, let me know if it
 might be included in Commit Fest or if the three other solutions are
 a better choice. I also join a sample extension (test_fk_in_ext) to
 be able to reproduce the issue and test the patch. Note that it might
 exists a simpler solution than the one I used in this patch, if this
 is the case please point me on the right way, I will be pleased to
 rewrite and send an other patch.

 The only problem I see with this approach is circular FK's:

 decibel@decina.local=# create table a(a_id serial primary key, b_id int);
 CREATE TABLE
 decibel@decina.local=# create table b(b_id serial primary key, a_id
 int references a);
 CREATE TABLE
 decibel@decina.local=# alter table a add foreign key(b_id) references b;
 ALTER TABLE
 decibel@decina.local=#

 That's esoteric enough that I think it's OK not to directly support
 them, but pg_dump shouldn't puke on them (and really should throw a
 warning). Though it looks like it doesn't handle that in the data-only
 case anyway...

The patch takes care of circular references and you will be warned if
pg_dump finds any among the extension members. That was not the case
before. If you try to dump a database with the postgis extension you will
be warned about the FK defined on the edge_data table.


-- 
Gilles Darold
Consultant PostgreSQL
http://dalibo.com - http://dalibo.org






[HACKERS] Bug in pg_dump

2015-01-15 Thread Gilles Darold
Hello,

There's a long pending issue with pg_dump and extensions that have table
members with foreign keys. This was previously reported in this thread
http://www.postgresql.org/message-id/ca+tgmoyvzkadmgh_8el7uvm472geru0b4pnnfjqye6ss1k9...@mail.gmail.com
and discuss by Robert. All PostgreSQL users that use the PostGis
extension postgis_topology are facing the issue because the two members
tables (topology and layer) are linked by foreign keys.

If you dump a database with this extension and try to import it you will
experience this error:

pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 3345; 0 157059176
TABLE DATA layer gilles
pg_restore: [archiver (db)] COPY failed for table layer: ERROR: 
insert or update on table layer violates foreign key constraint
layer_topology_id_fkey
DETAIL:  Key (topology_id)=(1) is not present in table topology.
WARNING: errors ignored on restore: 1


The problem is that, whatever export type you choose (plain/custom and
full-export/data-only), the data of tables topology and layer is
always exported in alphabetical order. I think this is a bug because,
outside extensions, in a data-only export pg_dump is able to resolve
foreign key dependencies and dump table data in the right order, but not
with extension members. The default is alphabetical order, but that
should not be the case for extension members because their constraints
are recreated by the CREATE EXTENSION statement. I hope I am clear enough.
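
To make the shape of the problem concrete, here is a simplified sketch of
the two member tables (the real PostGIS definitions have more columns):

CREATE TABLE topology (
    id   serial PRIMARY KEY,
    name varchar NOT NULL
);
CREATE TABLE layer (
    topology_id integer NOT NULL REFERENCES topology (id)
    -- other columns elided
);
-- Dumped in alphabetical order, layer's data is restored before topology's,
-- so the COPY into layer fails on layer_topology_id_fkey.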

Here we have four solutions:

1/ Inform extension developers that they must take alphabetical
order into account when they have member tables using foreign keys.
2/ Inform DBAs that they have to restore the failing table
independently. The use case above can be resolved using the following
command:

 pg_restore -h localhost -n topology -t layer -Fc -d
testdb_empty testdump.dump

3/ Inform DBAs that they have to restore the schema first and then the
data only, using --disable-triggers.
4/ Patch pg_dump to solve this issue.

I attach a patch that solves the issue in pg_dump; let me know if it can
be included in the CommitFest or if the three other solutions are a
better choice. I also attach a sample extension (test_fk_in_ext) to be
able to reproduce the issue and test the patch. Note that there might be
a simpler solution than the one I used in this patch; if that is the
case, please point me in the right direction and I will be pleased to
rewrite and send another patch.

In the attached test extension, there is a file called
test_fk_in_ext/SYNOPSIS.txt that describes all the steps to reproduce the
issue and test the patch. Here is the SQL part of the test extension:

CREATE TABLE IF NOT EXISTS b_test_fk_in_ext1 (
id int PRIMARY KEY
);

CREATE TABLE IF NOT EXISTS a_test_fk_in_ext1 (
id int REFERENCES b_test_fk_in_ext1(id)
);

SELECT pg_catalog.pg_extension_config_dump('b_test_fk_in_ext1', '');
SELECT pg_catalog.pg_extension_config_dump('a_test_fk_in_ext1', '');
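
A minimal way to reproduce with this extension, assuming it is installed
under the name test_fk_in_ext, is simply:

CREATE EXTENSION test_fk_in_ext;
INSERT INTO b_test_fk_in_ext1 VALUES (1);
INSERT INTO a_test_fk_in_ext1 VALUES (1);
-- Now pg_dump the database and restore it into an empty one: the data of
-- a_test_fk_in_ext1 is dumped before b_test_fk_in_ext1 (alphabetical order)
-- and the restore fails on the foreign key.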



Best regards,

-- 
Gilles Darold
Consultant PostgreSQL
http://dalibo.com - http://dalibo.org

diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index dc062e6..49889ce 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -209,6 +209,7 @@ static void addBoundaryDependencies(DumpableObject **dobjs, int numObjs,
 
 static void getDomainConstraints(Archive *fout, TypeInfo *tyinfo);
 static void getTableData(DumpOptions *dopt, TableInfo *tblinfo, int numTables, bool oids);
+static bool hasExtensionMember(TableInfo *tblinfo, int numTables);
 static void makeTableDataInfo(DumpOptions *dopt, TableInfo *tbinfo, bool oids);
 static void buildMatViewRefreshDependencies(Archive *fout);
 static void getTableDataFKConstraints(void);
@@ -730,9 +731,17 @@ main(int argc, char **argv)
 
 	if (!dopt.schemaOnly)
 	{
+		bool has_ext_member;
+
 		getTableData(dopt, tblinfo, numTables, dopt.oids);
+		/* Search if there are dumpable tables that are members of an extension */
+		has_ext_member = hasExtensionMember(tblinfo, numTables);
 		buildMatViewRefreshDependencies(fout);
-		if (dopt.dataOnly)
+		/*
+		 * Always get FK constraints even with schema+data, extension's
+		 * members can have FK so tables need to be dump-ordered.
+		 */
+		if (dopt.dataOnly || has_ext_member)
 			getTableDataFKConstraints();
 	}
 
@@ -1852,6 +1861,25 @@ getTableData(DumpOptions *dopt, TableInfo *tblinfo, int numTables, bool oids)
 }
 
 /*
+ * hasExtensionMember -
+ *	  set up dumpable objects representing the contents of tables
+ */
+static bool
+hasExtensionMember(TableInfo *tblinfo, int numTables)
+{
+	int			i;
+
+	for (i = 0; i < numTables; i++)
+	{
+		if (tblinfo[i].dobj.ext_member)
+			return true;
+	}
+
+	return false;
+}
+
+
+/*
  * Make a dumpable object for the data of this specific table
  *
  * Note: we make a TableDataInfo if and only if we are going to dump the
@@ -2024,12 +2052,14

[HACKERS] Segmentation fault in pg_dumpall from master down to 9.1 and other bug introduced by RLS

2014-11-13 Thread Gilles Darold
Hi,

There's a segfault when trying to dump global objects from a running
7.4.27 server with pg_dumpall version 9.3.5, and also 9.2.9.

$ pg_dumpall  -g -h localhost -p 5474

column number -1 is out of range 0..12
Segmentation fault (core dumped)

The problem comes from the first column of the query in function
dumpRoles(PGconn *conn), which has no alias name. Rewriting it as

SELECT 0 **as oid**, ...;

fixes the issue. This bug affects all versions of PostgreSQL from master
down to 9.2 (9.1 itself is working).

In the same query there is another bug, introduced by commit 491c029
which added Row-Level Security Policies. Current master has a broken
pg_dumpall when trying to dump from a backend older than 8.1. Here is
the error:

ERROR:  each UNION query must have the same number of columns

The query sent to the database is the following:

SELECT 0, usename as rolname, usesuper as rolsuper, true as
rolinherit, usesuper as rolcreaterole, usecreatedb as rolcreatedb, true
as rolcanlogin, -1 as rolconnlimit, passwd as rolpassword, valuntil as
rolvaliduntil, false as rolreplication, null as rolcomment, usename =
current_user AS is_current_user FROM pg_shadow UNION ALL SELECT 0,
groname as rolname, false as rolsuper, true as rolinherit, false as
rolcreaterole, false as rolcreatedb, false as rolcanlogin, -1 as
rolconnlimit, null::text as rolpassword, null::abstime as rolvaliduntil,
false as rolreplication, false as rolbypassrls, null as rolcomment,
false FROM pg_group WHERE NOT EXISTS (SELECT 1 FROM pg_shadow  WHERE
usename = groname) ORDER BY 2;

The column rolbypassrls is missing in the first branch of the UNION. As
this is the same query as in the previous issue, its first column needs
the same alias: oid.
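
Reduced to a minimal case, this is just the generic UNION arity check:

SELECT 1 AS a, 2 AS b
UNION ALL
SELECT 1, 2, 3;
-- ERROR:  each UNION query must have the same number of columns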

I've attached a patch against master that fixes the two issues; for
older branches, only the alias on the first column of the query needs to
be applied. Let me know if it needs more work.

Best regards,

-- 
Gilles Darold
http://dalibo.com - http://dalibo.org

diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
index 23cb0b4..b07b1f6 100644
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -714,7 +714,7 @@ dumpRoles(PGconn *conn)
 		  "ORDER BY 2");
 	else
 		printfPQExpBuffer(buf,
-		  "SELECT 0, usename as rolname, "
+		  "SELECT 0 as oid, usename as rolname, "
 		  "usesuper as rolsuper, "
 		  "true as rolinherit, "
 		  "usesuper as rolcreaterole, "
@@ -724,6 +724,7 @@ dumpRoles(PGconn *conn)
 		  "passwd as rolpassword, "
 		  "valuntil as rolvaliduntil, "
 		  "false as rolreplication, "
+		  "false as rolbypassrls, "
 		  "null as rolcomment, "
 		  "usename = current_user AS is_current_user "
 		  "FROM pg_shadow "



Re: [HACKERS] [TODO] Track number of files ready to be archived in pg_stat_archiver

2014-08-25 Thread Gilles Darold
Le 21/08/2014 10:17, Julien Rouhaud a écrit :
 Hello,

 Attached patch implements the following TODO item :

 Track number of WAL files ready to be archived in pg_stat_archiver

 However, it will track the total number of any file ready to be
 archived, not only WAL files.

 Please let me know what you think about it.

 Regards.

Hi,

Maybe looking at the archive-ready count would be more efficient if it
were done in the view definition through a function. This would avoid any
issue with incrementing/decrementing archiverStats.ready_count and the
patch would be simpler. Also, I don't think we need another memory
allocation for that: the counter information is always available as the
number of .ready files in the archive_status directory, and calls to
pg_stat_archiver don't need high performance.

For example, we could have a new function called
pg_stat_get_archive_ready_count() that does the same as what you added in
pgstat_read_statsfiles(), with the pg_stat_archiver view defined as
follows:

CREATE VIEW pg_stat_archiver AS
    SELECT
        s.archived_count,
        s.last_archived_wal,
        s.last_archived_time,
        s.failed_count,
        s.last_failed_wal,
        s.last_failed_time,
        pg_stat_get_archive_ready_count() AS ready_count,
        s.stats_reset
    FROM pg_stat_get_archiver() s;

The function pg_stat_get_archive_ready_count() would then also be
available for any other use.
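
As a rough illustration of the idea (not the proposed C implementation),
the count can already be approximated from plain SQL today, assuming the
standard layout where status files live under pg_xlog/archive_status:

-- Count the .ready files waiting for the archiver (superuser only).
SELECT count(*) AS ready_count
FROM pg_ls_dir('pg_xlog/archive_status') AS f(name)
WHERE f.name LIKE '%.ready';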

-- 
Gilles Darold
http://dalibo.com - http://dalibo.org





Re: [HACKERS] review: psql and pset without any arguments

2013-09-30 Thread Gilles Darold
Le 30/09/2013 05:43, Alvaro Herrera a écrit :
 Gilles Darold escribió:

 +else if (strcmp(param, numericlocale) == 0)
 +{
 +if (popt-topt.numericLocale)
 +puts(_(Locale-adjusted numeric output (numericlocale) 
 is on.));
 +else
 +puts(_(Locale-adjusted numeric output (numericlocale) 
 is off.));
 +}
 Please don't make the variable name part of the translatable message.  I
 suggest using the following pattern:

 +else if (strcmp(param, numericlocale) == 0)
 +{
 +if (popt-topt.numericLocale)
 +printf(_(Locale-adjusted numeric output (%s) is on.), 
 numericlocale);
 +else
 +printf(_(Locale-adjusted numeric output (%s) is 
 off.), numericlocale);
 +}
 Otherwise it will be too easy for the translator to make the mistake
 that the variable name needs translation too.


That's right; here is the patch, modified with just that little change
from your suggestion:

        if (popt->topt.numericLocale)
            printf(_("Locale-adjusted numeric output (%s) is on.\n"), param);
        else
            printf(_("Locale-adjusted numeric output (%s) is off.\n"), param);
 

Thanks

-- 
Gilles Darold
Administrateur de bases de données
http://dalibo.com - http://dalibo.org

diff --git a/doc/src/sgml/ref/psql-ref.sgml b/doc/src/sgml/ref/psql-ref.sgml
index 574db5c..ddf7bba 100644
--- a/doc/src/sgml/ref/psql-ref.sgml
+++ b/doc/src/sgml/ref/psql-ref.sgml
@@ -2272,13 +2272,10 @@ lo_import 152801
 /para
 /tip
 
-note
 para
-It is an error to call command\pset/command without any
-arguments. In the future this case might show the current status
-of all printing options.
+	command\pset/command without any arguments displays current status
+	of all printing options.
 /para
-/note
 
 /listitem
   /varlistentry
diff --git a/src/bin/psql/command.c b/src/bin/psql/command.c
index 10e9f64..abaee9b 100644
--- a/src/bin/psql/command.c
+++ b/src/bin/psql/command.c
@@ -68,6 +68,7 @@ static int	strip_lineno_from_funcdesc(char *func);
 static void minimal_error_message(PGresult *res);
 
 static void printSSLInfo(void);
+static bool printPsetInfo(const char *param, struct printQueryOpt *popt);
 
 #ifdef WIN32
 static void checkWin32Codepage(void);
@@ -1045,8 +1046,20 @@ exec_command(const char *cmd,
 
 		if (!opt0)
 		{
-			psql_error(\\%s: missing required argument\n, cmd);
-			success = false;
+		   size_t i;
+		   /* list all variables */
+		   static const char *const my_list[] = {
+border, columns, expanded, fieldsep,
+footer, format, linestyle, null,
+numericlocale, pager, recordsep,
+tableattr, title, tuples_only,
+NULL };
+		   for (i = 0; my_list[i] != NULL; i++) {
+			   printPsetInfo(my_list[i], pset.popt);
+		   }
+
+		   success = true;
+
 		}
 		else
 			success = do_pset(opt0, opt1, pset.popt, pset.quiet);
@@ -2275,8 +2288,6 @@ do_pset(const char *param, const char *value, printQueryOpt *popt, bool quiet)
 			return false;
 		}
 
-		if (!quiet)
-			printf(_(Output format is %s.\n), _align2string(popt-topt.format));
 	}
 
 	/* set table line style */
@@ -2296,9 +2307,6 @@ do_pset(const char *param, const char *value, printQueryOpt *popt, bool quiet)
 			return false;
 		}
 
-		if (!quiet)
-			printf(_(Line style is %s.\n),
-   get_line_style(popt-topt)-name);
 	}
 
 	/* set border style/width */
@@ -2307,8 +2315,6 @@ do_pset(const char *param, const char *value, printQueryOpt *popt, bool quiet)
 		if (value)
 			popt-topt.border = atoi(value);
 
-		if (!quiet)
-			printf(_(Border style is %d.\n), popt-topt.border);
 	}
 
 	/* set expanded/vertical mode */
@@ -2320,15 +2326,6 @@ do_pset(const char *param, const char *value, printQueryOpt *popt, bool quiet)
 			popt-topt.expanded = ParseVariableBool(value);
 		else
 			popt-topt.expanded = !popt-topt.expanded;
-		if (!quiet)
-		{
-			if (popt-topt.expanded == 1)
-printf(_(Expanded display is on.\n));
-			else if (popt-topt.expanded == 2)
-printf(_(Expanded display is used automatically.\n));
-			else
-printf(_(Expanded display is off.\n));
-		}
 	}
 
 	/* locale-aware numeric output */
@@ -2338,13 +2335,6 @@ do_pset(const char *param, const char *value, printQueryOpt *popt, bool quiet)
 			popt-topt.numericLocale = ParseVariableBool(value);
 		else
 			popt-topt.numericLocale = !popt-topt.numericLocale;
-		if (!quiet)
-		{
-			if (popt-topt.numericLocale)
-puts(_(Showing locale-adjusted numeric output.));
-			else
-puts(_(Locale-adjusted numeric output is off.));
-		}
 	}
 
 	/* null display */
@@ -2355,8 +2345,6 @@ do_pset(const char *param, const char *value, printQueryOpt *popt, bool quiet)
 			free(popt-nullPrint);
 			popt-nullPrint = pg_strdup(value);
 		}
-		if (!quiet)
-			printf(_(Null display is \%s\.\n), popt-nullPrint ? popt

Re: [HACKERS] review: psql and pset without any arguments

2013-09-30 Thread Gilles Darold
Le 30/09/2013 17:35, Peter Eisentraut a écrit :
 Please remove the tabs from the SGML files.

Done. I've also fixed the typo reported by Ian. Here is the attached v4
patch.

Thanks a lot for your review.

Regards,

-- 
Gilles Darold
Administrateur de bases de données
http://dalibo.com - http://dalibo.org

diff --git a/doc/src/sgml/ref/psql-ref.sgml b/doc/src/sgml/ref/psql-ref.sgml
index 574db5c..98a4221 100644
--- a/doc/src/sgml/ref/psql-ref.sgml
+++ b/doc/src/sgml/ref/psql-ref.sgml
@@ -2272,13 +2272,10 @@ lo_import 152801
 /para
 /tip
 
-note
 para
-It is an error to call command\pset/command without any
-arguments. In the future this case might show the current status
+command\pset/command without any arguments displays current status
 of all printing options.
 /para
-/note
 
 /listitem
   /varlistentry
diff --git a/src/bin/psql/command.c b/src/bin/psql/command.c
index 10e9f64..3fc0728 100644
--- a/src/bin/psql/command.c
+++ b/src/bin/psql/command.c
@@ -68,6 +68,7 @@ static int	strip_lineno_from_funcdesc(char *func);
 static void minimal_error_message(PGresult *res);
 
 static void printSSLInfo(void);
+static bool printPsetInfo(const char *param, struct printQueryOpt *popt);
 
 #ifdef WIN32
 static void checkWin32Codepage(void);
@@ -1045,8 +1046,20 @@ exec_command(const char *cmd,
 
 		if (!opt0)
 		{
-			psql_error(\\%s: missing required argument\n, cmd);
-			success = false;
+		   size_t i;
+		   /* list all variables */
+		   static const char *const my_list[] = {
+border, columns, expanded, fieldsep,
+footer, format, linestyle, null,
+numericlocale, pager, recordsep,
+tableattr, title, tuples_only,
+NULL };
+		   for (i = 0; my_list[i] != NULL; i++) {
+			   printPsetInfo(my_list[i], pset.popt);
+		   }
+
+		   success = true;
+
 		}
 		else
 			success = do_pset(opt0, opt1, pset.popt, pset.quiet);
@@ -2275,8 +2288,6 @@ do_pset(const char *param, const char *value, printQueryOpt *popt, bool quiet)
 			return false;
 		}
 
-		if (!quiet)
-			printf(_(Output format is %s.\n), _align2string(popt-topt.format));
 	}
 
 	/* set table line style */
@@ -2296,9 +2307,6 @@ do_pset(const char *param, const char *value, printQueryOpt *popt, bool quiet)
 			return false;
 		}
 
-		if (!quiet)
-			printf(_(Line style is %s.\n),
-   get_line_style(popt-topt)-name);
 	}
 
 	/* set border style/width */
@@ -2307,8 +2315,6 @@ do_pset(const char *param, const char *value, printQueryOpt *popt, bool quiet)
 		if (value)
 			popt-topt.border = atoi(value);
 
-		if (!quiet)
-			printf(_(Border style is %d.\n), popt-topt.border);
 	}
 
 	/* set expanded/vertical mode */
@@ -2320,15 +2326,6 @@ do_pset(const char *param, const char *value, printQueryOpt *popt, bool quiet)
 			popt-topt.expanded = ParseVariableBool(value);
 		else
 			popt-topt.expanded = !popt-topt.expanded;
-		if (!quiet)
-		{
-			if (popt-topt.expanded == 1)
-printf(_(Expanded display is on.\n));
-			else if (popt-topt.expanded == 2)
-printf(_(Expanded display is used automatically.\n));
-			else
-printf(_(Expanded display is off.\n));
-		}
 	}
 
 	/* locale-aware numeric output */
@@ -2338,13 +2335,6 @@ do_pset(const char *param, const char *value, printQueryOpt *popt, bool quiet)
 			popt-topt.numericLocale = ParseVariableBool(value);
 		else
 			popt-topt.numericLocale = !popt-topt.numericLocale;
-		if (!quiet)
-		{
-			if (popt-topt.numericLocale)
-puts(_(Showing locale-adjusted numeric output.));
-			else
-puts(_(Locale-adjusted numeric output is off.));
-		}
 	}
 
 	/* null display */
@@ -2355,8 +2345,6 @@ do_pset(const char *param, const char *value, printQueryOpt *popt, bool quiet)
 			free(popt-nullPrint);
 			popt-nullPrint = pg_strdup(value);
 		}
-		if (!quiet)
-			printf(_(Null display is \%s\.\n), popt-nullPrint ? popt-nullPrint : );
 	}
 
 	/* field separator for unaligned text */
@@ -2368,13 +2356,6 @@ do_pset(const char *param, const char *value, printQueryOpt *popt, bool quiet)
 			popt-topt.fieldSep.separator = pg_strdup(value);
 			popt-topt.fieldSep.separator_zero = false;
 		}
-		if (!quiet)
-		{
-			if (popt-topt.fieldSep.separator_zero)
-printf(_(Field separator is zero byte.\n));
-			else
-printf(_(Field separator is \%s\.\n), popt-topt.fieldSep.separator);
-		}
 	}
 
 	else if (strcmp(param, fieldsep_zero) == 0)
@@ -2382,8 +2363,6 @@ do_pset(const char *param, const char *value, printQueryOpt *popt, bool quiet)
 		free(popt-topt.fieldSep.separator);
 		popt-topt.fieldSep.separator = NULL;
 		popt-topt.fieldSep.separator_zero = true;
-		if (!quiet)
-			printf(_(Field separator is zero byte.\n));
 	}
 
 	/* record separator for unaligned text */
@@ -2395,15 +2374,6 @@ do_pset(const char *param, const char *value, printQueryOpt *popt, bool quiet)
 			popt-topt.recordSep.separator = pg_strdup(value);
 			popt-topt.recordSep.separator_zero = false;
 		}
-		if (!quiet

Re: [HACKERS] review: psql and pset without any arguments

2013-09-29 Thread Gilles Darold
Le 07/09/2013 21:22, Pavel Stehule a écrit :



 2013/9/7 Gilles Darold gilles.dar...@dalibo.com
 mailto:gilles.dar...@dalibo.com

 Le 07/09/2013 10:02, Pavel Stehule a écrit :
  Hello
 
  * patch is cleanly patchable and compilation is without warnings
  * all regression tests passed
  * no impact on dump, performance or any current features
  * no comments to programming style
  * we would this feature - it is consistent with \set and it gives a
  fast resume about psql printer settings
 
  Issues:
  1) it doesn't show linestyle when is default
 
  I looked there and it is probably a small different issue - this
 value
  is initialized as NULL as default. But I dislike a empty output in
  this case:
 
  else if (strcmp(param, linestyle) == 0)
  {
  if (!popt-topt.line_style)
  ;
  else
  printf(_(Line style is %s.\n),
 get_line_style(popt-topt)-name);
  }
 
  I propose a verbose result instead nothing:
 
  else if (strcmp(param, linestyle) == 0)
  {
  if (!popt-topt.line_style)
 printf(_(Line style is unset.\n)) ;
  else
  printf(_(Line style is %s.\n),
 get_line_style(popt-topt)-name);
  }

 +1 to show the information even if linestyle is not set but by default
 get_line_style() return ascii if linestyle is not set. So maybe
 it is
 better to  rewrite it as follow:

 else if (strcmp(param, linestyle) == 0)
 {
 printf(_(Line style is %s.\n),
get_line_style(popt-topt)-name);
 }

 This will output:

 Line style is ascii.

 when linestyle is not set or of course it is set to ascii.

  2) there is only one open question
 
 
 http://www.postgresql.org/message-id/b6f6fd62f2624c4c9916ac0175d56d880ce00...@jenmbs01.ad.intershop.net
  - there is no clean relation between output and some pset option.
 
  I don't think so Marc' proposal is ideal (these values are not a
  variables) - but maybe some enhanced output (only for this
 resume) can
  be better:
 
  postgres=# \pset
  Output format (format) is aligned.
  Border style (border) is 1.
  Expanded display (expanded) is off.
  Null display (null) is .
  Field separator (fieldsep) is |.
  Tuples only (tuples_only) is off.
  Title (title) is unset.
  Table attributes (tableattr) unset.
  Pager (pager) is used for long output.
  Record separator (recordsep) is newline.
 
  This expanded output should be used only for this resume (not when a
  option was changed or individual ask on option value)

 Yes this could be a good accommodation but I really prefer to not
 duplicate code and translation between this resume and the output when
 these options are set. If we can print the same output messages using:

 postgres=# \pset fieldsep '|'
 Field separator (fieldsep) is |.

 it could be a good compromise.


 ok

 Pavel

Hello,

Sorry for the delay; here is the new patch. The \pset output will look
like the following:

postgres=# \pset
Border style (border) is 1.
Target width (columns) unset.
Expanded display (expanded) is off.
Field separator (fieldsep) is |.
Default footer (footer) is on.
Output format (format) is aligned.
Line style (linestyle) is ascii.
Null display (null) is .
Locale-adjusted numeric output (numericlocale) is off.
Pager (pager) is used for long output.
Record separator (recordsep) is newline.
Table attributes (tableattr) unset.
Title (title) unset.
Tuples only (tuples_only) is off.

postgres=# \pset null #
Null display (null) is #.
postgres=#


This also means that all translation strings for those messages will
have to be updated.

If we don't want to modify those messages, I can provide another patch
which prints the output as follows:

postgres=# \pset
border: Border style is 1.
columns: Target width unset.
expanded: Expanded display is off.
fieldsep: Field separator is |.
footer: Default footer is on.
format: Output format is aligned.
linestyle: Line style is ascii.
null: Null display is .
numericlocale: Locale-adjusted numeric output is off.
pager: Pager is used for long output.
recordsep: Record separator is newline.
tableattr: Table attributes unset.
title: Title unset.
tuples_only: Tuples only is off.

postgres=# \pset null #
Null display is #.
postgres=#

I think the first output is better but it needs translation work. Let me
know.

Regards,

-- 
Gilles Darold
Administrateur de bases de données
http://dalibo.com - http://dalibo.org

diff --git a/doc/src/sgml/ref/psql-ref.sgml b/doc/src/sgml/ref/psql-ref.sgml
index

Re: [HACKERS] review: psql and pset without any arguments

2013-09-07 Thread Gilles Darold
Le 07/09/2013 10:02, Pavel Stehule a écrit :
 Hello

 * patch is cleanly patchable and compilation is without warnings
 * all regression tests passed
 * no impact on dump, performance or any current features
 * no comments to programming style
 * we would this feature - it is consistent with \set and it gives a
 fast resume about psql printer settings

 Issues:
 1) it doesn't show linestyle when is default

 I looked there and it is probably a small different issue - this value
 is initialized as NULL as default. But I dislike a empty output in
 this case:

 else if (strcmp(param, linestyle) == 0)
 {
 if (!popt-topt.line_style)
 ;
 else
 printf(_(Line style is %s.\n),
get_line_style(popt-topt)-name);
 }

 I propose a verbose result instead nothing:

 else if (strcmp(param, linestyle) == 0)
 {
 if (!popt-topt.line_style)
printf(_(Line style is unset.\n)) ;
 else
 printf(_(Line style is %s.\n),
get_line_style(popt-topt)-name);
 }

+1 for showing the information even if linestyle is not set, but by default
get_line_style() returns ascii if linestyle is not set. So maybe it is
better to rewrite it as follows:

else if (strcmp(param, "linestyle") == 0)
{
    printf(_("Line style is %s.\n"),
           get_line_style(&popt->topt)->name);
}

This will output:

Line style is ascii.

when linestyle is not set or, of course, when it is set to ascii.

 2) there is only one open question
 http://www.postgresql.org/message-id/b6f6fd62f2624c4c9916ac0175d56d880ce00...@jenmbs01.ad.intershop.net
 - there is no clean relation between output and some pset option.
   
 I don't think so Marc' proposal is ideal (these values are not a
 variables) - but maybe some enhanced output (only for this resume) can
 be better:

 postgres=# \pset
 Output format (format) is aligned.
 Border style (border) is 1.
 Expanded display (expanded) is off.
 Null display (null) is .
 Field separator (fieldsep) is |.
 Tuples only (tuples_only) is off.
 Title (title) is unset.
 Table attributes (tableattr) unset.
 Pager (pager) is used for long output.
 Record separator (recordsep) is newline.

 This expanded output should be used only for this resume (not when a
 option was changed or individual ask on option value)

Yes, this could be a good arrangement, but I really prefer not to
duplicate code and translations between this summary and the output when
these options are set. If we can print the same output messages using:

postgres=# \pset fieldsep '|'
Field separator (fieldsep) is |.

it could be a good compromise.

Regards,

-- 
Gilles Darold
Administrateur de bases de données
http://dalibo.com - http://dalibo.org





Re: [HACKERS] psql and pset without any arguments

2013-09-02 Thread Gilles Darold
Patch added to current open commitfest under the Client section with title:

   Call \pset without any arguments displays current status of all
printing options

Status: Need review.

Let me know if it should not be there.

Regards,

Le 29/06/2013 01:08, Gilles Darold a écrit :
 Hi,

 I was looking at psql 8.3 documention about \pset options and saw that
 there was the following note :

 Note: It is an error to call \pset without any arguments. In the
 future this case might show the current status of all printing options.

 I looked backward and forward to find that this note is present in all
 versions since 7.1 up to 9.3, maybe it is time to add this little feature.

 I've attached a patch to add the usage of the \pset command without any
 arguments to displays current status of all printing options instead of
 the error message. Here is a sample output:

 (postgres@[local]:5494) [postgres]  \pset
 Output format is aligned.
 Border style is 2.
 Expanded display is used automatically.
 Null display is NULL.
 Field separator is |.
 Tuples only is off.
 Title is unset.
 Table attributes unset.
 Line style is unicode.
 Pager is used for long output.
 Record separator is newline.
 (postgres@[local]:5494) [postgres] 

 To avoid redundant code I've added a new method printPsetInfo() so that
 do_pset() and exec_command() will used the same output message, they are
 all in src/bin/psql/command.c. For example:

 (postgres@[local]:5494) [postgres]  \pset null 'NULL'
 Null display is NULL.
 (postgres@[local]:5494) [postgres] 

 The patch print all variables information from struct printTableOpt when
 \pset is given without any arguments and also update documentation.

 Let me know if there's any additional work to do on this basic patch or
 something that I've omitted.

 Best regards,

-- 
Gilles Darold
http://dalibo.com - http://dalibo.org





Re: [HACKERS] psql and pset without any arguments

2013-06-29 Thread Gilles Darold
Le 29/06/2013 13:55, Erik Rijkers a écrit :
 On Sat, June 29, 2013 01:08, Gilles Darold wrote:
  Here is a sample output:

 (postgres@[local]:5494) [postgres]  \pset
 Output format is aligned.
 Border style is 2.
 Expanded display is used automatically.
 Null display is NULL.
 Field separator is |.
 Tuples only is off.
 Title is unset.
 Table attributes unset.
 Line style is unicode.
 Pager is used for long output.
 Record separator is newline.
 (postgres@[local]:5494) [postgres] 

 +1

 This seems handy.  Maybe it could be improved
 a bit with the keyboard shortcuts prefixed, like so:

 (postgres@[local]:5494) [postgres]  \pset
 \a  Output format is aligned.
 \x  Expanded display is used automatically.
 \f  Field separator is |.
 \t  Tuples only is off.
 \C  Title is unset.
 \T  Table attributes unset.
 Border style is 2.
 Line style is unicode.
 Null display is NULL.
 Pager is used for long output.
 Record separator is newline.


 So that it also serves a reminder on how to subsequently
 change them

My first thought was to print something like the \set output, but why not
reuse the original code/output when \pset is used?

This second choice has three main advantages:

 * The information shown is the same everywhere
 * Backward compatibility with the \pset output
 * No code redundancy

About the shortcuts, I agree with Pavel that it is less readable and it
is already in the help; \? is the big reminder :-)

Regards,

-- 
Gilles Darold
http://dalibo.com - http://dalibo.org





[HACKERS] psql and pset without any arguments

2013-06-28 Thread Gilles Darold
Hi,

I was looking at the psql 8.3 documentation about \pset options and saw
that there was the following note:

Note: It is an error to call \pset without any arguments. In the
future this case might show the current status of all printing options.

I looked backward and forward and found that this note is present in all
versions from 7.1 up to 9.3; maybe it is time to add this little feature.

I've attached a patch to add the usage of the \pset command without any
arguments to display the current status of all printing options instead
of the error message. Here is a sample output:

(postgres@[local]:5494) [postgres]  \pset
Output format is aligned.
Border style is 2.
Expanded display is used automatically.
Null display is NULL.
Field separator is |.
Tuples only is off.
Title is unset.
Table attributes unset.
Line style is unicode.
Pager is used for long output.
Record separator is newline.
(postgres@[local]:5494) [postgres] 

To avoid redundant code I've added a new function, printPsetInfo(), so
that do_pset() and exec_command() use the same output messages; they are
all in src/bin/psql/command.c. For example:

(postgres@[local]:5494) [postgres]  \pset null 'NULL'
Null display is NULL.
(postgres@[local]:5494) [postgres] 

The patch prints all the variable information from struct printTableOpt
when \pset is given without any arguments, and also updates the
documentation.

Let me know if there's any additional work to do on this basic patch or
something that I've omitted.

Best regards,

-- 
Gilles Darold
http://dalibo.com - http://dalibo.org

diff --git a/doc/src/sgml/ref/psql-ref.sgml b/doc/src/sgml/ref/psql-ref.sgml
index 574db5c..a3bf555 100644
--- a/doc/src/sgml/ref/psql-ref.sgml
+++ b/doc/src/sgml/ref/psql-ref.sgml
@@ -2272,13 +2272,9 @@ lo_import 152801
 /para
 /tip
 
-note
-para
-It is an error to call command\pset/command without any
-arguments. In the future this case might show the current status
-of all printing options.
+paracommand\pset/ without any arguments displays current status
+ of all printing options.
 /para
-/note
 
 /listitem
   /varlistentry
diff --git a/src/bin/psql/command.c b/src/bin/psql/command.c
index 351e684..daf7ac7 100644
--- a/src/bin/psql/command.c
+++ b/src/bin/psql/command.c
@@ -68,6 +68,7 @@ static int	strip_lineno_from_funcdesc(char *func);
 static void minimal_error_message(PGresult *res);
 
 static void printSSLInfo(void);
+static bool printPsetInfo(const char *param, struct printQueryOpt *popt);
 
 #ifdef WIN32
 static void checkWin32Codepage(void);
@@ -1045,8 +1046,16 @@ exec_command(const char *cmd,
 
 		if (!opt0)
 		{
-			psql_error(\\%s: missing required argument\n, cmd);
-			success = false;
+			size_t i;
+			/* list all variables */
+			static const char *const my_list[] = {format, border,
+expanded, null, fieldsep, tuples_only, title,
+tableattr, linestyle, pager, recordsep, NULL};
+			for (i = 0; my_list[i] != NULL; i++) {
+printPsetInfo(my_list[i], pset.popt);
+			}
+
+			success = true;
 		}
 		else
 			success = do_pset(opt0, opt1, pset.popt, pset.quiet);
@@ -2275,8 +2284,6 @@ do_pset(const char *param, const char *value, printQueryOpt *popt, bool quiet)
 			return false;
 		}
 
-		if (!quiet)
-			printf(_(Output format is %s.\n), _align2string(popt-topt.format));
 	}
 
 	/* set table line style */
@@ -2295,10 +2302,6 @@ do_pset(const char *param, const char *value, printQueryOpt *popt, bool quiet)
 			psql_error(\\pset: allowed line styles are ascii, old-ascii, unicode\n);
 			return false;
 		}
-
-		if (!quiet)
-			printf(_(Line style is %s.\n),
-   get_line_style(popt-topt)-name);
 	}
 
 	/* set border style/width */
@@ -2306,9 +2309,6 @@ do_pset(const char *param, const char *value, printQueryOpt *popt, bool quiet)
 	{
 		if (value)
 			popt-topt.border = atoi(value);
-
-		if (!quiet)
-			printf(_(Border style is %d.\n), popt-topt.border);
 	}
 
 	/* set expanded/vertical mode */
@@ -2320,15 +2320,6 @@ do_pset(const char *param, const char *value, printQueryOpt *popt, bool quiet)
 			popt-topt.expanded = ParseVariableBool(value);
 		else
 			popt-topt.expanded = !popt-topt.expanded;
-		if (!quiet)
-		{
-			if (popt-topt.expanded == 1)
-printf(_(Expanded display is on.\n));
-			else if (popt-topt.expanded == 2)
-printf(_(Expanded display is used automatically.\n));
-			else
-printf(_(Expanded display is off.\n));
-		}
 	}
 
 	/* locale-aware numeric output */
@@ -2338,13 +2329,6 @@ do_pset(const char *param, const char *value, printQueryOpt *popt, bool quiet)
 			popt-topt.numericLocale = ParseVariableBool(value);
 		else
 			popt-topt.numericLocale = !popt-topt.numericLocale;
-		if (!quiet)
-		{
-			if (popt-topt.numericLocale)
-puts(_(Showing locale-adjusted numeric output.));
-			else
-puts(_(Locale

[HACKERS] pg_dump restore error

2012-10-14 Thread Gilles Darold
Hello,

Restoring a backup generated with pg_dump/pg_dumpall in plain text
format and the --clean option will report errors if the backup is loaded
into another or an empty database. I mean that the backup file contains
all the SQL statements to drop the database's objects before recreating
them, so if you load this backup into a new database it will throw an
error on each DROP call, complaining that the object doesn't exist.

This is not very important because everything else goes fine, but these
error reports can easily be prevented by adding IF EXISTS clauses, and
this will probably be less confusing. I've attached a patch adding
IF EXISTS to each DROP and ALTER statement.
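
The difference is easy to see on an empty database (the object name is
just an example):

DROP TABLE missing_table;
-- ERROR:  table "missing_table" does not exist
DROP TABLE IF EXISTS missing_table;
-- NOTICE:  table "missing_table" does not exist, skipping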

Best regards,

-- 
Gilles Darold
http://dalibo.com - http://dalibo.org

diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 9920d96..e61cdd6 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -1965,7 +1965,7 @@ dumpDatabase(Archive *fout)
 
 	}
 
-	appendPQExpBuffer(delQry, DROP DATABASE %s;\n,
+	appendPQExpBuffer(delQry, DROP DATABASE IF EXISTS %s;\n,
 	  fmtId(datname));
 
 	dbDumpId = createDumpId();
@@ -7345,7 +7345,7 @@ dumpNamespace(Archive *fout, NamespaceInfo *nspinfo)
 
 	qnspname = pg_strdup(fmtId(nspinfo-dobj.name));
 
-	appendPQExpBuffer(delq, DROP SCHEMA %s;\n, qnspname);
+	appendPQExpBuffer(delq, DROP SCHEMA IF EXISTS %s;\n, qnspname);
 
 	appendPQExpBuffer(q, CREATE SCHEMA %s;\n, qnspname);
 
@@ -7404,7 +7404,7 @@ dumpExtension(Archive *fout, ExtensionInfo *extinfo)
 
 	qextname = pg_strdup(fmtId(extinfo-dobj.name));
 
-	appendPQExpBuffer(delq, DROP EXTENSION %s;\n, qextname);
+	appendPQExpBuffer(delq, DROP EXTENSION IF EXISTS %s;\n, qextname);
 
 	if (!binary_upgrade)
 	{
@@ -7575,7 +7575,7 @@ dumpEnumType(Archive *fout, TypeInfo *tyinfo)
 	 * CASCADE shouldn't be required here as for normal types since the I/O
 	 * functions are generic and do not get dropped.
 	 */
-	appendPQExpBuffer(delq, DROP TYPE %s.,
+	appendPQExpBuffer(delq, DROP TYPE IF EXISTS %s.,
 	  fmtId(tyinfo-dobj.namespace-dobj.name));
 	appendPQExpBuffer(delq, %s;\n,
 	  fmtId(tyinfo-dobj.name));
@@ -7697,7 +7697,7 @@ dumpRangeType(Archive *fout, TypeInfo *tyinfo)
 	 * CASCADE shouldn't be required here as for normal types since the I/O
 	 * functions are generic and do not get dropped.
 	 */
-	appendPQExpBuffer(delq, DROP TYPE %s.,
+	appendPQExpBuffer(delq, DROP TYPE IF EXISTS %s.,
 	  fmtId(tyinfo-dobj.namespace-dobj.name));
 	appendPQExpBuffer(delq, %s;\n,
 	  fmtId(tyinfo-dobj.name));
@@ -8029,7 +8029,7 @@ dumpBaseType(Archive *fout, TypeInfo *tyinfo)
 	 * the type and its I/O functions makes it impossible to drop the type any
 	 * other way.
 	 */
-	appendPQExpBuffer(delq, DROP TYPE %s.,
+	appendPQExpBuffer(delq, DROP TYPE IF EXISTS %s.,
 	  fmtId(tyinfo-dobj.namespace-dobj.name));
 	appendPQExpBuffer(delq, %s CASCADE;\n,
 	  fmtId(tyinfo-dobj.name));
@@ -8281,7 +8281,7 @@ dumpDomain(Archive *fout, TypeInfo *tyinfo)
 	/*
 	 * DROP must be fully qualified in case same name appears in pg_catalog
 	 */
-	appendPQExpBuffer(delq, DROP DOMAIN %s.,
+	appendPQExpBuffer(delq, DROP DOMAIN IF EXISTS %s.,
 	  fmtId(tyinfo-dobj.namespace-dobj.name));
 	appendPQExpBuffer(delq, %s;\n,
 	  fmtId(tyinfo-dobj.name));
@@ -8794,7 +8794,7 @@ dumpProcLang(Archive *fout, ProcLangInfo *plang)
 	else
 		lanschema = NULL;
 
-	appendPQExpBuffer(delqry, DROP PROCEDURAL LANGUAGE %s;\n,
+	appendPQExpBuffer(delqry, DROP PROCEDURAL LANGUAGE IF EXISTS %s;\n,
 	  qlanname);
 
 	if (useParams)
@@ -9335,7 +9335,7 @@ dumpFunc(Archive *fout, FuncInfo *finfo)
 	/*
 	 * DROP must be fully qualified in case same name appears in pg_catalog
 	 */
-	appendPQExpBuffer(delqry, DROP FUNCTION %s.%s;\n,
+	appendPQExpBuffer(delqry, DROP FUNCTION IF EXISTS %s.%s;\n,
 	  fmtId(finfo-dobj.namespace-dobj.name),
 	  funcsig);
 
@@ -9553,7 +9553,7 @@ dumpCast(Archive *fout, CastInfo *cast)
 	delqry = createPQExpBuffer();
 	labelq = createPQExpBuffer();
 
-	appendPQExpBuffer(delqry, DROP CAST (%s AS %s);\n,
+	appendPQExpBuffer(delqry, DROP CAST IF EXISTS (%s AS %s);\n,
 	getFormattedTypeName(fout, cast-castsource, zeroAsNone),
    getFormattedTypeName(fout, cast-casttarget, zeroAsNone));
 
@@ -9823,7 +9823,7 @@ dumpOpr(Archive *fout, OprInfo *oprinfo)
 	/*
 	 * DROP must be fully qualified in case same name appears in pg_catalog
 	 */
-	appendPQExpBuffer(delq, DROP OPERATOR %s.%s;\n,
+	appendPQExpBuffer(delq, DROP OPERATOR IF EXISTS %s.%s;\n,
 	  fmtId(oprinfo-dobj.namespace-dobj.name),
 	  oprid-data);
 
@@ -10109,7 +10109,7 @@ dumpOpclass(Archive *fout, OpclassInfo *opcinfo)
 	/*
 	 * DROP must be fully qualified in case same name appears in pg_catalog
 	 */
-	appendPQExpBuffer(delq, DROP OPERATOR CLASS %s,
+	appendPQExpBuffer(delq, DROP OPERATOR CLASS IF EXISTS %s,
 	  fmtId(opcinfo-dobj.namespace-dobj.name));
 	appendPQExpBuffer(delq, .%s,
 	  fmtId(opcinfo-dobj.name));
@@ -10553,7

Re: [HACKERS] pg_dump restore error

2012-10-14 Thread Gilles Darold
Le 14/10/2012 18:50, Tom Lane a écrit :
 Gilles Darold gilles.dar...@dalibo.com writes:
 Restoring a backup generated with pg_dump/pg_dumpall in plain text
 format and the --clean option will report errors if the backup is loaded
 in an other or empty database.
 This is intentional.  What you propose amounts to a fundamental change
 in the semantics of --clean, and I don't think that it's really much
 of an improvement.

I agree that this is not an improvement; I am used to working with it
because I know these are harmless error messages. The request comes from
a client who complained that replaying his backup on a new server always
produced those errors.

I was just thinking that cleaning with IF EXISTS was also harmless, but
it is probably better to have the errors reported in most cases.

Regards,

-- 
Gilles Darold
Administrateur de bases de données
http://dalibo.com - http://dalibo.org





Re: [HACKERS] Patch pg_is_in_backup()

2012-05-14 Thread Gilles Darold
Sorry for the double post, but it seems that my previous reply did not
reach the pgsql-hackers list. So here are the new patches that limit
lines to 80 characters.
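
For reference, the intended usage once the patches are applied is simply
(in a session while an exclusive backup is running):

SELECT pg_start_backup('nightly');
SELECT pg_is_in_backup();       -- expected to return true while backup_label exists
SELECT pg_backup_start_time();  -- the START TIME recorded in backup_label
SELECT pg_stop_backup();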

Regards,

Le 02/05/2012 19:53, Gabriele Bartolini a écrit :
 Hi Gilles,

Sorry for the delay.

 Il 03/04/12 14:21, Gilles Darold ha scritto:
 +1, this is also my point of view.

I have looked at the patch that contains both pg_is_in_backup() and
 pg_backup_start_time().

From a functional point of view it looks fine to me. I was thinking
 of adding the BackupInProgress() at the beginning of
 pg_backup_start_time(), but the AllocateFile() function already make
 sure the file exists.

I have performed some basic testing of both functions and tried to
 inject invalid characters in the start time field of the backup_label
 file and it is handled (with an exception) by the server. Cool.

I spotted though some formatting issues, in particular indentation
 and multi-line comments. Some rows are longer than 80 chars.

Please resubmit with these cosmetic changes and it is fine with me.
 Thank you.

 Cheers,
 Gabriele



-- 
Gilles Darold
Administrateur de bases de données
http://dalibo.com - http://dalibo.org

diff -ru postgresql-head/doc/src/sgml/func.sgml postgresql/doc/src/sgml/func.sgml
--- postgresql-head/doc/src/sgml/func.sgml	2012-04-03 10:44:43.979702643 +0200
+++ postgresql/doc/src/sgml/func.sgml	2012-04-03 11:21:15.075701574 +0200
@@ -14467,6 +14467,9 @@
 primarypg_stop_backup/primary
/indexterm
indexterm
+primarypg_backup_start_time/primary
+   /indexterm
+   indexterm
 primarypg_switch_xlog/primary
/indexterm
indexterm
@@ -14532,6 +14535,13 @@
   /row
   row
entry
+literalfunctionpg_backup_start_time()/function/literal
+/entry
+   entrytypetimestamp with time zone/type/entry
+   entryGet start time of an online exclusive backup in progress./entry
+  /row
+  row
+   entry
 literalfunctionpg_switch_xlog()/function/literal
 /entry
entrytypetext/type/entry
diff -ru postgresql-head/src/backend/access/transam/xlogfuncs.c postgresql/src/backend/access/transam/xlogfuncs.c
--- postgresql-head/src/backend/access/transam/xlogfuncs.c	2012-04-03 10:44:44.227702645 +0200
+++ postgresql/src/backend/access/transam/xlogfuncs.c	2012-04-03 11:21:15.075701574 +0200
@@ -29,6 +29,7 @@
 #include utils/numeric.h
 #include utils/guc.h
 #include utils/timestamp.h
+#include storage/fd.h
 
 
 static void validate_xlog_location(char *str);
@@ -563,3 +564,67 @@
 
 	PG_RETURN_NUMERIC(result);
 }
+
+/*
+ * Returns start time of an online exclusive backup.
+ *
+ * When there's no exclusive backup in progress, the function
+ * returns NULL.
+ */
+Datum
+pg_backup_start_time(PG_FUNCTION_ARGS)
+{
+	TimestampTzxtime;
+	FILE*lfp;
+	charfline[MAXPGPATH];
+	char   backup_start_time[30];
+
+	/*
+	* See if label file is present
+	*/
+	lfp = AllocateFile(BACKUP_LABEL_FILE, r);
+	if (lfp == NULL)
+	{
+	   if (errno != ENOENT)
+		   ereport(ERROR,
+(errcode_for_file_access(),
+errmsg(could not read file \%s\: %m,
+	BACKUP_LABEL_FILE)));
+	   PG_RETURN_NULL();
+	}
+
+	/*
+	 * Parse the file to find the the START TIME line.
+	 */
+	backup_start_time[0] = '\0';
+	while (fgets(fline, sizeof(fline), lfp) != NULL)
+	{
+	   if (sscanf(fline, START TIME: %25[^\n]\n,
+		backup_start_time) == 1)
+			break;
+	}
+
+	/*
+	* Close the backup label file.
+	*/
+	if (ferror(lfp) || FreeFile(lfp)) {
+		ereport(ERROR,
+			(errcode_for_file_access(),
+			errmsg(could not read file \%s\: %m,
+			BACKUP_LABEL_FILE)));
+	}
+	
+	if (strlen(backup_start_time) == 0) {
+		ereport(ERROR,
+			(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+			errmsg(invalid data in file \%s\,
+			BACKUP_LABEL_FILE)));
+	}
+
+	/*
+	* Convert the time string read from file to TimestampTz form.
+	*/
+	xtime = DatumGetTimestampTz(
+		DirectFunctionCall3(timestamptz_in,
+			CStringGetDatum(backup_start_time),
+			ObjectIdGetDatum(InvalidOid),Int32GetDatum(-1))
+	);
+
+	PG_RETURN_TIMESTAMPTZ(xtime);
+}
+
diff -ru postgresql-head/src/include/access/xlog_internal.h postgresql/src/include/access/xlog_internal.h
--- postgresql-head/src/include/access/xlog_internal.h	2012-04-03 10:44:45.051702642 +0200
+++ postgresql/src/include/access/xlog_internal.h	2012-04-03 11:21:15.075701574 +0200
@@ -282,5 +282,6 @@
 extern Datum pg_xlog_replay_resume(PG_FUNCTION_ARGS);
 extern Datum pg_is_xlog_replay_paused(PG_FUNCTION_ARGS);
 extern Datum pg_xlog_location_diff(PG_FUNCTION_ARGS);
+extern Datum pg_backup_start_time(PG_FUNCTION_ARGS);
 
 #endif   /* XLOG_INTERNAL_H */
diff -ru postgresql-head/src/include/catalog/pg_proc.h postgresql/src/include/catalog/pg_proc.h
--- postgresql-head/src/include/catalog/pg_proc.h	2012-04-03 10:44:45.051702642 +0200
+++ postgresql/src/include/catalog/pg_proc.h	2012-04-03 11:21:15.075701574 +0200

Re: [HACKERS] Patch pg_is_in_backup()

2012-04-03 Thread Gilles Darold
Hi Gabriele,

Le 31/03/2012 14:25, Gabriele Bartolini a écrit :
 Hi Gilles,

first and foremost, sorry for jumping in this thread so late. I
 read all previous discussions and I'd be happy to help you with this
 patch.

 Agreed and sorry for the response delay. I've attached 2 patches
 here, the first one is the same as before with just the renaming of
 the function into pg_is_in_exclusive_backup().

My quick response: I really like the idea of having a function that
 simply returns a boolean value. To be honest, I would have called it
 pg_is_in_backup() as you originally did - after all, I do not execute
 pg_start_exclusive_backup() or pg_stop_exclusive_backup(). It is more
 intuitive and pairs with pg_is_in_recovery(). I have never found any
 mention whatsoever in the documentation that talks about exclusive
 backup, and I am afraid it would generate confusion.

+1, this is also my point of view.

However, I leave this up to the rest of the community's judgement
 (here is my opinion).

I agree also that a function that returns the timestamp of the
 start of the backup might be useful. My opinion with the 'exclusive'
 keyword applies here too (I would simply name it pg_backup_start_time()).

Ok, I've reverted the name to pg_is_in_backup() and
pg_backup_start_time(), if at the end the functions names have to be
changed a simple replacement in the patch will be easy to do.

Finally, I tried to apply the patch but it looks like we need a new
 version that can apply with current HEAD. If you can do that, I am
 happy to assist with the review.

I've attach 3 patches, the first one include both functions, the others
are patches per function. They are all from the current HEAD branch, if
they don't work please help me with the correct git/diff command to perform.

Thank you, regards,

-- 
Gilles Darold
http://dalibo.com - http://dalibo.org

diff -ru postgresql-head/doc/src/sgml/func.sgml postgresql/doc/src/sgml/func.sgml
--- postgresql-head/doc/src/sgml/func.sgml	2012-04-03 10:44:43.979702643 +0200
+++ postgresql/doc/src/sgml/func.sgml	2012-04-03 12:00:13.099700433 +0200
@@ -14467,6 +14467,12 @@
 primarypg_stop_backup/primary
/indexterm
indexterm
+primarypg_is_in_backup/primary
+   /indexterm
+   indexterm
+primarypg_backup_start_time/primary
+   /indexterm
+   indexterm
 primarypg_switch_xlog/primary
/indexterm
indexterm
@@ -14532,6 +14538,20 @@
   /row
   row
entry
+literalfunctionpg_is_in_backup()/function/literal
+/entry
+   entrytypebool/type/entry
+   entryTrue if an on-line exclusive backup is still in progress./entry
+  /row
+  row
+   entry
+literalfunctionpg_backup_start_time()/function/literal
+/entry
+   entrytypetimestamp with time zone/type/entry
+   entryGet start time of an on-line exclusive backup in progress./entry
+  /row
+  row
+   entry
 literalfunctionpg_switch_xlog()/function/literal
 /entry
entrytypetext/type/entry
diff -ru postgresql-head/src/backend/access/transam/xlogfuncs.c postgresql/src/backend/access/transam/xlogfuncs.c
--- postgresql-head/src/backend/access/transam/xlogfuncs.c	2012-04-03 10:44:44.227702645 +0200
+++ postgresql/src/backend/access/transam/xlogfuncs.c	2012-04-03 12:02:51.043700358 +0200
@@ -29,7 +29,7 @@
 #include utils/numeric.h
 #include utils/guc.h
 #include utils/timestamp.h
-
+#include storage/fd.h
 
 static void validate_xlog_location(char *str);
 
@@ -563,3 +563,75 @@
 
 	PG_RETURN_NUMERIC(result);
 }
+
+/*
+ * Returns bool with current on-line backup mode, a global state.
+ */
+Datum
+pg_is_in_backup(PG_FUNCTION_ARGS)
+{
+   PG_RETURN_BOOL(BackupInProgress());
+}
+
+/*
+ * Returns start time of an online exclusive backup.
+ *
+ * When there's no exclusive backup in progress, the function
+ * returns NULL.
+ */
+Datum
+pg_backup_start_time(PG_FUNCTION_ARGS)
+{
+	TimestampTzxtime;
+	FILE*lfp;
+	charfline[MAXPGPATH];
+	char   backup_start_time[30];
+
+	/*
+	* See if label file is present
+	*/
+	lfp = AllocateFile(BACKUP_LABEL_FILE, r);
+	if (lfp == NULL)
+	{
+	  if (errno != ENOENT)
+		  ereport(ERROR,
+			   (errcode_for_file_access(),
+			   errmsg(could not read file \%s\: %m, BACKUP_LABEL_FILE)));
+	  PG_RETURN_NULL();
+	}
+
+	/*
+	* Parse the file to find the the START TIME line.
+	*/
+	backup_start_time[0] = '\0';
+	while (fgets(fline, sizeof(fline), lfp) != NULL)
+	{
+	  if (sscanf(fline, START TIME: %25[^\n]\n, backup_start_time) == 1)
+		   break;
+	}
+
+	/*
+	* Close the backup label file.
+	*/
+	if (ferror(lfp) || FreeFile(lfp)) {
+	   ereport(ERROR,
+		   (errcode_for_file_access(),
+		   errmsg(could not read file \%s\: %m, BACKUP_LABEL_FILE)));
+	}
+
+	if (strlen(backup_start_time) == 0) {
+	   ereport(ERROR,
+		   (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE

Re: [HACKERS] Patch pg_is_in_backup()

2012-03-02 Thread Gilles Darold
Le 03/02/2012 10:52, Magnus Hagander a écrit :
 On Fri, Feb 3, 2012 at 10:47, Fujii Masao masao.fu...@gmail.com wrote:
 On Fri, Feb 3, 2012 at 6:10 PM, Bernd Helmle maili...@oopsware.de wrote:

 --On 3. Februar 2012 13:21:11 +0900 Fujii Masao masao.fu...@gmail.com
 wrote:

 It seems to be more user-friendly to introduce a view like pg_stat_backup
 rather than the function returning an array.

 I like this idea. A use case i saw for monitoring backup_label's in the
 past, was mainly to discover a forgotten exclusive pg_stop_backup() (e.g.
 due to broken backup scripts). If the view would be able to distinguish
 both, exclusive and non-exclusive backups, this would be great.
 Agreed. Monitoring an exclusive backup is very helpful. But I wonder
 why we want to monitor non-exclusive backup. Is there any use case?
 Actually, we can already monitor much of the non-exclusive one through
 pg_stat_replication. Including the info on when it was started (at
 least in almost every case, that will be more or less the
 backend_start time for that one)

 If we want to monitor non-exclusive backup, why not pg_dump backup?
 .. which we can also monitor though pg_stat_activity by looking at
 application_name (which can be faked of course, but still)

 If there is no use case, it seems sufficient to implement the function
 which reports the information only about exclusive backup.
 Yeah, thinking more of it, i think I agree. But the function should
 then probably be named in such a way that it's clear that we're
 talking about exclusive backups, e.g. not pg_is_in_backup() but
 instead pg_is_in_exclusive_backup() (renamed if we change it to return
 the timestamp instead, of course, but you get the idea)

Agreed and sorry for the response delay. I've attached 2 patches here,
the first one is the same as before with just the renaming of the
function into pg_is_in_exclusive_backup(). Maybe this patch should be
abandoned in favor of the second one which introduce a new function
called pg_exclusive_backup_start_time() that return the backup_label
START TIME information as a timestamp with timezone.

Sample usage/result:

postgres=# select pg_start_backup('Online backup');
 pg_start_backup
-
 0/220
(1 ligne)

postgres=# select pg_exclusive_backup_start_time();
 pg_exclusive_backup_start_time

 2012-03-02 14:52:49+01
(1 ligne)

postgres=# select now() - pg_exclusive_backup_start_time() as
backup_started_since;
  backup_started_since

 00:12:24.569226
(1 ligne)

postgres=# select pg_stop_backup();
 pg_stop_backup

 0/298
(1 ligne)

postgres=# select pg_exclusive_backup_start_time();
 pg_exclusive_backup_start_time


(1 ligne)

Regards,

-- 
Gilles Darold
http://dalibo.com - http://dalibo.org

diff -ru postgresql/doc/src/sgml/func.sgml postgresql-is_in_exclusive_backup-patch//doc/src/sgml/func.sgml
--- postgresql/doc/src/sgml/func.sgml	2012-01-26 23:01:31.956613398 +0100
+++ postgresql-is_in_exclusive_backup-patch//doc/src/sgml/func.sgml	2012-03-01 10:58:20.556472776 +0100
@@ -14371,6 +14371,9 @@
 primarypg_current_xlog_location/primary
/indexterm
indexterm
+primarypg_is_in_exclusive_backup/primary
+   /indexterm
+   indexterm
 primarypg_start_backup/primary
/indexterm
indexterm
@@ -14424,6 +14427,14 @@
   /row
   row
entry
+literalfunctionpg_is_in_exclusive_backup()/function/literal
+/entry
+   entrytypebool/type/entry
+   entryTrue if an on-line exclusive backup is still in progress.
+   /entry
+  /row
+  row
+   entry
 literalfunctionpg_start_backup(parameterlabel/ typetext/ optional, parameterfast/ typeboolean/ /optional)/function/literal
 /entry
entrytypetext/type/entry
diff -ru postgresql/src/backend/access/transam/xlogfuncs.c postgresql-is_in_exclusive_backup-patch//src/backend/access/transam/xlogfuncs.c
--- postgresql/src/backend/access/transam/xlogfuncs.c	2012-01-26 23:01:32.032613398 +0100
+++ postgresql-is_in_exclusive_backup-patch//src/backend/access/transam/xlogfuncs.c	2012-03-01 10:48:48.056473056 +0100
@@ -465,3 +465,12 @@
 {
 	PG_RETURN_BOOL(RecoveryInProgress());
 }
+
+/*
+ * Returns bool with current on-line backup mode, a global state.
+ */
+Datum
+pg_is_in_exclusive_backup(PG_FUNCTION_ARGS)
+{
+	PG_RETURN_BOOL(BackupInProgress());
+}
diff -ru postgresql/src/include/access/xlog_internal.h postgresql-is_in_exclusive_backup-patch//src/include/access/xlog_internal.h
--- postgresql/src/include/access/xlog_internal.h	2012-01-26 23:01:32.332613398 +0100
+++ postgresql-is_in_exclusive_backup-patch//src/include/access/xlog_internal.h	2012-03-01 10:48:48.076473056 +0100
@@ -281,5 +281,6 @@
 extern Datum pg_xlog_replay_pause(PG_FUNCTION_ARGS);
 extern Datum pg_xlog_replay_resume(PG_FUNCTION_ARGS

Re: [HACKERS] Patch pg_is_in_backup()

2012-02-02 Thread Gilles Darold
Le 02/02/2012 12:23, Marti Raudsepp a écrit :
 On Mon, Jan 30, 2012 at 20:33, Gilles Darold gilles.dar...@dalibo.com wrote:
 After some time searching for a Pg system administration function like
 pg_is_in_recovery(), let's say pg_is_in_backup(), I couldn't find one.
 The minor patch attached here adds this administrative function that can
 be used with others backup control functions. This is a very little
 patch outside documentation because the function is only a wrapper to
 the internal BackupInProgress() function, just like pg_is_in_recovery()
 is calling RecoveryInProgress().
 I think it would be more useful to have a function that returns a
 timestamp when the backup started. That way, it's possible to write a
 generic monitoring script that alerts the sysadmin only when a backup
 has been running for too long, but is silent for well-behaved backups.

For now the internal function BackupInProgress() only return a boolean.
I like the idea that pg_is_in_backup() just works as pg_is_in_recovery()
do. I mean it doesn't return the timestamp of the recovery starting time
but only true or false.

Maybe we can have an other function that will return all information
available in the backup_label file ?

Regards,

-- 
Gilles Darold
http://dalibo.com - http://dalibo.org


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Patch pg_is_in_backup()

2012-01-31 Thread Gilles Darold
Le 30/01/2012 22:14, Euler Taveira de Oliveira a écrit :
 On 30-01-2012 15:33, Gilles Darold wrote:
 Yesterday I was facing a little issue with a backup software (NetBackup)
 that do not report error when a post backup script is run. The problem
 is that this script execute pg_stop_backup() and if there's any failure
 PostgreSQL keeps running in on-line backup mode. So the backup is not
 completed and the next one too because the call to pg_start_backup()
 will fail.
 I use something similar to your pl/PgSQL version and was about to propose
 something along these lines to core. +1 to include it. Please, add your patch
 to the next CF.

Ok, it has been added to next CF under the System administration new
topic I've just created. I'm not sure about that, so feel free to change
it or let me know.

Best regards,

-- 
Gilles Darold
http://dalibo.com - http://dalibo.org


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Patch pg_is_in_backup()

2012-01-30 Thread Gilles Darold
Hello,

Yesterday I was facing a little issue with a backup software (NetBackup)
that do not report error when a post backup script is run. The problem
is that this script execute pg_stop_backup() and if there's any failure
PostgreSQL keeps running in on-line backup mode. So the backup is not
completed and the next one too because the call to pg_start_backup()
will fail.

After some time searching for a Pg system administration function like
pg_is_in_recovery(), let's say pg_is_in_backup(), I couldn't find one.
The minor patch attached here adds this administrative function that can
be used with others backup control functions. This is a very little
patch outside documentation because the function is only a wrapper to
the internal BackupInProgress() function, just like pg_is_in_recovery()
is calling RecoveryInProgress().

I hope it could be included as it could help to save time for some
sysadmin who want to monitor on-line backup process and that do not have
SQL knowledge. Saying that it's always possible to check if the
backup_label file exist under PGDATA but this need to be checked on the
host with postgres priviledge. The other way is to create a plpgsql
function with security definer that will have the same effect than the
patch above except that it need some SQL knowledge. I've give it here
for web search, thanks to Guillaume and Marc.

CREATE OR REPLACE FUNCTION pg_is_in_backup ( )
RETURNS BOOLEAN AS $$
DECLARE is_exists BOOLEAN;
BEGIN
-- Set a secure search_path: trusted schemas, then 'pg_temp'.
PERFORM pg_catalog.set_config('search_path', 'pg_temp', true);
SELECT ((pg_stat_file('backup_label')).modification IS NOT NULL)
INTO is_exists;
RETURN is_exists;
EXCEPTION
WHEN undefined_file THEN
  RETURN false;
END
$$ LANGUAGE plpgsql SECURITY DEFINER;

Regards,

-- 
Gilles Darold
http://dalibo.com - http://dalibo.org

diff -ru postgresql/doc/src/sgml/func.sgml postgresql-is_in_backup-patch/doc/src/sgml/func.sgml
--- postgresql/doc/src/sgml/func.sgml	2012-01-26 23:01:31.956613398 +0100
+++ postgresql-is_in_backup-patch/doc/src/sgml/func.sgml	2012-01-26 23:14:29.468613019 +0100
@@ -14371,6 +14371,9 @@
 primarypg_current_xlog_location/primary
/indexterm
indexterm
+primarypg_is_in_backup/primary
+   /indexterm
+   indexterm
 primarypg_start_backup/primary
/indexterm
indexterm
@@ -14424,6 +14427,14 @@
   /row
   row
entry
+literalfunctionpg_is_in_backup()/function/literal
+/entry
+   entrytypebool/type/entry
+   entryTrue if an on-line backup is still in progress.
+   /entry
+  /row
+  row
+   entry
 literalfunctionpg_start_backup(parameterlabel/ typetext/ optional, parameterfast/ typeboolean/ /optional)/function/literal
 /entry
entrytypetext/type/entry
diff -ru postgresql/src/backend/access/transam/xlogfuncs.c postgresql-is_in_backup-patch/src/backend/access/transam/xlogfuncs.c
--- postgresql/src/backend/access/transam/xlogfuncs.c	2012-01-26 23:01:32.032613398 +0100
+++ postgresql-is_in_backup-patch/src/backend/access/transam/xlogfuncs.c	2012-01-26 23:16:04.100612972 +0100
@@ -465,3 +465,12 @@
 {
 	PG_RETURN_BOOL(RecoveryInProgress());
 }
+
+/*
+ * Returns bool with current on-line backup mode, a global state.
+ */
+Datum
+pg_is_in_backup(PG_FUNCTION_ARGS)
+{
+	PG_RETURN_BOOL(BackupInProgress());
+}
diff -ru postgresql/src/include/access/xlog_internal.h postgresql-is_in_backup-patch/src/include/access/xlog_internal.h
--- postgresql/src/include/access/xlog_internal.h	2012-01-26 23:01:32.332613398 +0100
+++ postgresql-is_in_backup-patch/src/include/access/xlog_internal.h	2012-01-26 23:22:51.492612774 +0100
@@ -281,5 +281,6 @@
 extern Datum pg_xlog_replay_pause(PG_FUNCTION_ARGS);
 extern Datum pg_xlog_replay_resume(PG_FUNCTION_ARGS);
 extern Datum pg_is_xlog_replay_paused(PG_FUNCTION_ARGS);
+extern Datum pg_is_in_backup(PG_FUNCTION_ARGS);
 
 #endif   /* XLOG_INTERNAL_H */
diff -ru postgresql/src/include/catalog/pg_proc.h postgresql-is_in_backup-patch/src/include/catalog/pg_proc.h
--- postgresql/src/include/catalog/pg_proc.h	2012-01-26 23:01:32.340613398 +0100
+++ postgresql-is_in_backup-patch/src/include/catalog/pg_proc.h	2012-01-27 09:26:09.918736657 +0100
@@ -2899,6 +2899,8 @@
 DESCR(prepare for taking an online backup);
 DATA(insert OID = 2173 ( pg_stop_backup			PGNSP PGUID 12 1 0 0 0 f f f t f v 0 0 25  _null_ _null_ _null_ _null_ pg_stop_backup _null_ _null_ _null_ ));
 DESCR(finish taking an online backup);
+DATA(insert OID = 3882 (  pg_is_in_backup		PGNSP PGUID 12 1 0 0 0 f f f t f v 0 0 16  _null_ _null_ _null_ _null_ pg_is_in_backup _null_ _null_ _null_ ));
+DESCR(true if server is in on-line backup);
 DATA(insert OID = 2848 ( pg_switch_xlog			PGNSP PGUID 12 1 0 0 0 f f f t f v 0 0 25  _null_ _null_ _null_ _null_ pg_switch_xlog _null_ _null_ _null_ ));
 DESCR(switch to new xlog file);
 DATA(insert OID = 3098 ( pg_create_restore_point	PGNSP PGUID 12 1 0 0

[HACKERS] Contrib update

2002-03-26 Thread Gilles DAROLD

Hi Justin,

I have a new updated version of the Ora2Pg tool which correct many
problems and add some new features, could you or someone else update
the contrib directory.
(download at: http://www.samse.fr/GPL/ora2pg/ora2pg-1.8.tar.gz)

I also just post a new tool in replacement of the Oracle XSQL Servlet,
use to create dynamic web application with XML/XSLT.

Let me know if it can take place under the contrib directory.
(http://www.samse.fr/GPL/pxsql/)


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] XML

2002-03-20 Thread Gilles DAROLD

Hi,

I have something that do exactly what you are looking for (and more)
but damned it's perl stuff !

You just have to set your SQL queries into XML files, associate
XSL template to them and run a cgi perl script througth apache (with or
without mod_perl). You can also add perl procedures to process your data
before output. I also parse cgi parameters... and more.

If you're interested let me know !

longjohn wrote:

 Do U know if pgSQL supports XML ?
 4 example : BROWSER-APACHE JSERV-SERVLET-DB-xsl+xml-HTML
 Do U know any open source DB doing that?
 Thanks a lot

 ---(end of broadcast)---
 TIP 6: Have you searched our list archives?

 http://archives.postgresql.org


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



[HACKERS] Re: [GENERAL] Postgresql log analyzer

2001-08-23 Thread Gilles DAROLD

Hi all,

I have updated the drafts for pg log analyzer especially for EXPLAIN  output.
What do you want to see as statistics result. Currently I only output the following:

- scan type
- startup cost
- total cost
- number of rows returned
- and the width

There's certainly other usefull information but I don't know. Please let me know !

Note:

This is a simple draft to show what can be done, as a general purpose it will include:

- A configuration file (to choose what should be reported, paths, etc...)
   or/and command line args
- An index page with resume of all reports
- Incremental scan working on full or rotate log

For other good requests it's done...

Let me know any other requests otherwise I will publish the first release at least on
monday
if not tomorow !

http://www.samse.fr/GPL/log_report/

Regards,

Gilles Darold

Andrew McMillan wrote:

 Gilles DAROLD wrote:
 
  Hi all,
 
  Here is a first draft generated by a log analyzer for postgres I've wrote today:
 
  http://www.samse.fr/GPL/log_report/
 
  In all this html report there is what I'm able to extract minus the statistics.
 
  I need to know what people want to see reported to have a powerfull log analyzer,

 I like what you have there so far.

 For my own use I would like to see the ability to turn some of these off,
 and also perhaps a summary page that you would click through to the more
 detailed reports.

 The 'query' page is kind of complicated too.  Would it be possible to put
 that into a table layout as well?
 +---+
 |select...  |
 +++++---+
 |stat|stat|stat|stat ...|   |
 +++++---+

 sort of layout.

 It would be nice to see an EXPLAIN on the query page, but you would want
 this to be an option, I guess.  I imagine you could do this by getting the
 EXPLAIN at log analysis time if it isn't in the logs.

 Cheers,
 Andrew.
 --
 _
 Andrew McMillan, e-mail: Andrew @ catalyst . net . nz
 Catalyst IT Ltd, PO Box 10-225, Level 22, 105 The Terrace, Wellington
 Me: +64(21)635-694,  Fax:+64(4)499-5596, Office: +64(4)499-2267xtn709

 ---(end of broadcast)---
 TIP 5: Have you checked our extensive FAQ?

 http://www.postgresql.org/users-lounge/docs/faq.html




---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://www.postgresql.org/search.mpl



[HACKERS] Postgresql log analyzer

2001-08-21 Thread Gilles DAROLD

Hi all,

I'm currently trying to develop a log analyzer for PostgreSQL logs and at
the first
stage I'm finding a little problem with the postgresql.conf option
log_timestamp.

The problem is that if this option is set to false we have no idea of when
the backend
is started:

DEBUG:  database system was shut down at 2001-08-20 21:51:54 CEST
DEBUG:  CheckPoint record at (0, 126088776)
DEBUG:  Redo record at (0, 126088776); Undo record at (0, 0); Shutdown TRUE
DEBUG:  NextTransactionId: 489793; NextOid: 77577
DEBUG:  database system is in production state

Is it possible to have it into the last line as we have the information of
the database
shutdown timestamp in the first line ?

Also, an other question is why using timestamp into the other log instead of
the value
of  time  in  seconds  since  the  Epoch like the time() function do ?

I don't know if it is speedest or not but if timestamp is system local
dependant
I think it should be very difficult to me to have a portable log analyzer...

Regards,

Gilles Darold


---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://www.postgresql.org/search.mpl



Re: [HACKERS] Postgresql log analyzer

2001-08-21 Thread Gilles DAROLD

Hi all,

Here is a first draft generated by a log analyzer for postgres I've wrote today:

http://www.samse.fr/GPL/log_report/

In all this html report there is what I'm able to extract minus the statistics.

I need to know what people want to see reported to have a powerfull log analyzer,

I can spend this week do do that...

Peter, sorry for my poor english that what I mean is that if you don't activated
the
log_timestamp option we have no idea when the postmaster have been started
(or at least doing a ls -la on /tmp/.s.PGSQL.5432). Other things was just
question
and you answer them, thanks.

Regards



---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://www.postgresql.org/search.mpl



Re: [HACKERS] Dollar in identifiers

2001-08-16 Thread Gilles DAROLD

Hi,

Dollar in identifier is currently working, you just have to doublequote the
identifier.

create table foo$ (
foo$ int4
);

select * from foo$;
select foo$ from foo$;

works just fine. Or

create table $foo (
$foo int4
);

select * from $foo;
select $foo from $foo;

also works.

Perhaps it may be some problems with pl/pgsql, not tested...




---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])



Re: [HACKERS] Re: From TODO, XML?

2001-07-31 Thread Gilles DAROLD

Hi,

Why don't use the excellent DBIx-XML_RDB perl module ? Give it the query
it will return XML output as you sample. With some hack you can do what you
want...

Regards

Gilles DAROLD

mlw wrote:

 Bruce Momjian wrote:
 
 I have managed to get several XML files into PostgreSQL by writing a parser,
 and it is a huge hassle, the public parsers are too picky. I am thinking 
that a
 fuzzy parser, combined with some intelligence and an XML DTD reader, could 
make
 a very cool utility, one which I have not been able to find.

 Perhaps it is a two stage process? First pass creates a schema which can be
 modified/corrected, the second pass loads the data.
   
Can we accept only relational XML.  Does that buy us anything?  Are the
other database vendors outputting heirchical XML?  Are they using
foreign/primary keys to do it?
  
   Then what's the point? Almost no one creates a non-hierarchical XML. For the
   utility to be usefull, beyond just a different format for pg_dump, it has to
   deal with these issues and do the right thing.
 
  Oh, seems XML will be much more complicated than I thought.

 I think an XML output for pg_dump would be a pretty good/easy feature. It is
 easy to create XML.

 record
 field1bla bla/field1
 field2foo bar/field2
 /record



---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] Call for platforms

2001-03-21 Thread Gilles DAROLD

Hi,

I reported Linux RedHat 6.2 - 2.2.14-5.0smp #1 SMP Tue Mar 7 21:01:40 EST
2000 i686
2 cpu - 1Go RAM

Gilles DAROLD


---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://www.postgresql.org/search.mpl



Re: [HACKERS] Call for platforms

2001-03-21 Thread Gilles DAROLD


Hi,

I am currently testing beta6 on AIX 4.3.3 on a RS6000 H80 with 4 cpu and 4
Go RAM
I use :

./configure
--with-CC=/usr/local/bin/gcc
--with-includes=/usr/local/include
--with-libraries=/usr/local/lib

All seem to be ok, There just the geometry failure in regression test
(following the AIX FAQ
it's normal ?)

But when I configure with --with-perl I have the following error :

make[4]: cc : Command not found

Any idea ?


Gilles DAROLD

 Hi,

 I reported Linux RedHat 6.2 - 2.2.14-5.0smp #1 SMP Tue Mar 7 21:01:40 EST
 2000 i686
 2 cpu - 1Go RAM

 Gilles DAROLD

 ---(end of broadcast)---
 TIP 6: Have you searched our list archives?

 http://www.postgresql.org/search.mpl


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [HACKERS] View tables relationship

2001-01-16 Thread Gilles DAROLD

Hi,

This question have been posted recently, please take a look to archive.

I use this very simple query :

SELECT a.tgargs
FROM pg_class c, pg_trigger a
WHERE a.tgname LIKE 'RI_ConstraintTrigger_%' AND c.relname='$table' AND
a.tgrelid = c.oid

Regards,

Gilles DAROLD

riccardo wrote:

 Hi.
 I have a big problem with postgres because I need to know how I can see the
 relations among the table like foreign-key.
 It' s possible use some commands or graphic tool from wich I can see that
 relations?
 Do you know some sites where i can found more information about this!

 Thank you very much,
 and exuse me for my bad English!

 Fabio




[HACKERS] Re: [GENERAL] PL/Perl compilation error

2000-10-17 Thread Gilles DAROLD

Bruce Momjian wrote:

 I can not apply this.  Seems it has changed in the current tree.  Here
 is the current plperl.c file.


It seems that the current file has been fixed. There's no more call to the
buggy variables in it. I don't know what you want me to do ?
Do you still have problem to compiling this code ? If so send me an url
where i can find the complete PG distribution you want to see working.
I will check if it works for me and try to fix if there is problems.
Not sure of what I can do...

Regards

Gilles DAROLD