Re: [HACKERS] proposal: schema variables

2017-11-13 Thread Pavel Stehule
Hi

2017-11-13 13:15 GMT+01:00 Pavel Golub :

> Hello, Pavel.
>
> You wrote:
>
> PS> Hi,
>
> PS> I propose a new database object - a variable. A variable is a
> PS> persistent object that holds an unshared, session-local,
> PS> non-transactional, in-memory value of any type, like variables in
> PS> other languages. The persistence is required to make static checks
> PS> possible, but it can be limited to the session - variables can be
> PS> temporary.
>
> Great idea.
>
> PS> My proposal is related to session variables from Sybase, MSSQL or
> PS> MySQL (based on the @ or @@ prefix), package variables from
> PS> Oracle (access controlled by scope), and schema variables from
> PS> DB2. Each design comes from different sources and traditions and
> PS> has its advantages and disadvantages. The base of my proposal is
> PS> the usage of schema variables as session variables for stored
> PS> procedures. It should help people who try to port complex projects
> PS> to PostgreSQL from other databases.
>
> PS> The Sybase (T-SQL) design is good for interactive work, but it
> PS> is weak for use in stored procedures - a static check is not
> PS> possible, and it is not possible to set access rights on variables.
>
> PS> The Ada design (used in Oracle), based on scopes, is great, but
> PS> our environment is not nested. And we should support PLs other
> PS> than PL/pgSQL more strongly.
>
> PS> There are not many other possibilities - a variable that should
> PS> be accessible from different PLs and different procedures (over
> PS> time) has to live somewhere above the PL, and that leaves only the
> PS> schema.
>
> PS> The variable can be created by CREATE statement:
>
> PS> CREATE VARIABLE public.myvar AS integer;
> PS> CREATE VARIABLE myschema.myvar AS mytype;
>
> PS> CREATE [TEMP] VARIABLE [IF NOT EXISTS] name AS type
> PS>   [ DEFAULT expression ] [[NOT] NULL]
> PS>   [ ON TRANSACTION END { RESET | DROP } ]
> PS>   [ { VOLATILE | STABLE } ];
>
>
> PS> It is dropped by the command DROP VARIABLE [ IF EXISTS ] varname.
>
> PS> Access is controlled by the usual mechanism - the GRANT/REVOKE
> PS> commands. The possible rights are READ and WRITE.
>
> PS> The variables can be modified by the SQL command SET (this is
> PS> taken from the standard, and it is natural):
>
> PS> SET varname = expression;
>
> I propose the LET keyword for this, to distinguish GUCs from
> variables, e.g.
>
> LET varname = expression;
>

 It is one possible variant. I plan to implement more variants and then
choose one.

Regards

Pavel

>
> PS> Unfortunately we use the SET command for a different purpose. But
> PS> I think we can solve it with a few tricks. The first is moving
> PS> our GUCs to the pg_catalog schema. We can control the strictness
> PS> of the SET command: in one variant we can detect a custom GUC and
> PS> allow it, in another we can disallow custom GUCs and allow only
> PS> schema variables. A new LET command can be an alternative.
>
>
>
> PS> The variables should be usable in queries implicitly (without a
> PS> JOIN):
>
>
> PS> SELECT varname;
>
>
> PS> The search_path is used when varname is resolved. The variables
> PS> can be used everywhere query parameters are allowed.
>
>
>
> PS> I hope this proposal is good enough and simple.
>
>
> PS> Comments, notes?
>
>
> PS> regards
>
>
> PS> Pavel
>
>
>
>
>
>
> --
> With best wishes,
>  Pavel  mailto:pa...@gf.microolap.com
>
>
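Put together, the proposed lifecycle would look like the sketch below. All of this syntax is from the proposal and does not exist in PostgreSQL; in particular the GRANT form is only a guess at how the READ/WRITE rights might be spelled:

```sql
-- proposed syntax only - nothing here is implemented
CREATE VARIABLE myschema.counter AS integer DEFAULT 0 NOT NULL
  ON TRANSACTION END RESET;

-- hypothetical spelling of the READ/WRITE rights mentioned above
GRANT READ, WRITE ON VARIABLE myschema.counter TO app_role;

SET myschema.counter = myschema.counter + 1;  -- or LET, per the subthread

SELECT myschema.counter;  -- resolved via search_path, like a parameter

DROP VARIABLE IF EXISTS myschema.counter;
```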


Re: [HACKERS] possible encoding issues with libxml2 functions

2017-11-11 Thread Pavel Stehule
2017-11-11 21:19 GMT+01:00 Noah Misch <n...@leadboat.com>:

> On Sun, Nov 05, 2017 at 06:10:04PM +0100, Pavel Stehule wrote:
> > Hi
> >
> > 2017-11-05 4:07 GMT+01:00 Noah Misch <n...@leadboat.com>:
> >
> > > On Tue, Oct 17, 2017 at 06:06:40AM +0200, Pavel Stehule wrote:
> > > > Please, if you can, try to write it. I am a little bit lost :)
> > >
> > > I'm attaching the patch I desired.  Please review.  This will probably
> miss
> > > this week's minor releases.  If there's significant support, I could
> > > instead
> > > push before the wrap.
> > >
> >
> > I have no objection to this solution. It fixes my regression tests
> > too.
> >
> > I checked it, and it is working.
>
> Pushed, but the buildfarm shows I didn't get the test quite right for the
> non-xml, non-UTF8 case.  Fixing.
>

Thank you

Pavel


Re: [HACKERS] proposal: psql command \graw

2017-11-10 Thread Pavel Stehule
2017-11-10 16:38 GMT+01:00 Fabien COELHO :

>
> Hello,
>
> Maybe I'm missing something, but it looks that it could be made to work
>>> without adding another boolean.
>>>
>>
>> Tuples-only mode cannot simply be disabled, because then other parts
>> print the number of rows:
>>
>> postgres=# \pset format unaligned
>> Output format is unaligned.
>>
>> postgres=# select 10 as a, 20 as b;
>> a|b
>> 10|20
>> (1 row) <
>>
>
> Argh. Too bad.
>
> I'm not at ease with having two bools which nearly mean the opposite one
> of the other but not exactly... however I'm not sure that there is a
> simpler way out of this, some exception handling is needed one way or the
> other, either within the header or within the footer... Maybe the whole
> topt logic should be reviewed, but that is not the point of this patch.
>

I don't think so - it is not correct: this mode means tuples-only plus
a header. Probably the best implementation is some three-state option:
all, tuples only, tuples only plus header. But that would mean many
more changes in psql's logic - not adequate to the size of this patch.
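To make the states concrete, here is a sketch of the intended \graw and \graw+ behaviour, derived from the patch's flags (illustrative, since the feature is not committed): both suppress the "(1 row)" footer, and \graw+ additionally forces the column header.

```
postgres=# select 10 as a, 20 as b \graw
10|20
postgres=# select 10 as a, 20 as b \graw+
a|b
10|20
```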


> So I switched the patch to "ready for committer".
>

Thank you very much

Regards

Pavel


>
> --
> Fabien.
>


Re: [HACKERS] Variable substitution in psql backtick expansion

2017-11-10 Thread Pavel Stehule
Hi

I am sending a review of the last patch, psql-server-version-1.patch.gz.

This patch is trivial - the biggest problem is choosing a correct name
for the GUC. I think server_version_raw is acceptable.

I had to fix the docs - see the attached updated patch.

All tests passed.

I'll mark this patch as ready for committer.

Regards

Pavel
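With the patch applied, the three psql variables can be checked like this (the printed values are only illustrative, for a hypothetical 11devel server):

```
postgres=# \echo :SERVER_VERSION
PostgreSQL 11devel on x86_64-pc-linux-gnu, compiled by gcc ...
postgres=# \echo :SERVER_VERSION_NAME :SERVER_VERSION_NUM
11devel 110000
```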
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index d360fc4d58..924766fce7 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -7956,8 +7956,22 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir'
   
   

-Reports the version number of the server as an integer. It is determined
-by the value of PG_VERSION_NUM when building the server.
+Reports the version number of the server as a short string. It is determined
+by the value of PG_VERSION when building the server.
+   
+  
+ 
+
+ 
+  server_version_raw (string)
+  
+   server_version_raw configuration parameter
+  
+  
+  
+   
+Reports the version of the server as a long string. It is determined
+by the value of PG_VERSION_STR when building the server.

   
  
diff --git a/doc/src/sgml/ref/psql-ref.sgml b/doc/src/sgml/ref/psql-ref.sgml
index e520cdf3ba..50d6f0a8fc 100644
--- a/doc/src/sgml/ref/psql-ref.sgml
+++ b/doc/src/sgml/ref/psql-ref.sgml
@@ -3770,11 +3770,14 @@ bar
   
 
   
+SERVER_VERSION
 SERVER_VERSION_NAME
 SERVER_VERSION_NUM
 
 
-The server's version number as a string, for
+The server's version number as a long string, for
+example PostgreSQL 11devel ...,
+as a short string, for
 example 9.6.2, 10.1 or 11beta1,
 and in numeric form, for
 example 90602 or 11.
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index c4c1afa084..49ff61246f 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -500,6 +500,7 @@ static char *locale_collate;
 static char *locale_ctype;
 static char *server_encoding_string;
 static char *server_version_string;
+static char *server_version_raw_string;
 static int	server_version_num;
 static char *timezone_string;
 static char *log_timezone_string;
@@ -3295,6 +3296,18 @@ static struct config_string ConfigureNamesString[] =
 		NULL, NULL, NULL
 	},
 
+	{
+		/* Can't be set in postgresql.conf */
+		{"server_version_raw", PGC_INTERNAL, PRESET_OPTIONS,
+			gettext_noop("Shows the server version string."),
+			NULL,
+			GUC_REPORT | GUC_NOT_IN_SAMPLE | GUC_DISALLOW_IN_FILE
+		},
+		&server_version_raw_string,
+		PG_VERSION_STR,
+		NULL, NULL, NULL
+	},
+
 	{
 		/* Not for general use --- used by SET ROLE */
 		{"role", PGC_USERSET, UNGROUPED,
diff --git a/src/bin/psql/command.c b/src/bin/psql/command.c
index 8cc4de3878..cfac89c8da 100644
--- a/src/bin/psql/command.c
+++ b/src/bin/psql/command.c
@@ -3210,7 +3210,8 @@ void
 SyncVariables(void)
 {
 	char		vbuf[32];
-	const char *server_version;
+	const char *server_version,
+			   *server_version_raw;
 
 	/* get stuff from connection */
 	pset.encoding = PQclientEncoding(pset.db);
@@ -3237,6 +3238,17 @@ SyncVariables(void)
 	snprintf(vbuf, sizeof(vbuf), "%d", pset.sversion);
 	SetVariable(pset.vars, "SERVER_VERSION_NUM", vbuf);
 
+	server_version_raw = PQparameterStatus(pset.db, "server_version_raw");
+	/* fall back again */
+	if (!server_version_raw)
+	{
+		snprintf(vbuf, sizeof(vbuf), "PostgreSQL ");
+		formatPGVersionNumber(pset.sversion, true, vbuf + strlen(vbuf),
+			  sizeof(vbuf) - strlen(vbuf));
+		server_version_raw = vbuf;
+	}
+	SetVariable(pset.vars, "SERVER_VERSION", server_version_raw);
+
 	/* send stuff to it, too */
 	PQsetErrorVerbosity(pset.db, pset.verbosity);
 	PQsetErrorContextVisibility(pset.db, pset.show_context);
@@ -3255,6 +3267,7 @@ UnsyncVariables(void)
 	SetVariable(pset.vars, "HOST", NULL);
 	SetVariable(pset.vars, "PORT", NULL);
 	SetVariable(pset.vars, "ENCODING", NULL);
+	SetVariable(pset.vars, "SERVER_VERSION", NULL);
 	SetVariable(pset.vars, "SERVER_VERSION_NAME", NULL);
 	SetVariable(pset.vars, "SERVER_VERSION_NUM", NULL);
 }
diff --git a/src/interfaces/libpq/fe-protocol2.c b/src/interfaces/libpq/fe-protocol2.c
index 5335a91440..0418779f79 100644
--- a/src/interfaces/libpq/fe-protocol2.c
+++ b/src/interfaces/libpq/fe-protocol2.c
@@ -280,6 +280,10 @@ pqSetenvPoll(PGconn *conn)
 		{
 			char	   *ptr;
 
+			/* keep returned value */
+			pqSaveParameterStatus(conn, "server_version_raw",
+  val);
+
 			/* strip off PostgreSQL part */
 			val += 11;
 
diff --git a/src/test/regress/expected/guc.out b/src/test/regress/expected/guc.out
index 43ac5f5f11..eabb990d4e 100644
--- a/src/test/regress/expected/guc.out
+++ b/src/test/regress/expected/guc.out
@@ -767,3 +767,14 @@ NOTICE:  text search configuration "no_such_config" does not exist
 select 

Re: [HACKERS] Transform for pl/perl

2017-11-10 Thread Pavel Stehule
Hi

2017-10-24 14:27 GMT+02:00 Anthony Bykov :

> There are some moments I should mention:
> 1. {"1":1}::jsonb is transformed into HV {"1"=>"1"}, while
> ["1","2"]::jsonb is transformed into AV ["1", "2"]
>
> 2. If there is a numeric value appear in jsonb, it will be transformed
> to SVnv through string (Numeric->String->SV->SVnv). Not the best
> solution, but as far as I understand this is usual practise in
> postgresql to serialize Numerics and de-serialize them.
>
> 3. SVnv is transformed into jsonb through string
> (SVnv->String->Numeric).
>
> An example may also be helpful to understand extension. So, as an
> example, function "test" transforms incoming jsonb into perl,
> transforms it back into jsonb and returns it.
>
> create extension jsonb_plperl cascade;
>
> create or replace function test(val jsonb)
> returns jsonb
> transform for type jsonb
> language plperl
> as $$
> return $_[0];
> $$;
>
> select test('{"1":1,"example": null}'::jsonb);
>
>
I am looking at this patch:

1. The patch contains some artifacts - look for the word "hstore".

2. I got a lot of warnings:


make[1]: Entering directory
'/home/pavel/src/postgresql/contrib/jsonb_plperl'
gcc -Wall -Wmissing-prototypes -Wpointer-arith
-Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute
-Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard
-g -ggdb -Og -g3 -fno-omit-frame-pointer -fPIC -I../../src/pl/plperl -I.
-I. -I../../src/include  -D_GNU_SOURCE -I/usr/include/libxml2
-I/usr/lib64/perl5/CORE  -c -o jsonb_plperl.o jsonb_plperl.c
jsonb_plperl.c: In function ‘SV_FromJsonbValue’:
jsonb_plperl.c:83:9: warning: ‘result’ may be used uninitialized in this
function [-Wmaybe-uninitialized]
  return (result);
 ^
jsonb_plperl.c: In function ‘SV_FromJsonb’:
jsonb_plperl.c:95:10: warning: ‘object’ may be used uninitialized in this
function [-Wmaybe-uninitialized]
  HV *object;
  ^~
In file included from /usr/lib64/perl5/CORE/perl.h:5644:0,
 from ../../src/pl/plperl/plperl.h:52,
 from jsonb_plperl.c:17:
/usr/lib64/perl5/CORE/embed.h:404:19: warning: ‘value’ may be used
uninitialized in this function [-Wmaybe-uninitialized]
 #define newRV(a)  Perl_newRV(aTHX_ a)
   ^~
jsonb_plperl.c:101:10: note: ‘value’ was declared here
  SV *value;
  ^
gcc -Wall -Wmissing-prototypes -Wpointer-arith
-Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute
-Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard
-g -ggdb -Og -g3 -fno-omit-frame-pointer -fPIC -shared -o jsonb_plperl.so
jsonb_plperl.o  -L../../src/port -L../../src/common -Wl,--as-needed
-Wl,-rpath,'/usr/lib64/perl5/CORE',--enable-new-dtags   -Wl,-z,relro
-specs=/usr/lib/rpm/redhat/redhat-hardened-ld -fstack-protector-strong
-L/usr/local/lib  -L/usr/lib64/perl5/CORE -lperl -lpthread -lresolv -lnsl
-ldl -lm -lcrypt -lutil -lc
make[1]: Leaving directory
'/home/pavel/src/postgresql/contrib/jsonb_plperl'

[pavel@nemesis contrib]$ gcc --version
gcc (GCC) 7.2.1 20170915 (Red Hat 7.2.1-2)
Copyright (C) 2017 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

3. The regression tests passed.

4. There is no documentation - probably it should be part of the
PL/Perl docs.

5. The regression tests don't cover datatypes other than numbers. I
miss boolean, binary, object, ... Maybe using Data::Dumper or something
similar could be interesting.

Note - it is a great extension; I am pleased to see the transforms
being used.

Regards

Pavel







>
> --
> Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-hackers
>


[HACKERS] pg audit requirements

2017-11-09 Thread Pavel Stehule
Hi

I am sending some notes, experience about usage of pgAudit.

pgAudit provides basic functionality and is usually good enough. But it
is not good enough for some applications in financial services.

The requirements:

1. structured output - the attached query text alone is not good
enough; the column name, table name, schema, database and role should
be separated

2. a separate log (log file) with guaranteed writes - an fsync after
every line means a significant performance hit, but an fsync every
second (or at a defined interval) is acceptable

3. security events - insufficient access rights to a database object
should be processed and logged in the audit log too
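As an illustration of requirement 1, one audit event could be emitted as a single structured record instead of an attached query string (all field names here are hypothetical):

```
{"time": "2017-11-09T10:00:00Z", "database": "bank", "schema": "public",
 "table": "accounts", "column": "balance", "role": "teller",
 "action": "UPDATE"}
```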

Regards

Pavel


Re: [HACKERS] proposal: psql command \graw

2017-11-09 Thread Pavel Stehule
2017-11-10 8:12 GMT+01:00 Fabien COELHO :

>
> ISTM that you can remove "force_column_header" and just set "tuple_only"
>>> to what you need, that is you do not need to change anything in function
>>> "print_unaligned_text".
>>>
>>
>> The last point is not possible - I do not want to break the original
>> tuples-only mode.
>>
>
> Hmmm... I do not understand. I can see only one use of force_column_header
> in the function:
>
>  -   if (!opt_tuples_only)
>  +   if (!opt_tuples_only || opt_force_column_header)
>
> So I would basically suggest to do:
>
>  my_popt.topt.tuples_only = !pset.g_raw_header;
>
> in the driver. Looking at the detailed code in that function, probably you
> need to set start_table to on when headers are needed and stop_table to off
> for the raw mode anyway?
>
> Maybe I'm missing something, but it looks that it could be made to work
> without adding another boolean.
>

Tuples-only mode cannot simply be disabled, because then other parts
print the number of rows:

postgres=# \pset format unaligned
Output format is unaligned.

postgres=# select 10 as a, 20 as b;
a|b
10|20
(1 row) <



> --
> Fabien.
>


Re: [HACKERS] proposal: psql command \graw

2017-11-09 Thread Pavel Stehule
2017-11-09 21:12 GMT+01:00 Pavel Stehule <pavel.steh...@gmail.com>:

>
>
> 2017-11-09 21:03 GMT+01:00 Fabien COELHO <coe...@cri.ensmp.fr>:
>
>>
>> Hello Pavel,
>>
>> I hope I fixed all the mentioned issues.
>>>
>>
>> Patch applies with a warning:
>>
>>  > git apply ~/psql-graw-2.patch
>>  /home/fabien/psql-graw-2.patch:192: new blank line at EOF.
>>  +
>>  warning: 1 line adds whitespace errors.
>>
>> Otherwise it compiles. "make check" ok. doc gen ok.
>>
>> Two spurious empty lines are added before StoreQueryTuple.
>>
>> Doc: "If + is appended to the command name, a column
>> names are displayed."
>>
>> I suggest instead: "When + is appended, column names
>> are also displayed."
>>
>> ISTM that you can remove "force_column_header" and just set "tuple_only"
>> to what you need, that is you do not need to change anything in function
>> "print_unaligned_text".
>>
>
> The last point is not possible - I do not want to break the original
> tuples-only mode.
>
>
updated patch




> Pavel
>
>>
>> --
>> Fabien.
>>
>
>
diff --git a/doc/src/sgml/ref/psql-ref.sgml b/doc/src/sgml/ref/psql-ref.sgml
index e520cdf3ba..457a59eeab 100644
--- a/doc/src/sgml/ref/psql-ref.sgml
+++ b/doc/src/sgml/ref/psql-ref.sgml
@@ -2020,6 +2020,19 @@ CREATE INDEX
   
 
 
+  
+\graw[+] [ filename ]
+\graw[+]  [ |command ]
+
+
+\graw is equivalent to \g, but
+forces unaligned output mode for this query. When +
+is appended, column names are also displayed.
+
+
+  
+
+
   
 \gset [ prefix ]
 
diff --git a/src/bin/psql/command.c b/src/bin/psql/command.c
index 8cc4de3878..b3461291eb 100644
--- a/src/bin/psql/command.c
+++ b/src/bin/psql/command.c
@@ -332,7 +332,8 @@ exec_command(const char *cmd,
 		status = exec_command_errverbose(scan_state, active_branch);
 	else if (strcmp(cmd, "f") == 0)
 		status = exec_command_f(scan_state, active_branch);
-	else if (strcmp(cmd, "g") == 0 || strcmp(cmd, "gx") == 0)
+	else if (strcmp(cmd, "g") == 0 || strcmp(cmd, "gx") == 0 ||
+			 strcmp(cmd, "graw") == 0 || strcmp(cmd, "graw+") == 0)
 		status = exec_command_g(scan_state, active_branch, cmd);
 	else if (strcmp(cmd, "gdesc") == 0)
 		status = exec_command_gdesc(scan_state, active_branch);
@@ -1232,6 +1233,7 @@ exec_command_f(PsqlScanState scan_state, bool active_branch)
 
 /*
  * \g [filename] -- send query, optionally with output to file/pipe
+ * \graw [filename] -- same as \g with raw format
  * \gx [filename] -- same as \g, with expanded mode forced
  */
 static backslashResult
@@ -1254,6 +1256,10 @@ exec_command_g(PsqlScanState scan_state, bool active_branch, const char *cmd)
 		free(fname);
 		if (strcmp(cmd, "gx") == 0)
 			pset.g_expanded = true;
+		else if (strcmp(cmd, "graw") == 0)
+			pset.g_raw = true;
+		else if (strcmp(cmd, "graw+") == 0)
+			pset.g_raw_header = true;
 		status = PSQL_CMD_SEND;
 	}
 	else
diff --git a/src/bin/psql/common.c b/src/bin/psql/common.c
index 7a91a44b2b..9f7ef51dfb 100644
--- a/src/bin/psql/common.c
+++ b/src/bin/psql/common.c
@@ -865,6 +865,14 @@ PrintQueryTuples(const PGresult *results)
 	if (pset.g_expanded)
 		my_popt.topt.expanded = 1;
 
+	/* one-shot raw output requested by \raw and \graw+ */
+	else if (pset.g_raw || pset.g_raw_header)
+	{
+		my_popt.topt.format = PRINT_UNALIGNED;
+		my_popt.topt.tuples_only = true;
+		my_popt.topt.force_column_header = pset.g_raw_header;
+	}
+
 	/* write output to \g argument, if any */
 	if (pset.gfname)
 	{
@@ -1517,6 +1525,10 @@ sendquery_cleanup:
 	/* reset \gx's expanded-mode flag */
 	pset.g_expanded = false;
 
+	/* reset \graw flags */
+	pset.g_raw = false;
+	pset.g_raw_header = false;
+
 	/* reset \gset trigger */
 	if (pset.gset_prefix)
 	{
diff --git a/src/bin/psql/help.c b/src/bin/psql/help.c
index a926c40b9b..e573711434 100644
--- a/src/bin/psql/help.c
+++ b/src/bin/psql/help.c
@@ -167,7 +167,7 @@ slashUsage(unsigned short int pager)
 	 * Use "psql --help=commands | wc" to count correctly.  It's okay to count
 	 * the USE_READLINE line even in builds without that.
 	 */
-	output = PageOutput(125, pager ? &(pset.popt.topt) : NULL);
+	output = PageOutput(126, pager ? &(pset.popt.topt) : NULL);
 
 	fprintf(output, _("General\n"));
 	fprintf(output, _("  \\copyright show PostgreSQL usage and distribution terms\n"));
@@ -176,6 +176,7 @@ slashUsage(unsigned short int pager)
 	fprintf(output, _("  \\g [FILE] or ; execute query (and send results to file or |pipe)\n"));
 	fprintf(output, _("  \\gdesc 

[HACKERS] different content of pg_depend after pg_upgrade

2017-11-09 Thread Pavel Stehule
Hi

We had a check query based on some operations on the pg_depend table.
This query gave different results when the database was migrated with
pg_dump versus pg_upgrade. I found that the query was broken, but I
also found an interesting thing.

The count is 1 for any objid:

select distinct count(distinct classid), objid from pg_depend group by
objid;

when the system was loaded from a dump, but when we used pg_upgrade,
the previous rule was invalid.

Is this expected behaviour?

Regards

Pavel
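The rule being checked can be restated as a diagnostic query; per the report above, it would return no rows after a dump/restore but did return rows after pg_upgrade:

```sql
-- objids that appear under more than one catalog (classid) in pg_depend
SELECT objid, count(DISTINCT classid) AS n_classids
FROM pg_depend
GROUP BY objid
HAVING count(DISTINCT classid) > 1;
```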


Re: [HACKERS] proposal: psql command \graw

2017-11-09 Thread Pavel Stehule
2017-11-09 21:03 GMT+01:00 Fabien COELHO :

>
> Hello Pavel,
>
> I hope I fixed all the mentioned issues.
>>
>
> Patch applies with a warning:
>
>  > git apply ~/psql-graw-2.patch
>  /home/fabien/psql-graw-2.patch:192: new blank line at EOF.
>  +
>  warning: 1 line adds whitespace errors.
>
> Otherwise it compiles. "make check" ok. doc gen ok.
>
> Two spurious empty lines are added before StoreQueryTuple.
>
> Doc: "If + is appended to the command name, a column
> names are displayed."
>
> I suggest instead: "When + is appended, column names
> are also displayed."
>
> ISTM that you can remove "force_column_header" and just set "tuple_only"
> to what you need, that is you do not need to change anything in function
> "print_unaligned_text".
>

The last point is not possible - I do not want to break the original
tuples-only mode.

Pavel

>
> --
> Fabien.
>


Re: [HACKERS] proposal: psql command \graw

2017-11-09 Thread Pavel Stehule
Hi

2017-08-24 5:50 GMT+02:00 Fabien COELHO :

>
> Hello Pavel,
>
> I have added the patch to the next commitfest.
>
> Patch applies, compiles, works.
>
> I'm okay with the names graw/graw+, and for having such short-hands.
>
> Missing break in switch, even if last item and useless, because other
> items do it... Also should be added at its place in alphabetical order?
>
> "column_header" is somehow redundant with "tuples_only". Use the
> existing one instead of adding a new one?
>
> More generally, ISTM that the same effect could be achieved without
> adding a new print function, but by setting more options (separator,
> ...) and calling an existing print function. If so, I think it would
> reduce the code size.
>
> Missing help entry.
>
> Missing non regression tests.
>
> Missing documentation.
>
>
I hope I fixed all the mentioned issues.

Regards

Pavel


> --
> Fabien.
>
diff --git a/doc/src/sgml/ref/psql-ref.sgml b/doc/src/sgml/ref/psql-ref.sgml
index e520cdf3ba..9e7030f247 100644
--- a/doc/src/sgml/ref/psql-ref.sgml
+++ b/doc/src/sgml/ref/psql-ref.sgml
@@ -2020,6 +2020,19 @@ CREATE INDEX
   
 
 
+  
+\graw[+] [ filename ]
+\graw[+]  [ |command ]
+
+
+\graw is equivalent to \g, but
+forces unaligned output mode for this query. If +
+is appended to the command name, a column names are displayed.
+
+
+  
+
+
   
 \gset [ prefix ]
 
diff --git a/src/bin/psql/command.c b/src/bin/psql/command.c
index 8cc4de3878..b3461291eb 100644
--- a/src/bin/psql/command.c
+++ b/src/bin/psql/command.c
@@ -332,7 +332,8 @@ exec_command(const char *cmd,
 		status = exec_command_errverbose(scan_state, active_branch);
 	else if (strcmp(cmd, "f") == 0)
 		status = exec_command_f(scan_state, active_branch);
-	else if (strcmp(cmd, "g") == 0 || strcmp(cmd, "gx") == 0)
+	else if (strcmp(cmd, "g") == 0 || strcmp(cmd, "gx") == 0 ||
+			 strcmp(cmd, "graw") == 0 || strcmp(cmd, "graw+") == 0)
 		status = exec_command_g(scan_state, active_branch, cmd);
 	else if (strcmp(cmd, "gdesc") == 0)
 		status = exec_command_gdesc(scan_state, active_branch);
@@ -1232,6 +1233,7 @@ exec_command_f(PsqlScanState scan_state, bool active_branch)
 
 /*
  * \g [filename] -- send query, optionally with output to file/pipe
+ * \graw [filename] -- same as \g with raw format
  * \gx [filename] -- same as \g, with expanded mode forced
  */
 static backslashResult
@@ -1254,6 +1256,10 @@ exec_command_g(PsqlScanState scan_state, bool active_branch, const char *cmd)
 		free(fname);
 		if (strcmp(cmd, "gx") == 0)
 			pset.g_expanded = true;
+		else if (strcmp(cmd, "graw") == 0)
+			pset.g_raw = true;
+		else if (strcmp(cmd, "graw+") == 0)
+			pset.g_raw_header = true;
 		status = PSQL_CMD_SEND;
 	}
 	else
diff --git a/src/bin/psql/common.c b/src/bin/psql/common.c
index 7a91a44b2b..aeec302eae 100644
--- a/src/bin/psql/common.c
+++ b/src/bin/psql/common.c
@@ -865,6 +865,14 @@ PrintQueryTuples(const PGresult *results)
 	if (pset.g_expanded)
 		my_popt.topt.expanded = 1;
 
+	/* one-shot raw output requested by \raw and \graw+ */
+	else if (pset.g_raw || pset.g_raw_header)
+	{
+		my_popt.topt.format = PRINT_UNALIGNED;
+		my_popt.topt.tuples_only = true;
+		my_popt.topt.force_column_header = pset.g_raw_header;
+	}
+
 	/* write output to \g argument, if any */
 	if (pset.gfname)
 	{
@@ -893,6 +901,8 @@ PrintQueryTuples(const PGresult *results)
 }
 
 
+
+
 /*
  * StoreQueryTuple: assuming query result is OK, save data into variables
  *
@@ -1517,6 +1527,10 @@ sendquery_cleanup:
 	/* reset \gx's expanded-mode flag */
 	pset.g_expanded = false;
 
+	/* reset \graw flags */
+	pset.g_raw = false;
+	pset.g_raw_header = false;
+
 	/* reset \gset trigger */
 	if (pset.gset_prefix)
 	{
diff --git a/src/bin/psql/help.c b/src/bin/psql/help.c
index a926c40b9b..e573711434 100644
--- a/src/bin/psql/help.c
+++ b/src/bin/psql/help.c
@@ -167,7 +167,7 @@ slashUsage(unsigned short int pager)
 	 * Use "psql --help=commands | wc" to count correctly.  It's okay to count
 	 * the USE_READLINE line even in builds without that.
 	 */
-	output = PageOutput(125, pager ? &(pset.popt.topt) : NULL);
+	output = PageOutput(126, pager ? &(pset.popt.topt) : NULL);
 
 	fprintf(output, _("General\n"));
 	fprintf(output, _("  \\copyright show PostgreSQL usage and distribution terms\n"));
@@ -176,6 +176,7 @@ slashUsage(unsigned short int pager)
 	fprintf(output, _("  \\g [FILE] or ; execute query (and send results to file or |pipe)\n"));
 	fprintf(output, _("  \\gdesc describe result of query, without executing it\n"));
 	fprintf(output, _("  \\gexec execute query, then execute each value in its result\n"));
+	fprintf(output, _("  \\graw[+] [FILE]as \\g, but forces unaligned raw output mode\n"));
 	fprintf(output, _("  \\gset [PREFIX] execute query and store results in psql variables\n"));
 	fprintf(output, _("  \\gx [FILE] 

Re: [HACKERS] proposal - Default namespaces for XPath expressions (PostgreSQL 11)

2017-11-09 Thread Pavel Stehule
Hi

2017-11-06 14:00 GMT+01:00 Kyotaro HORIGUCHI <
horiguchi.kyot...@lab.ntt.co.jp>:

> Thank you for the new patch.
>
> - The latest patch is missing xpath_parser.h at least since
>   ns-3. That of the first (not-numbered) version was still
>   usable.
>
> - c29c578 conflicts on doc/src/sgml/func.sgml
>
>
> At Sun, 15 Oct 2017 12:06:11 +0200, Pavel Stehule <pavel.steh...@gmail.com>
> wrote in <CAFj8pRCYBH+a6oJoEYUFDUpBQ1ySwtt2CfnFZxs2A
> b9efon...@mail.gmail.com>
> > 2017-10-02 12:22 GMT+02:00 Kyotaro HORIGUCHI <
> > horiguchi.kyot...@lab.ntt.co.jp>:
> >
> > > Hi, thanks for the new patch.
> > >
> > > # The patch is missing xpath_parser.h. That of the first patch was
> usable.
> > >
> > > At Thu, 28 Sep 2017 07:59:41 +0200, Pavel Stehule <
> pavel.steh...@gmail.com>
> > > wrote in 

Re: [HACKERS] SQL procedures

2017-11-08 Thread Pavel Stehule
2017-11-08 15:31 GMT+01:00 Pavel Stehule <pavel.steh...@gmail.com>:

>
>
> 2017-11-08 15:23 GMT+01:00 Peter Eisentraut <peter.eisentraut@2ndquadrant.
> com>:
>
>> On 10/31/17 16:50, Pavel Stehule wrote:
>> > Not sure if disabling RETURN is good idea. I can imagine so optional
>> > returning something like int status can be good idea. Cheaper than
>> > raising a exception.
>>
>> We could allow a RETURN without argument in PL/pgSQL, if you just want
>> to exit early.  That syntax is currently not available, but it should
>> not be hard to add.
>>
>> I don't understand the point about wanting to return an int.  How would
>> you pass that around, since there is no declared return type?
>>
>
> We could create an automatic session variable STATUS. This variable
> would be 0 if the procedure returned without an explicit RETURN value,
> or it could hold a different value specified by RETURN expr.
>
> This value could be read by GET DIAGNOSTICS xxx = STATUS
>
> or something similar.
>

The motivation is to allow some mechanism cheaper than our exceptions.

Regards

Pavel

>
>
>
>> --
>> Peter Eisentraut  http://www.2ndQuadrant.com/
>> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>>
>
>


Re: [HACKERS] SQL procedures

2017-11-08 Thread Pavel Stehule
2017-11-08 15:23 GMT+01:00 Peter Eisentraut <
peter.eisentr...@2ndquadrant.com>:

> On 10/31/17 16:50, Pavel Stehule wrote:
> > Not sure if disabling RETURN is good idea. I can imagine so optional
> > returning something like int status can be good idea. Cheaper than
> > raising a exception.
>
> We could allow a RETURN without argument in PL/pgSQL, if you just want
> to exit early.  That syntax is currently not available, but it should
> not be hard to add.
>
> I don't understand the point about wanting to return an int.  How would
> you pass that around, since there is no declared return type?
>

We could create an automatic session variable STATUS. This variable
would be 0 if the procedure returned without an explicit RETURN value,
or it could hold a different value specified by RETURN expr.

This value could be read by GET DIAGNOSTICS xxx = STATUS

or something similar.



> --
> Peter Eisentraut  http://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>
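A sketch of the STATUS idea in hypothetical syntax - neither procedures returning a status nor GET DIAGNOSTICS ... STATUS exist; this only illustrates the proposal:

```sql
-- hypothetical syntax, not implemented
CREATE PROCEDURE transfer(src int, dst int, amount numeric)
LANGUAGE plpgsql
AS $$
BEGIN
  IF amount <= 0 THEN
    RETURN 1;   -- cheap failure status instead of RAISE EXCEPTION
  END IF;
  -- ... do the work ...
  RETURN;       -- implicit STATUS 0
END;
$$;

DO $$
DECLARE s int;
BEGIN
  CALL transfer(1, 2, -5);
  GET DIAGNOSTICS s = STATUS;   -- proposed reader for the status
  RAISE NOTICE 'status: %', s;  -- would print 1 here
END;
$$;
```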


Re: [HACKERS] possible encoding issues with libxml2 functions

2017-11-05 Thread Pavel Stehule
Hi

2017-11-05 4:07 GMT+01:00 Noah Misch <n...@leadboat.com>:

> On Tue, Oct 17, 2017 at 06:06:40AM +0200, Pavel Stehule wrote:
> > Please, if you can, try to write it. I am a little bit lost :)
>
> I'm attaching the patch I desired.  Please review.  This will probably miss
> this week's minor releases.  If there's significant support, I could
> instead
> push before the wrap.
>

I have no objection to this solution. It fixes my regression tests too.

I checked it, and it is working.

Regards

Pavel


Re: [HACKERS] proposal: schema variables

2017-11-02 Thread Pavel Stehule
2017-11-02 13:35 GMT+01:00 Robert Haas <robertmh...@gmail.com>:

> On Thu, Oct 26, 2017 at 12:51 PM, Pavel Stehule <pavel.steh...@gmail.com>
> wrote:
> > The variables can be modified by SQL command SET (this is taken from
> > standard, and it natural)
> >
> > SET varname = expression;
>
> Overloading SET to handle both variables and GUCs seems likely to
> create problems, possibly including security problems.  For example,
> maybe a security-definer function could leave behind variables to
> trick the calling code into failing to set GUCs that it intended to
> set.  Or maybe creating a variable at the wrong time will just break
> things randomly.
>

The syntax CREATE OR REPLACE FUNCTION xxx $$ ... $$ SET guc = ... always
relates only to GUCs, so there should not be any security risk there.

It is another reason why GUCs and variables should be separated.

I know there is a risk of name collisions. There are two possibilities:

a) use a different keyword - but that departs from SQL/PSM and from other
databases.

b) detect a possible collision and raise an error when an assignment is
ambiguous. I am thinking about a solution similar to the one used in
plpgsql, where a collision between a SQL identifier and a plpgsql variable
is possible.
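To illustrate the kind of ambiguity option (b) would have to detect - a sketch using the proposed CREATE VARIABLE syntax, where the variable name deliberately shadows an existing GUC:

```sql
-- hypothetical: a schema variable shadowing the GUC "application_name"
CREATE VARIABLE public.application_name AS text;

-- Ambiguous: the GUC application_name, or public.application_name?
-- Under option (b), this assignment would raise an error.
SET application_name = 'myapp';
```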

Regards

Pavel




>
> --
> Robert Haas
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
>


Re: [HACKERS] proposal: schema variables

2017-11-02 Thread Pavel Stehule
2017-11-02 16:07 GMT+01:00 Craig Ringer <cr...@2ndquadrant.com>:

> On 26 October 2017 at 15:21, Pavel Stehule <pavel.steh...@gmail.com>
> wrote:
> > Hi,
> >
> > I propose a  new database object - a variable.
>
> Didn't we have a pretty long discussion about this already in
>
> Yeah.
>
> https://www.postgresql.org/message-id/flat/CAMsr%2BYF0G8_
> FehQyFS8gSfnEer9OPsMOvpfniDJOVGQzJzHzsw%40mail.gmail.com#CAMsr+YF0G8_
> fehqyfs8gsfneer9opsmovpfnidjovgqzjzh...@mail.gmail.com
>
> It'd be nice if you summarised any outcomes from that and addressed
> it, rather than taking this as a new topic.
>

I am sorry. This thread follows the one mentioned, and I started it with a
small recapitulation.

Regards

Pavel


> --
>  Craig Ringer   http://www.2ndQuadrant.com/
>  PostgreSQL Development, 24x7 Support, Training & Services
>


Re: [HACKERS] proposal: schema variables

2017-11-02 Thread Pavel Stehule
2017-11-02 16:35 GMT+01:00 Nico Williams <n...@cryptonector.com>:

> On Thu, Nov 02, 2017 at 06:05:54PM +0530, Robert Haas wrote:
> > On Thu, Oct 26, 2017 at 12:51 PM, Pavel Stehule <pavel.steh...@gmail.com>
> wrote:
> > > The variables can be modified by SQL command SET (this is taken from
> > > standard, and it natural)
> > >
> > > SET varname = expression;
> >
> > Overloading SET to handle both variables and GUCs seems likely to
> > create problems, possibly including security problems.  For example,
> > maybe a security-definer function could leave behind variables to
> > trick the calling code into failing to set GUCs that it intended to
> > set.  Or maybe creating a variable at the wrong time will just break
> > things randomly.
>
> That's already true of GUCs, since there are no access controls on
> set_config()/current_setting().
>
> Presumably "schema variables" would really just be GUC-like and not at
> all like lexically scoped variables.  And also subject to access
> controls, thus an overall improvement on set_config()/current_setting().
>
> With access controls, GUCs could become schema variables, and settings
> from postgresql.conf could move into the database itself (which I think
> would be nice).
>

I am sorry, but I don't plan that. The behavior of GUCs is too different
from the behavior of variables. But I am planning for system GUCs to be
"moved" to pg_catalog, so it becomes possible to identify any object exactly.
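A sketch of how schema qualification could disambiguate the two, assuming the proposed move of system GUCs under pg_catalog (today SET does not accept pg_catalog-qualified GUC names):

```sql
SET pg_catalog.work_mem = '64MB';  -- unambiguously the system GUC (proposed)
SET myschema.myvar = 10;           -- unambiguously the proposed schema variable
```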

Regards

Pavel

>
> Nico
> --
>


Re: [HACKERS] SQL/JSON in PostgreSQL

2017-11-02 Thread Pavel Stehule
Hi

2017-11-02 3:39 GMT+01:00 Peter Eisentraut :

> Could someone clarify the status of this patch set?  It has been in
> "Waiting" mode since the previous CF and no new patch, just a few
> questions from the author.
>

It was in the "needs review" state. I looked at the patch and found some
issues, so I sent a mail about them and switched the state to "waiting on
author".

Regards

Pavel

>
> --
> Peter Eisentraut  http://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>


Re: [HACKERS] proposal: schema variables

2017-11-01 Thread Pavel Stehule
2017-11-01 19:03 GMT+01:00 Mark Dilger :

>
> > Comments, notes?
>
> How would variables behave on transaction rollback?
>
> CREATE TEMP VARIABLE myvar;
> SET myvar := 1;
> BEGIN;
> SET myvar := 2;
> COMMIT;
> BEGIN;
> SET myvar := 3;
> ROLLBACK;
> SELECT myvar;
>
> How would variables behave when modified in a procedure
> that aborts rather than returning cleanly?
>
>
The result is 3.

When you create a variable like you did, there is no relation between the
variable's content and transactions. Almost everywhere, session, package,
and schema variables are non-transactional. That could be changed, but with
a negative impact on performance - so I propose a relatively simple
solution: reset to the default on rollback when the variable was changed in
the transaction - but that is not the default behavior.

Variables are variables as you know them from PL/pgSQL, but the holder is
not a plpgsql function; the holder is a schema in this case. The variable
(metadata) is permanent. The content of the variable is session-based and
non-transactional.
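Under the proposed syntax, the opt-in reset described above could be requested at creation time - a sketch, not working syntax today:

```sql
-- hypothetical: content is reset to the default at transaction end
CREATE VARIABLE public.myvar AS integer DEFAULT 0
  ON TRANSACTION END RESET;
```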

Regards

Pavel


> mark
>


Re: [HACKERS] [PATCH] Add two-arg form of current_setting(NAME, FALLBACK)

2017-11-01 Thread Pavel Stehule
Hi

It is better to send it as an attachment.

Regards

Pavel

2015-03-19 23:41 GMT+01:00 David Christensen :

> The two-arg form of the current_setting() function will allow a
> fallback value to be returned instead of throwing an error when an
> unknown GUC is provided.  This would come in most useful when using
> custom GUCs; e.g.:
>
>   -- errors out if the 'foo.bar' setting is unset
>   SELECT current_setting('foo.bar');
>
>   -- returns current setting of foo.bar, or 'default' if not set
>   SELECT current_setting('foo.bar', 'default')
>
> This would save you having to wrap the use of the function in an
> exception block just to catch and utilize a default setting value
> within a function.
> ---
>  src/backend/utils/misc/guc.c  | 50 ++
> ++---
>  src/include/catalog/pg_proc.h |  2 ++
>  src/include/utils/builtins.h  |  1 +
>  src/include/utils/guc.h   |  1 +
>  src/test/regress/expected/guc.out | 19 +++
>  src/test/regress/sql/guc.sql  | 12 ++
>  6 files changed, 82 insertions(+), 3 deletions(-)
>
> diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
> index 26275bd..002a926 100644
> --- a/src/backend/utils/misc/guc.c
> +++ b/src/backend/utils/misc/guc.c
> @@ -7703,13 +7703,32 @@ ShowAllGUCConfig(DestReceiver *dest)
>  char *
>  GetConfigOptionByName(const char *name, const char **varname)
>  {
> +   return GetConfigOptionByNameFallback(name, NULL, varname);
> +}
> +
> +/*
> + * Return GUC variable value by name; optionally return canonical form of
> + * name.  If GUC is NULL then optionally return a fallback value instead
> of an
> + * error.  Return value is palloc'd.
> + */
> +char *
> +GetConfigOptionByNameFallback(const char *name, const char
> *default_value, const char **varname)
> +{
> struct config_generic *record;
>
> record = find_option(name, false, ERROR);
> if (record == NULL)
> -   ereport(ERROR,
> -   (errcode(ERRCODE_UNDEFINED_OBJECT),
> -  errmsg("unrecognized configuration parameter
> \"%s\"", name)));
> +   {
> +   if (default_value) {
> +   return pstrdup(default_value);
> +   }
> +   else
> +   {
> +   ereport(ERROR,
> +   (errcode(ERRCODE_UNDEFINED_
> OBJECT),
> +  errmsg("unrecognized configuration
> parameter \"%s\"", name)));
> +   }
> +   }
> if ((record->flags & GUC_SUPERUSER_ONLY) && !superuser())
> ereport(ERROR,
> (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
> @@ -8008,6 +8027,31 @@ show_config_by_name(PG_FUNCTION_ARGS)
>  }
>
>  /*
> + * show_config_by_name_fallback - equiv to SHOW X command but implemented
> as
> + * a function.  If X does not exist, return a fallback datum instead of
> erroring
> + */
> +Datum
> +show_config_by_name_fallback(PG_FUNCTION_ARGS)
> +{
> +   char   *varname;
> +   char   *varfallback;
> +   char   *varval;
> +
> +   /* Get the GUC variable name */
> +   varname = TextDatumGetCString(PG_GETARG_DATUM(0));
> +
> +   /* Get the fallback value */
> +   varfallback = TextDatumGetCString(PG_GETARG_DATUM(1));
> +
> +   /* Get the value */
> +   varval = GetConfigOptionByNameFallback(varname, varfallback,
> NULL);
> +
> +   /* Convert to text */
> +   PG_RETURN_TEXT_P(cstring_to_text(varval));
> +}
> +
> +
> +/*
>   * show_all_settings - equiv to SHOW ALL command but implemented as
>   * a Table Function.
>   */
> diff --git a/src/include/catalog/pg_proc.h b/src/include/catalog/pg_proc.h
> index 6a757f3..71efed2 100644
> --- a/src/include/catalog/pg_proc.h
> +++ b/src/include/catalog/pg_proc.h
> @@ -3025,6 +3025,8 @@ DESCR("convert bitstring to int8");
>
>  DATA(insert OID = 2077 (  current_setting  PGNSP PGUID 12 1 0 0 0 f f
> f f t f s 1 0 25 "25" _null_ _null_ _null_ _null_ show_config_by_name
> _null_ _null_ _null_ ));
>  DESCR("SHOW X as a function");
> +DATA(insert OID = 3280 (  current_setting  PGNSP PGUID 12 1 0 0 0 f f
> f f t f s 2 0 25 "25 25" _null_ _null_ _null_ _null_
> show_config_by_name_fallback _null_ _null_ _null_ ));
> +DESCR("SHOW X as a function");
>  DATA(insert OID = 2078 (  set_config   PGNSP PGUID 12 1 0 0 0 f f
> f f f f v 3 0 25 "25 25 16" _null_ _null_ _null_ _null_ set_config_by_name
> _null_ _null_ _null_ ));
>  DESCR("SET X as a function");
>  DATA(insert OID = 2084 (  pg_show_all_settings PGNSP PGUID 12 1 1000 0 0
> f f f f t t s 0 0 2249 "" 
> "{25,25,25,25,25,25,25,25,25,25,25,1009,25,25,25,23}"
> "{o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o}" "{name,setting,unit,category,
> short_desc,extra_desc,context,vartype,source,min_val,max_
> val,enumvals,boot_val,reset_val,sourcefile,sourceline}" _null_
> show_all_settings _null_ _null_ 

Re: [HACKERS] Re: proposal - psql: possibility to specify sort for describe commands, when size is printed

2017-11-01 Thread Pavel Stehule
Hi


In general, this approach looks good for me.
> Regarding current state of patch, I'd like to see new options documented.
> Also, it would be better to replace "bool sort_size" with enum assuming
> there could be other sorting orders in future.
>

I am sending an updated patch with some basic documentation.

Regards

Pavel

>
> --
> Alexander Korotkov
> Postgres Professional: http://www.postgrespro.com
> The Russian Postgres Company
>
diff --git a/doc/src/sgml/ref/psql-ref.sgml b/doc/src/sgml/ref/psql-ref.sgml
index e520cdf3ba..7d816fe701 100644
--- a/doc/src/sgml/ref/psql-ref.sgml
+++ b/doc/src/sgml/ref/psql-ref.sgml
@@ -1303,10 +1303,10 @@ testdb=
 
   
 \dE[S+] [ pattern ]
-\di[S+] [ pattern ]
-\dm[S+] [ pattern ]
+\di[Ssd+] [ pattern ]
+\dm[Ssd+] [ pattern ]
 \ds[S+] [ pattern ]
-\dt[S+] [ pattern ]
+\dt[Ssd+] [ pattern ]
 \dv[S+] [ pattern ]
 
 
@@ -1328,6 +1328,13 @@ testdb=
 pattern or the S modifier to include system
 objects.
 
+
+
+When the command contains s, the result is
+sorted by size. When the command contains d, the
+result is in descending order. \dtsd+ shows the list
+of tables sorted by size in descending order.
+
 
   
 
@@ -2253,7 +2260,7 @@ SELECT
 
 
   
-\l[+] or \list[+] [ pattern ]
+\l[sd+] or \list[+] [ pattern ]
 
 
 List the databases in the server and show their names, owners,
@@ -2265,6 +2272,12 @@ SELECT
 (Size information is only available for databases that the current
 user can connect to.)
 
+
+
+If s is used in the command name, the list is
+sorted by size. When d is used as well, the result
+is in descending order.
+
 
   
 
diff --git a/src/bin/psql/command.c b/src/bin/psql/command.c
index 041b5e0c87..aae88b08b4 100644
--- a/src/bin/psql/command.c
+++ b/src/bin/psql/command.c
@@ -349,8 +349,9 @@ exec_command(const char *cmd,
 		status = exec_command_include(scan_state, active_branch, cmd);
 	else if (strcmp(cmd, "if") == 0)
 		status = exec_command_if(scan_state, cstack, query_buf);
-	else if (strcmp(cmd, "l") == 0 || strcmp(cmd, "list") == 0 ||
-			 strcmp(cmd, "l+") == 0 || strcmp(cmd, "list+") == 0)
+	else if (strcmp(cmd, "list") == 0 || strcmp(cmd, "list+") == 0 ||
+			 strcmp(cmd, "l") == 0 || strncmp(cmd, "l+", 2) == 0 ||
+			 strncmp(cmd, "ls", 2) == 0)
 		status = exec_command_list(scan_state, active_branch, cmd);
 	else if (strncmp(cmd, "lo_", 3) == 0)
 		status = exec_command_lo(scan_state, active_branch, cmd);
@@ -702,7 +703,9 @@ exec_command_d(PsqlScanState scan_state, bool active_branch, const char *cmd)
 	{
 		char	   *pattern;
 		bool		show_verbose,
-	show_system;
+	show_system,
+	sort_desc;
+		sortby_type	sortby;
 
 		/* We don't do SQLID reduction on the pattern yet */
 		pattern = psql_scan_slash_option(scan_state,
@@ -711,6 +714,9 @@ exec_command_d(PsqlScanState scan_state, bool active_branch, const char *cmd)
 		show_verbose = strchr(cmd, '+') ? true : false;
 		show_system = strchr(cmd, 'S') ? true : false;
 
+		sortby = SORTBY_SCHEMA_NAME;
+		sort_desc = false;
+
 		switch (cmd[1])
 		{
 			case '\0':
@@ -720,7 +726,8 @@ exec_command_d(PsqlScanState scan_state, bool active_branch, const char *cmd)
 	success = describeTableDetails(pattern, show_verbose, show_system);
 else
 	/* standard listing of interesting things */
-	success = listTables("tvmsE", NULL, show_verbose, show_system);
+	success = listTables("tvmsE", NULL, show_verbose, show_system,
+		 false, false);
 break;
 			case 'A':
 success = describeAccessMethods(pattern, show_verbose);
@@ -789,12 +796,20 @@ exec_command_d(PsqlScanState scan_state, bool active_branch, const char *cmd)
 success = describeTypes(pattern, show_verbose, show_system);
 break;
 			case 't':
-			case 'v':
 			case 'm':
 			case 'i':
+if (strlen(cmd) >= 2)
+{
+	if (strchr(&cmd[2], 's') != NULL)
+		sortby = SORTBY_SIZE;
+	sort_desc = strchr(&cmd[2], 'd') ? true : false;
+}
+
+			case 'v':
 			case 's':
 			case 'E':
-success = listTables(&cmd[1], pattern, show_verbose, show_system);
+success = listTables(&cmd[1], pattern, show_verbose, show_system,
+	 sortby, sort_desc);
 break;
 			case 'r':
 if (cmd[2] == 'd' && cmd[3] == 's')
@@ -1655,13 +1670,17 @@ exec_command_list(PsqlScanState scan_state, bool active_branch, const char *cmd)
 	{
 		char	   *pattern;
 		bool		show_verbose;
+		bool		sort_desc;
+		sortby_type	sortby;
 
 		pattern = psql_scan_slash_option(scan_state,
 		 OT_NORMAL, NULL, true);
 
 		show_verbose = strchr(cmd, '+') ? true : false;
+		sortby = strchr(cmd, 's') != NULL ? SORTBY_SIZE : SORTBY_NAME;
+		sort_desc = strchr(cmd, 'd') ? true : false;
 
-		success = listAllDbs(pattern, show_verbose);
+		success = listAllDbs(pattern, show_verbose, 

Re: [HACKERS] Dynamic result sets from procedures

2017-11-01 Thread Pavel Stehule
2017-10-31 22:08 GMT+01:00 Peter Eisentraut <
peter.eisentr...@2ndquadrant.com>:

> This patch is more of a demo of what could be done, not my primary
> focus, but if there is interest and some assistance, maybe we can make
> something out of it.  This patch also goes on top of "SQL procedures"
> version 1.
>
> The purpose is to return multiple result sets from a procedure.  This
> is, I think, a common request when coming from MS SQL and DB2.  MS SQL
> has a completely different procedure syntax, but this proposal is
> compatible with DB2, which as usual was the model for the SQL standard.
> So this is what it can do:
>
> CREATE PROCEDURE pdrstest1()
> LANGUAGE SQL
> AS $$
> DECLARE c1 CURSOR WITH RETURN FOR SELECT * FROM cp_test2;
> DECLARE c2 CURSOR WITH RETURN FOR SELECT * FROM cp_test3;
> $$;
>
> CALL pdrstest1();
>
> and that returns those two result sets to the client.
>
> That's all it does for now.  Things get more complex when you consider
> nested calls.  The SQL standard describes additional facilities how an
> outer procedure can accept a called procedure's result sets, or not.  In
> the thread on transaction control, I mentioned that we might need some
> kind of procedure call stack.  Something like that would be needed here
> as well.  There are also probably some namespacing issues around the
> cursors that need more investigation.
>
> A more mundane issue is how we get psql to print multiple result sets.
> I have included here a patch that does that, and you can see that new
> result sets start popping up in the regression tests already.  There is
> also one new error that needs further investigation.
>
> We need to think about how the \timing option should work in such
> scenarios.  Right now it does
>
> start timer
> run query
> fetch result
> stop timer
> print result
>
> If we had multiple result sets, the most natural flow would be
>
> start timer
> run query
> while result sets
> fetch result
> print result
> stop timer
> print time
>
> but that would include the printing time in the total time, which the
> current code explicitly does not.  We could also temporarily save the
> result sets, like
>
> start timer
> run query
> while result sets
> fetch result
> stop timer
> foreach result set
> print result
>
> but that would have a lot more overhead, potentially.
>
> Thoughts?
>

Does the total time make sense in this case?

Shouldn't the total time instead relate to each fetched result set?

Regards

Pavel


> --
> Peter Eisentraut  http://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>
>
> --
> Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-hackers
>
>


Re: [HACKERS] proposal: schema variables

2017-10-31 Thread Pavel Stehule
2017-11-01 6:07 GMT+01:00 Serge Rielau :

> "Although the syntax of CREATE TEMPORARY TABLE resembles that of the SQL
> standard, the effect is not the same. In the standard, temporary tables are
> defined just once and automatically exist (starting with empty contents) in
> every session that needs them. PostgreSQL instead requires each session
> to issue its own CREATE TEMPORARY TABLE command for each temporary table
> to be used. This allows different sessions to use the same temporary table
> name for different purposes, whereas the standard's approach constrains all
> instances of a given temporary table name to have the same table structure.”
> Yeah, that’s a DECLAREd table in my book. No wonder we didn’t link up.
>

This is the known discussion about local vs. global temp tables in
PostgreSQL, and the TODO item: implementation of global temp tables in
Postgres.

This temporary behavior is a marginal part of the proposal - so I can
remove it from the proposal and later open a discussion about CREATE
TEMPORARY VARIABLE versus DECLARE VARIABLE.

Regards

Pavel

Serge

>


Re: [HACKERS] proposal: schema variables

2017-10-31 Thread Pavel Stehule
2017-10-31 22:28 GMT+01:00 srielau :

> Pavel,
>
> There is no
> DECLARE TEMP CURSOR
> or
> DECLARE TEMP variable in PLpgSQL
> and
>

Sure - DECLARE TEMP makes no sense; I was talking about the similarity
between DECLARE and CREATE TEMP.


CREATE TEMP TABLE has a different meaning from what I understand you
> envision for variables.
>
> But maybe I'm mistaken. Your original post did not describe the entire
> syntax:
> CREATE [TEMP] VARIABLE [IF NOT EXISTS] name AS type
>   [ DEFAULT expression ] [[NOT] NULL]
>   [ ON TRANSACTION END { RESET | DROP } ]
>   [ { VOLATILE | STABLE } ];
>
> Especially the TEMP is not spelled out and how its presence affects or
> doesn't ON TRANSACTION END.
> So may be if you elaborate I understand where you are coming from.
>

TEMP has the same functionality (and implementation) as our temp tables -
at session end the temp variables are destroyed, but they can also be tied
to a transaction.
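A sketch combining both scopes under the proposed syntax (not implemented):

```sql
-- hypothetical: session-private variable, dropped at transaction end
CREATE TEMP VARIABLE tempvar AS integer DEFAULT 0
  ON TRANSACTION END DROP;
```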



>
>
>
>
> --
> Sent from: http://www.postgresql-archive.org/PostgreSQL-hackers-
> f1928748.html
>
>
> --
> Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-hackers
>


Re: [HACKERS] proposal: schema variables

2017-10-31 Thread Pavel Stehule
2017-10-31 22:08 GMT+01:00 Serge Rielau :

> Pavel,
>
> I can imagine, so DECLARE command will be introduced as short cut for
> CREATE TEMP VARIABLE, but in this moment I would not to open this topic. I
> afraid of bikeshedding and I hope so CREATE TEMP VAR is anough.
>
> Language is important because language stays.
> You choice of syntax will outlive your code and possibly yourself.
>

Sure. But at this moment I don't see any difference between DECLARE
VARIABLE and CREATE TEMP VARIABLE other than the TEMP keyword.

Regards

Pavel


> My 2 cents
> Serge
>


Re: [HACKERS] SQL procedures

2017-10-31 Thread Pavel Stehule
2017-10-31 18:23 GMT+01:00 Peter Eisentraut <
peter.eisentr...@2ndquadrant.com>:

> I've been working on SQL procedures.  (Some might call them "stored
> procedures", but I'm not aware of any procedures that are not stored, so
> that's not a term that I'm using here.)
>
> Everything that follows is intended to align with the SQL standard, at
> least in spirit.
>
> This first patch does a bunch of preparation work.  It adds the
> CREATE/ALTER/DROP PROCEDURE commands and the CALL statement to call a
> procedure.  It also adds ROUTINE syntax which can refer to a function or
> procedure.  I have extended that to include aggregates.  And then there
> is a bunch of leg work, such as psql and pg_dump support.  The
> documentation is a lot of copy-and-paste right now; that can be
> revisited sometime.  The provided procedural languages (an ever more
> confusing term) each needed a small touch-up to handle pg_proc entries
> with prorettype == 0.
>
> Right now, there is no support for returning values from procedures via
> OUT parameters.  That will need some definitional pondering; and see
> also below for a possible alternative.
>
> With this, you can write procedures that are somewhat compatible with
> DB2, MySQL, and to a lesser extent Oracle.
>
> Separately, I will send patches that implement (the beginnings of) two
> separate features on top of this:
>
> - Transaction control in procedure bodies
>
> - Returning multiple result sets
>
> (In various previous discussions on "real stored procedures" or
> something like that, most people seemed to have one of these two
> features in mind.  I think that depends on what other SQL systems one
> has worked with previously.)
>

I am not sure that disabling RETURN is a good idea. I can imagine that
optionally returning something like an int status could be useful - cheaper
than raising an exception.
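For comparison, signalling a non-fatal outcome today requires an exception (standard PL/pgSQL; the int-status alternative above is hypothetical, and found_work() is a placeholder):

```sql
-- today: a routine must raise to signal "nothing done", which is expensive
CREATE FUNCTION do_work() RETURNS void AS $$
BEGIN
  IF NOT found_work() THEN          -- placeholder condition
    RAISE EXCEPTION 'nothing to do';
  END IF;
END;
$$ LANGUAGE plpgsql;
```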

Regards

Pavel


> --
> Peter Eisentraut  http://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>
>
> --
> Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-hackers
>
>


Re: [HACKERS] proposal: schema variables

2017-10-31 Thread Pavel Stehule
Hi

2017-10-30 22:42 GMT+01:00 srielau :

> Pavel,
>
> I wouldn't put in the DROP option.
> Or at least not in that form of syntax.
>
> By convention CREATE persists DDL and makes object definitions visible
> across sessions.
> DECLARE defines session private objects which cannot collide with other
> sessions.
>
> If you want variables with a short lifetime that get dropped at the end of
> the transaction that by definition would imply a session private object. So
> it ought to be DECLARE'd.
>
> As far as I can see PG has been following this practice so far.
>

I think there is a little bit of overlap between the DECLARE and CREATE
TEMP VARIABLE commands. With a DECLARE command, you usually have no control
over when the variable is destroyed. For CREATE TEMP there is DROP IF
EXISTS, but it should not be needed.

It should be very similar to our current temporary tables, which are
created in a session-related temp schema.

I can imagine that a DECLARE command could be introduced as a shortcut for
CREATE TEMP VARIABLE, but at this moment I would rather not open that
topic. I am afraid of bikeshedding, and I hope CREATE TEMP VARIABLE is
enough.

Regards

Pavel


> Cheers
> Serge Rielau
> Salesforce.com
>
>
>
> --
> Sent from: http://www.postgresql-archive.org/PostgreSQL-hackers-
> f1928748.html
>
>
> --
> Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-hackers
>


Re: [HACKERS] SQL procedures

2017-10-31 Thread Pavel Stehule
2017-10-31 18:23 GMT+01:00 Peter Eisentraut <
peter.eisentr...@2ndquadrant.com>:

> I've been working on SQL procedures.  (Some might call them "stored
> procedures", but I'm not aware of any procedures that are not stored, so
> that's not a term that I'm using here.)
>
> Everything that follows is intended to align with the SQL standard, at
> least in spirit.
>
> This first patch does a bunch of preparation work.  It adds the
> CREATE/ALTER/DROP PROCEDURE commands and the CALL statement to call a
> procedure.  It also adds ROUTINE syntax which can refer to a function or
> procedure.  I have extended that to include aggregates.  And then there
> is a bunch of leg work, such as psql and pg_dump support.  The
> documentation is a lot of copy-and-paste right now; that can be
> revisited sometime.  The provided procedural languages (an ever more
> confusing term) each needed a small touch-up to handle pg_proc entries
> with prorettype == 0.
>
> Right now, there is no support for returning values from procedures via
> OUT parameters.  That will need some definitional pondering; and see
> also below for a possible alternative.
>
> With this, you can write procedures that are somewhat compatible with
> DB2, MySQL, and to a lesser extent Oracle.
>
> Separately, I will send patches that implement (the beginnings of) two
> separate features on top of this:
>
> - Transaction control in procedure bodies
>
> - Returning multiple result sets
>
> (In various previous discussions on "real stored procedures" or
> something like that, most people seemed to have one of these two
> features in mind.  I think that depends on what other SQL systems one
> has worked with previously.)
>

Great. I hope I can help with testing.

Regards

Pavel

>
> --
> Peter Eisentraut  http://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>
>
> --
> Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-hackers
>
>


Re: [HACKERS] Re: proposal - psql: possibility to specify sort for describe commands, when size is printed

2017-10-28 Thread Pavel Stehule
2017-10-28 23:35 GMT+02:00 Alexander Korotkov <a.korot...@postgrespro.ru>:

> On Sat, Oct 28, 2017 at 3:46 PM, Pavel Stehule <pavel.steh...@gmail.com>
> wrote:
>
>> 2017-09-22 21:31 GMT+02:00 Pavel Stehule <pavel.steh...@gmail.com>:
>>
>>>
>>>
>>> 2017-09-22 21:12 GMT+02:00 Peter Eisentraut <
>>> peter.eisentr...@2ndquadrant.com>:
>>>
>>>> On 9/22/17 09:16, Pavel Stehule wrote:
>>>> > Example: somebody set SORT_COLUMNS to schema_name value. This is
>>>> > nonsense for \l command
>>>> >
>>>> > Now, I am thinking so more correct and practical design is based on
>>>> > special mode, activated by variable
>>>> >
>>>> > PREFER_SIZE_SORT .. (off, asc, desc)
>>>> >
>>>> > This has sense for wide group of commands that can show size. And when
>>>> > size is not visible, then this option is not active.
>>>>
>>>> Maybe this shouldn't be a variable at all.  It's not like you'll set
>>>> this as a global preference.  You probably want it for one command only.
>>>>  So a per-command option might make more sense.
>>>>
>>>
>>> Sure, I cannot to know, what users will do. But, when I need to see a
>>> size of objects, then I prefer the sort by size desc every time. If I need
>>> to find some object, then I can to use a searching in pager. So in my case,
>>> this settings will be in psqlrc. In GoodData we used years own
>>> customization - the order by size was hardcoded and nobody reported me any
>>> issue.
>>>
>>> Alexander proposed some per command option, but current syntax of psql
>>> commands don't allows some simple parametrization. If it can be user
>>> friendly, then it should be short. From implementation perspective, it
>>> should be simply parsed. It should be intuitive too - too much symbols
>>> together is not good idea.
>>>
>>> Maybe some prefix design - but it is not design for common people
>>> (although these people don't use psql usually)
>>>
>>> '\sort size \dt ?
>>>
>>> \dt:sort_by_size
>>> \dt+:sort_by_size ?
>>>
>>> I don't see any good design in this direction
>>>
>>>
>> I though about Alexander proposal, and I am thinking so it can be
>> probably best if we respect psql design. I implemented two command suffixes
>> (supported only when it has sense) "s" sorted by size and "d" as descent
>>
>> so list of tables can be sorted with commands:
>>
>> \dt+sd (in this case, the order is not strict), so command
>> \dtsd+ is working too (same \disd+ or \di+sd)
>>
>> These two chars are acceptable. Same principle is used for \l command
>>
>> \lsd+ or \l+sd
>>
>> What do you think about it?
>>
>
> I think \lsd+ command would be another postgres meme :)
> BTW, are you going to provide an ability to sort by name, schema?
>

That makes sense only for tables - probably only \dtn, with "n" for name.


> --
> Alexander Korotkov
> Postgres Professional: http://www.postgrespro.com
> The Russian Postgres Company
>


Re: [HACKERS] proposal: schema variables

2017-10-28 Thread Pavel Stehule
Hi

2017-10-28 16:24 GMT+02:00 Chris Travers <chris.trav...@adjust.com>:

>
>
> On Thu, Oct 26, 2017 at 9:21 AM, Pavel Stehule <pavel.steh...@gmail.com>
> wrote:
>
>> Hi,
>>
>> I propose a  new database object - a variable. The variable is persistent
>> object, that holds unshared session based not transactional in memory value
>> of any type. Like variables in any other languages. The persistence is
>> required for possibility to do static checks, but can be limited to session
>> - the variables can be temporal.
>>
>> My proposal is related to session variables from Sybase, MSSQL or MySQL
>> (based on prefix usage @ or @@), or package variables from Oracle (access
>> is controlled by scope), or schema variables from DB2. Any design is coming
>> from different sources, traditions and has some advantages or
>> disadvantages. The base of my proposal is usage schema variables as session
>> variables for stored procedures. It should to help to people who try to
>> port complex projects to PostgreSQL from other databases.
>>
>> The Sybase  (T-SQL) design is good for interactive work, but it is weak
>> for usage in stored procedures - the static check is not possible. Is not
>> possible to set some access rights on variables.
>>
>> The ADA design (used on Oracle) based on scope is great, but our
>> environment is not nested. And we should to support other PL than PLpgSQL
>> more strongly.
>>
>> There is not too much other possibilities - the variable that should be
>> accessed from different PL, different procedures (in time) should to live
>> somewhere over PL, and there is the schema only.
>>
>> The variable can be created by CREATE statement:
>>
>> CREATE VARIABLE public.myvar AS integer;
>> CREATE VARIABLE myschema.myvar AS mytype;
>>
>> CREATE [TEMP] VARIABLE [IF NOT EXISTS] name AS type
>>   [ DEFAULT expression ] [[NOT] NULL]
>>   [ ON TRANSACTION END { RESET | DROP } ]
>>   [ { VOLATILE | STABLE } ];
>>
>> It is dropped by command DROP VARIABLE  [ IF EXISTS] varname.
>>
>> The access rights is controlled by usual access rights - by commands
>> GRANT/REVOKE. The possible rights are: READ, WRITE
>>
>> The variables can be modified by SQL command SET (this is taken from
>> standard, and it natural)
>>
>> SET varname = expression;
>>
>> Unfortunately we use the SET command for different purpose. But I am
>> thinking so we can solve it with few tricks. The first is moving our GUC to
>> pg_catalog schema. We can control the strictness of SET command. In one
>> variant, we can detect custom GUC and allow it, in another we can disallow
>> a custom GUC and allow only schema variables. A new command LET can be
>> alternative.
>>
>> The variables should be used in queries implicitly (without JOIN)
>>
>> SELECT varname;
>>
>> The SEARCH_PATH is used, when varname is located. The variables can be
>> used everywhere where query parameters are allowed.
>>
>> I hope so this proposal is good enough and simple.
>>
>> Comments, notes?
>>
>
>
> I have a question on this.  Since one can issue set commands on arbitrary
> settings (and later ALTER database/role/system on settings you have created
> in the current session) I am wondering how much overlap there is between a
> sort of extended GUC with custom settings and variables.
>
> Maybe it would be simpler to treat variables and GUC settings to be
> similar and see what can be done to extend GUC in this way?
>
> I mean if instead we allowed restricting SET to known settings then we
> could have a CREATE SETTING command which would behave like this and then
> use SET the same way across both.
>
> In essence I am wondering if this really needs to be as separate from GUC
> as you are proposing.
>
> If done this way then:
>
> 1.  You could issue grant or revoke on GUC settings, allowing some users
> but not others to set things like work_mem for their queries
> 2.  You could specify allowed types in custom settings.
> 3.  In a subsequent stage you might be able to SELECT  INTO
> setting_name FROM ;  allowing access to setting writes based on queries.
>
>
Creating the database objects and the necessary infrastructure is the
simplest task of this project. I would be happier if there were zero
intersection, because variables and GUCs are designed for different
purposes. But due to the SET keyword, an intersection does exist.

When I think about it, I have only one, but important, reason why I prefer
designing a new type of database object - GUCs are stack based with
different default g

Re: [HACKERS] Re: proposal - psql: possibility to specify sort for describe commands, when size is printed

2017-10-28 Thread Pavel Stehule
Hi

2017-09-22 21:31 GMT+02:00 Pavel Stehule <pavel.steh...@gmail.com>:

>
>
> 2017-09-22 21:12 GMT+02:00 Peter Eisentraut <peter.eisentraut@2ndquadrant.
> com>:
>
>> On 9/22/17 09:16, Pavel Stehule wrote:
>> > Example: somebody set SORT_COLUMNS to schema_name value. This is
>> > nonsense for \l command
>> >
>> > Now, I am thinking so more correct and practical design is based on
>> > special mode, activated by variable
>> >
>> > PREFER_SIZE_SORT .. (off, asc, desc)
>> >
>> > This has sense for wide group of commands that can show size. And when
>> > size is not visible, then this option is not active.
>>
>> Maybe this shouldn't be a variable at all.  It's not like you'll set
>> this as a global preference.  You probably want it for one command only.
>>  So a per-command option might make more sense.
>>
>
> Sure, I cannot know what users will do. But when I need to see the size
> of objects, I prefer sorting by size descending every time. If I need to
> find some object, I can search in the pager. So in my case, this setting
> will be in psqlrc. In GoodData we used our own customization for years -
> the order by size was hardcoded and nobody ever reported an issue.
>
> Alexander proposed a per-command option, but the current syntax of psql
> commands doesn't allow simple parametrization. To be user friendly it
> should be short. From the implementation perspective, it should be simple
> to parse. It should be intuitive too - too many symbols together is not
> a good idea.
>
> Maybe some prefix design - but that is not a design for common people
> (although such people don't usually use psql)
>
> '\sort size \dt ?
>
> \dt:sort_by_size
> \dt+:sort_by_size ?
>
> I don't see any good design in this direction
>
>
I thought about Alexander's proposal, and I think it is probably best if we
respect the existing psql design. I implemented two command suffixes
(supported only where they make sense): "s" for sorted by size and "d" for
descending.

So the list of tables can be sorted with the commands:

\dt+sd (the order of the suffixes is not strict), so the command
\dtsd+ works too (likewise \disd+ or \di+sd)

These two characters are acceptable. The same principle is used for the \l
command:

\lsd+ or \l+sd

What do you think about it?
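A rough sketch of this suffix handling in Python (the real implementation is
C in psql's command.c; the function name and error handling here are mine):

```python
def parse_suffix(cmd, base):
    """Return (verbose, sort_by_size, descending) flags found after
    `base` in a backslash command such as "dt+sd" or "lsd+".
    The flags may appear in any order, but at most once each."""
    flags = {'+': False, 's': False, 'd': False}
    for ch in cmd[len(base):]:
        if ch not in flags or flags[ch]:
            raise ValueError(f"unrecognized or repeated option {ch!r} in {cmd!r}")
        flags[ch] = True
    return flags['+'], flags['s'], flags['d']

# \dt+sd and \dtsd+ are equivalent, as are \lsd+ and \l+sd
print(parse_suffix("dt+sd", "dt"))  # (True, True, True)
print(parse_suffix("lsd+", "l"))    # (True, True, True)
```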

Regards

Pavel



> Regards
>
> Pavel
>
>
>
>
>
>
>
>>
>> --
>> Peter Eisentraut  http://www.2ndQuadrant.com/
>> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>>
>
>
diff --git a/src/bin/psql/command.c b/src/bin/psql/command.c
index 041b5e0c87..548b0d8d41 100644
--- a/src/bin/psql/command.c
+++ b/src/bin/psql/command.c
@@ -349,8 +349,9 @@ exec_command(const char *cmd,
 		status = exec_command_include(scan_state, active_branch, cmd);
 	else if (strcmp(cmd, "if") == 0)
 		status = exec_command_if(scan_state, cstack, query_buf);
-	else if (strcmp(cmd, "l") == 0 || strcmp(cmd, "list") == 0 ||
-			 strcmp(cmd, "l+") == 0 || strcmp(cmd, "list+") == 0)
+	else if (strcmp(cmd, "list") == 0 || strcmp(cmd, "list+") == 0 ||
+			 strcmp(cmd, "l") == 0 || strncmp(cmd, "l+", 2) == 0 ||
+			 strncmp(cmd, "ls", 2) == 0)
 		status = exec_command_list(scan_state, active_branch, cmd);
 	else if (strncmp(cmd, "lo_", 3) == 0)
 		status = exec_command_lo(scan_state, active_branch, cmd);
@@ -702,7 +703,9 @@ exec_command_d(PsqlScanState scan_state, bool active_branch, const char *cmd)
 	{
 		char	   *pattern;
 		bool		show_verbose,
-	show_system;
+	show_system,
+	sort_size,
+	sort_desc;
 
 		/* We don't do SQLID reduction on the pattern yet */
 		pattern = psql_scan_slash_option(scan_state,
@@ -711,6 +714,9 @@ exec_command_d(PsqlScanState scan_state, bool active_branch, const char *cmd)
 		show_verbose = strchr(cmd, '+') ? true : false;
 		show_system = strchr(cmd, 'S') ? true : false;
 
+		sort_size = false;
+		sort_desc = false;
+
 		switch (cmd[1])
 		{
 			case '\0':
@@ -720,7 +726,8 @@ exec_command_d(PsqlScanState scan_state, bool active_branch, const char *cmd)
 	success = describeTableDetails(pattern, show_verbose, show_system);
 else
 	/* standard listing of interesting things */
-	success = listTables("tvmsE", NULL, show_verbose, show_system);
+	success = listTables("tvmsE", NULL, show_verbose, show_system,
+		 false, false);
 break;
 			case 'A':
 success = describeAccessMethods(pattern, show_verbose);
@@ -789,12 +796,19 @@ exec_command_d(PsqlScanState scan_state, bool active_branch, const char *cmd)
 success = describeTypes(pattern, show_verbose, show_system);
 break;
 			case 't':
-			cas

Re: [HACKERS] proposal: schema variables

2017-10-27 Thread Pavel Stehule
2017-10-27 15:38 GMT+02:00 Gilles Darold <gilles.dar...@dalibo.com>:

> Le 26/10/2017 à 09:21, Pavel Stehule a écrit :
> > Hi,
> >
> > I propose a  new database object - a variable. The variable is
> > persistent object, that holds unshared session based not transactional
> > in memory value of any type. Like variables in any other languages.
> > The persistence is required for possibility to do static checks, but
> > can be limited to session - the variables can be temporal.
> >
> > My proposal is related to session variables from Sybase, MSSQL or
> > MySQL (based on prefix usage @ or @@), or package variables from
> > Oracle (access is controlled by scope), or schema variables from DB2.
> > Any design is coming from different sources, traditions and has some
> > advantages or disadvantages. The base of my proposal is usage schema
> > variables as session variables for stored procedures. It should to
> > help to people who try to port complex projects to PostgreSQL from
> > other databases.
> >
> > The Sybase  (T-SQL) design is good for interactive work, but it is
> > weak for usage in stored procedures - the static check is not
> > possible. Is not possible to set some access rights on variables.
> >
> > The ADA design (used on Oracle) based on scope is great, but our
> > environment is not nested. And we should to support other PL than
> > PLpgSQL more strongly.
> >
> > There is not too much other possibilities - the variable that should
> > be accessed from different PL, different procedures (in time) should
> > to live somewhere over PL, and there is the schema only.
> >
> > The variable can be created by CREATE statement:
> >
> > CREATE VARIABLE public.myvar AS integer;
> > CREATE VARIABLE myschema.myvar AS mytype;
> >
> > CREATE [TEMP] VARIABLE [IF NOT EXISTS] name AS type
> >   [ DEFAULT expression ] [[NOT] NULL]
> >   [ ON TRANSACTION END { RESET | DROP } ]
> >   [ { VOLATILE | STABLE } ];
> >
> > It is dropped by command DROP VARIABLE  [ IF EXISTS] varname.
> >
> > The access rights is controlled by usual access rights - by commands
> > GRANT/REVOKE. The possible rights are: READ, WRITE
> >
> > The variables can be modified by SQL command SET (this is taken from
> > standard, and it natural)
> >
> > SET varname = expression;
> >
> > Unfortunately we use the SET command for different purpose. But I am
> > thinking so we can solve it with few tricks. The first is moving our
> > GUC to pg_catalog schema. We can control the strictness of SET
> > command. In one variant, we can detect custom GUC and allow it, in
> > another we can disallow a custom GUC and allow only schema variables.
> > A new command LET can be alternative.
> >
> > The variables should be used in queries implicitly (without JOIN)
> >
> > SELECT varname;
> >
> > The SEARCH_PATH is used, when varname is located. The variables can be
> > used everywhere where query parameters are allowed.
> >
> > I hope so this proposal is good enough and simple.
> >
> > Comments, notes?
> >
> > regards
> >
> > Pavel
> >
> >
>
> Great feature that will help for migration. How will you handle CONSTANT
> declaration? With Oracle it is possible to declare a constant as follow:
>
>
>   varname CONSTANT INTEGER:= 500;
>
>
> for a variable that can't be changed. Do you plan to add a CONSTANT or
> READONLY keyword or do you want use GRANT on the object to deal with
> this case?
>

PL/pgSQL declarations support CONSTANT.

I forgot that. Thank you.

Pavel



>
> Regards
>
> --
> Gilles Darold
> Consultant PostgreSQL
> http://dalibo.com - http://dalibo.org
>
>
>
>
> --
> Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-hackers
>


Re: [HACKERS] inconsistency in process names - bgworker: logical replication launcher

2017-10-27 Thread Pavel Stehule
2017-10-27 13:03 GMT+02:00 Peter Eisentraut <
peter.eisentr...@2ndquadrant.com>:

> On 10/27/17 04:06, Pavel Stehule wrote:
> > Why buildin process has prefix bgworker?
>
> Implementation detail.  This has been changed in master already.
>

ok

Thank you

Pavel

>
> --
> Peter Eisentraut  http://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>


[HACKERS] inconsistency in process names - bgworker: logical replication launcher

2017-10-27 Thread Pavel Stehule
Hi

Why buildin process has prefix bgworker?

 1907 ?  Ss  13:00 postgres: ides ides_immaj_prac 192.168.1.50(3524) idle
 1941 ?  Ss   0:05 postgres: ides ides_immaj_prac 192.168.1.50(3527) idle
 3706 ?  Ss   0:00 postgres: ides ides_immaj_prac 192.168.1.50(4012) idle
11924 pts/0  S+  0:00 grep postgres
18710 ?  Ss   0:01 postgres: logger process
18712 ?  Ss   0:06 postgres: checkpointer process
18713 ?  Ss   0:02 postgres: writer process
18714 ?  Ss   0:05 postgres: wal writer process
18715 ?  Ss   0:04 postgres: autovacuum launcher process
18716 ?  Ss   0:39 postgres: stats collector process
18717 ?  Ss   0:00 postgres: bgworker: logical replication launcher
32202 ?  Ss   0:00 postgres: ides postgres 192.168.1.50(3109) idle
32226 ?  Ss   0:00 postgres: ides ides_immaj_prac 192.168.1.50(3114) idle
32274 ?  Ss   0:00 postgres: ides ides_jmmaj_akt 192.168.1.50(3124) idle

regards

Pavel


Re: [HACKERS] proposal: schema variables

2017-10-27 Thread Pavel Stehule
2017-10-27 7:47 GMT+02:00 Tsunakawa, Takayuki <
tsunakawa.ta...@jp.fujitsu.com>:

> From: pgsql-hackers-ow...@postgresql.org
> > [mailto:pgsql-hackers-ow...@postgresql.org] On Behalf Of Pavel Stehule
> > I propose a  new database object - a variable. The variable is persistent
> > object, that holds unshared session based not transactional in memory
> value
> > of any type. Like variables in any other languages. The persistence is
> > required for possibility to do static checks, but can be limited to
> session
> > - the variables can be temporal.
> >
> >
> > My proposal is related to session variables from Sybase, MSSQL or MySQL
> > (based on prefix usage @ or @@), or package variables from Oracle (access
> > is controlled by scope), or schema variables from DB2. Any design is
> coming
> > from different sources, traditions and has some advantages or
> disadvantages.
> > The base of my proposal is usage schema variables as session variables
> for
> > stored procedures. It should to help to people who try to port complex
> > projects to PostgreSQL from other databases.
>
> Very interesting.  I hope I could join the review and testing.
>

You are welcome. I wrote a prototype last year based on wrapper functions.
But the integration must be much closer to SQL for this feature to have a
clear benefit, so there is a lot of work. I hope to have a prototype after
this winter - it is my plan for the winter.


>
> How do you think this would contribute to easing the port of Oracle PL/SQL
> procedures?  Would the combination of orafce and this feature promote
> auto-translation of PL/SQL procedures?  I'm curious what will be the major
> road blocks after adding the schema variable.
>

It depends on the creativity of the PL/SQL developers. Usually about 80% of
an application can be migrated with current GUCs - ora2pg does some of this
work. But a GUC is a little bit slower (not too important), and there is no
simple way to secure it.

So working with variables will be similar to GUCs, but significantly more
natural (no need to build wrapper functions). It should be much better when
the value is of some composite type. Migrations will still need some
intelligence, but less work, and the code will be more readable and cleaner.
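For comparison, the GUC-based emulation used in migrations today looks
roughly like this (set_config and current_setting are existing PostgreSQL
functions; "myapp.counter" is a made-up custom setting):

```sql
-- a custom GUC acts as a session variable, but it is untyped text
-- and cannot be secured with GRANT/REVOKE
SELECT set_config('myapp.counter', '10', false);   -- false = session scope
SELECT current_setting('myapp.counter')::integer;  -- cast back on every read
```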

I have already talked about "schema-pinned" functions (schema
private/public objects), but I haven't thought about it more deeply. There
could be special access rights to schema variables, and the pinned schema
could be preferred over search_path. With this feature a schema would
behave very much like an Oracle module. In other words, we could implement
scope-based access rights on top of schemas. But that is a far horizon.
What is important is that this proposal doesn't block any future
enhancement in this direction and is consistent with the current state. In
the future you could work with schema-private functions, tables, variables
and sequences. So variables are nothing special.

Regards

Pavel


Regards
> Takayuki Tsunakawa
>
>
>


Re: [HACKERS] proposal: schema variables

2017-10-26 Thread Pavel Stehule
Hi

2017-10-27 0:07 GMT+02:00 Nico Williams <n...@cryptonector.com>:

> On Thu, Oct 26, 2017 at 09:21:24AM +0200, Pavel Stehule wrote:
> > Comments, notes?
>
> I like it.
>
> I would further like to move all of postgresql.conf into the database,
> as much as possible, as well as pg_ident.conf and pg_hba.conf.
>
> Variables like current_user have a sort of nesting context
> functionality: calling a SECURITY DEFINER function "pushes" a new value
> onto current_user, then when the function returns the new value of
> current_user is "popped" and the previous value restored.
>

My proposal doesn't expect nesting, because there is only one scope -
schema/session - but I don't think nesting is necessary.

current_user is a function - it is based on parser magic in Postgres. The
Oracle original uses a feature of the Ada language: when a function has no
parameters, the parentheses are optional. So current_user and current_time
are the functions current_user() and current_time().


> It might be nice to be able to generalize this.
>
> Questions that then arise:
>
>  - can one see up the stack?
>  - are there permissions issues with seeing up the stack?
>

These variables are pinned to a schema, so there is no relation to a stack.
They are like global variables.

Theoretically we could introduce "functional" variables, where the value is
based on immediate evaluation of an expression. That would be very similar
to the current current_user.

>
>
> I recently posted proposing a feature such that SECURITY DEFINER
> functions could observe the _caller_'s current_user.
>

Your use case is a good example - that proposed feature doesn't depend on
the call stack but on the security context (the security-context stack),
which is a superset of the call stack.

Regards

Pavel



> Nico
> --
>


[HACKERS] proposal: schema variables

2017-10-26 Thread Pavel Stehule
Hi,

I propose a new database object - a variable. A variable is a persistent
object that holds an unshared, session-scoped, non-transactional, in-memory
value of any type - like variables in other languages. The persistence is
required to make static checks possible, but it can be limited to the
session - variables can be temporary.

My proposal is related to session variables in Sybase, MSSQL or MySQL
(based on the @ or @@ prefix), package variables in Oracle (access
controlled by scope), and schema variables in DB2. Each design comes from
different sources and traditions and has its own advantages and
disadvantages. The core of my proposal is using schema variables as
session variables for stored procedures. It should help people who try to
port complex projects to PostgreSQL from other databases.

The Sybase (T-SQL) design is good for interactive work, but it is weak for
use in stored procedures - static checks are not possible, and access
rights cannot be set on variables.

The Ada design (used by Oracle), based on scope, is great, but our
environment is not nested. And we should support PLs other than PL/pgSQL
more strongly.

There are not many other possibilities - a variable that should be
accessible from different PLs and different procedures (over time) has to
live somewhere above the PL level, and only the schema is there.

A variable can be created by a CREATE statement:

CREATE VARIABLE public.myvar AS integer;
CREATE VARIABLE myschema.myvar AS mytype;

CREATE [TEMP] VARIABLE [IF NOT EXISTS] name AS type
  [ DEFAULT expression ] [[NOT] NULL]
  [ ON TRANSACTION END { RESET | DROP } ]
  [ { VOLATILE | STABLE } ];

It is dropped by the command DROP VARIABLE [IF EXISTS] varname.

Access is controlled by the usual access rights - by the commands
GRANT/REVOKE. The possible rights are READ and WRITE.

Variables can be modified by the SQL command SET (this is taken from the
standard, and it is natural):

SET varname = expression;

Unfortunately we already use the SET command for a different purpose. But
I think we can solve that with a few tricks. The first is moving our GUCs
to the pg_catalog schema. We can control the strictness of the SET
command: in one variant we can detect a custom GUC and allow it, in
another we can disallow custom GUCs and allow only schema variables. A new
command, LET, could be an alternative.

Variables should be usable in queries implicitly (without a JOIN):

SELECT varname;

The search_path is used when varname is resolved. Variables can be used
everywhere query parameters are allowed.

I hope this proposal is good enough and simple.

Comments, notes?
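To make the proposal concrete, a usage sketch (hypothetical throughout -
none of this is implemented, and the GRANT ... ON VARIABLE spelling is my
guess at how the READ/WRITE rights would look):

```sql
-- create a variable and restrict who may change it
CREATE VARIABLE myschema.loglevel AS integer DEFAULT 1 NOT NULL;
GRANT READ ON VARIABLE myschema.loglevel TO PUBLIC;
GRANT WRITE ON VARIABLE myschema.loglevel TO admin_role;

-- set and read it like a session-local value
SET myschema.loglevel = 3;
SELECT myschema.loglevel;

-- usable wherever a query parameter is allowed
SELECT * FROM events WHERE severity >= myschema.loglevel;
```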

regards

Pavel


Re: [HACKERS] pgbench - allow to store select results into variables

2017-10-25 Thread Pavel Stehule
Hi

2017-10-20 18:37 GMT+02:00 Fabien COELHO :

>
> Here is a v12.
>>
>
> Here is a v13, which is just a rebase after the documentation xml-ization.
>

I am looking to this patch.

I am not sure "cset" is the best name - maybe "eset", as in embedded set?

The code of append_sql_command is not very readable and is not sufficiently
commented.

I don't understand why you pass the parameter compounds to
append_sql_command when this value is already stored in
my_command->compound by create_sql_command.

Or maybe some unfortunate field or variable names were chosen.

Regards

Pavel

>
> --
> Fabien.


Re: [HACKERS] pgbench more operators & functions

2017-10-25 Thread Pavel Stehule
Hi

2017-10-20 18:36 GMT+02:00 Fabien COELHO :

>
> Here is a v13. No code changes, but TAP tests added to maintain pgbench
 coverage to green.

>>>
> Here is a v14, which is just a rebase after the documentation xml-ization.
>

all tests passed
no problems with doc building


> --
> Fabien.


[HACKERS] new field for structured exception - query

2017-10-22 Thread Pavel Stehule
Hi

Currently we don't allow access to the internalquery field from PL/pgSQL
via GET STACKED DIAGNOSTICS.

Do you think it makes sense to allow access to this field? The patch would
be very small.

Regards

Pavel


Re: [HACKERS] stalled post to mailing list - wrong filter?

2017-10-22 Thread Pavel Stehule
2017-10-22 9:08 GMT+02:00 Magnus Hagander <mag...@hagander.net>:

>
>
> On Sun, Oct 22, 2017 at 9:02 AM, Pavel Stehule <pavel.steh...@gmail.com>
> wrote:
>
>> Hi
>>
>> I sent correct mail, that requires the approval - maybe bad filter?
>>
>
> There are some pretty restrictive filters in place on the mj2 lists in
> order to deal with cases where people send admin requests. This happens now
> and then, and will get released from the moderation queue quickly I'm sure
> :)
>
> Long term we'll be getting rid of those filters, but they're there for a
> while longer.
>

ok

Thank you

Pavel


> --
>  Magnus Hagander
>  Me: https://www.hagander.net/ <http://www.hagander.net/>
>  Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>
>


[HACKERS] stalled post to mailing list - wrong filter?

2017-10-22 Thread Pavel Stehule
Hi

I sent a correct mail, but it requires approval - a bad filter maybe?

Your message to pgsql-hackers has been delayed, and requires the approval
of the moderators, for the following reason(s):

GLOBAL ADMIN BODY:  /^\s*get\s+\S+\s+\S+\s*$/i matched "G#E#T STACKED
DIAGNOSTICS." at line number 4.

If you do not wish the message to be posted, or have other concerns,
please send a message to the list owners at the following address:
  pgsql-hackers-ow...@postgresql.org

Regards

Pavel


Re: [HACKERS] Interest in a SECURITY DEFINER function current_user stack access mechanism?

2017-10-18 Thread Pavel Stehule
2017-10-18 22:01 GMT+02:00 Nico Williams :

> It'd be nice if SECURITY DEFINER functions could see what user invoked
> them, but current_user is the DEFINER user, naturally, since that's how
> this is done in fmgr_security_definer().
>
> I was thinking that fmgr_security_definer() could keep a global pointer
> to a linked list (with automatic nodes) of the save_userid values.  Then
> we could have a SQL function for accessing these, something like
> pg_current_user(level int) returning text, where level 0 is
> current_user, level 1 is "the previous current_user in the stack", and
> so on, returning null when level is beyond the top-level.
>
> This seems like a simple, small, easy patch, and since I [think I] need
> it I suspect others probably do as well.
>
> Thoughts?
>

There is already the session_user function.

regards

Pavel


> Nico
> --
>
>
> --
> Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-hackers
>


Re: [HACKERS] possible encoding issues with libxml2 functions

2017-10-16 Thread Pavel Stehule
2017-10-17 1:57 GMT+02:00 Noah Misch <n...@leadboat.com>:

> On Sun, Aug 20, 2017 at 10:37:10PM +0200, Pavel Stehule wrote:
> > > We have xpath-bugfix.patch and xpath-parsing-error-fix.patch.  Both are
> > > equivalent under supported use cases (xpath in UTF8 databases).  Among
> > > non-supported use cases, they each make different things better and
> > > different
> > > things worse.  We should prefer to back-patch the version harming fewer
> > > applications.  I expect non-ASCII data is more common than xml
> declarations
> > > with "encoding" attribute, so xpath-bugfix.patch will harm fewer
> > > applications.
> > >
> > > Having said that, I now see a third option.  Condition this thread's
> > > patch's
> > > effects on GetDatabaseEncoding()==PG_UTF8.  That way, we fix supported
> > > cases,
> > > and we remain bug-compatible in unsupported cases.  I think that's
> better
> > > than
> > > the other options discussed so far.  If you agree, please send a patch
> > > based
> > > on xpath-bugfix.patch with the GetDatabaseEncoding()==PG_UTF8 change
> and
> > > the
> > > two edits I described earlier.
> > >
> >
> > I am sorry -  too long day today. Do you think some like
> >
> > diff --git a/src/backend/utils/adt/xml.c b/src/backend/utils/adt/xml.c
> > index 24229c2dff..9fd6f3509f 100644
> > --- a/src/backend/utils/adt/xml.c
> > +++ b/src/backend/utils/adt/xml.c
> > @@ -3914,7 +3914,14 @@ xpath_internal(text *xpath_expr_text, xmltype
> *data,
> > ArrayType *namespaces,
> > if (ctxt == NULL || xmlerrcxt->err_occurred)
> > xml_ereport(xmlerrcxt, ERROR, ERRCODE_OUT_OF_MEMORY,
> > "could not allocate parser context");
> > -   doc = xmlCtxtReadMemory(ctxt, (char *) string, len, NULL, NULL,
> 0);
> > +
> > +   /*
> > +* Passed XML is always in server encoding. When server encoding
> > +* is UTF8, we can pass this information to libxml2 to ignore
> > +* possible invalid encoding declaration in XML document.
> > +*/
> > +   doc = xmlCtxtReadMemory(ctxt, (char *) string, len, NULL,
> > +   GetDatabaseEncoding() == PG_UTF8 ? "UTF-8" : NULL, 0);
> > if (doc == NULL || xmlerrcxt->err_occurred)
> > xml_ereport(xmlerrcxt, ERROR, ERRCODE_INVALID_XML_DOCUMENT,
> > "could not parse XML document");
>
> No, that doesn't match my description above.  I don't see a way to clarify
> the
> description.  Feel free to try again.  Alternately, if you wait, I will
> eventually construct the patch I described.
>

Please write it, if you can. I am a little bit lost :)

Regards

Pavel


Re: [HACKERS] possible encoding issues with libxml2 functions

2017-10-15 Thread Pavel Stehule
2017-08-21 6:25 GMT+02:00 Pavel Stehule <pavel.steh...@gmail.com>:

>
>
>> xpath-bugfix.patch affected only xml values containing an xml declaration
>> with
>> "encoding" attribute.  In UTF8 databases, this latest proposal
>> (xpath-parsing-error-fix.patch) is equivalent to xpath-bugfix.patch.  In
>> non-UTF8 databases, xpath-parsing-error-fix.patch affects all xml values
>> containing non-ASCII data.  In a LATIN1 database, the following works
>> today
>> but breaks under your latest proposal:
>>
>>   SELECT xpath('text()', ('' || convert_from('\xc2b0', 'LATIN1') ||
>> '')::xml);
>>
>> It's acceptable to break that, since the documentation explicitly
>> disclaims
>> support for it.  xpath-bugfix.patch breaks different use cases, which are
>> likewise acceptable to break.  See my 2017-08-08 review for details.
>>
>
> The fact that this code works shows that the universe is a pretty
> dangerous place :)
>
>
Ping?

Will we continue with this topic?



>


Re: [HACKERS] proposal - Default namespaces for XPath expressions (PostgreSQL 11)

2017-10-15 Thread Pavel Stehule
2017-10-02 12:22 GMT+02:00 Kyotaro HORIGUCHI <
horiguchi.kyot...@lab.ntt.co.jp>:

> Hi, thanks for the new patch.
>
> # The patch is missing xpath_parser.h. That of the first patch was usable.
>
> At Thu, 28 Sep 2017 07:59:41 +0200, Pavel Stehule <pavel.steh...@gmail.com>
> wrote in 

Re: [HACKERS] fresh regression - regproc result contains unwanted schema

2017-10-14 Thread Pavel Stehule
2017-10-14 17:26 GMT+02:00 Tom Lane <t...@sss.pgh.pa.us>:

> Pavel Stehule <pavel.steh...@gmail.com> writes:
> > When function is overwritten, then regproc result contains schema,
> although
> > it is on search_path
>
> There's no "fresh regression" here, it's done that more or less since
> we invented schemas.  See regprocout:
>
>  * Would this proc be found (uniquely!) by regprocin? If not,
>  * qualify it.
>
> git blame dates that comment to commit 52200bef of 2002-04-25.
>
> Admittedly, qualifying the name might not be sufficient to disambiguate,
> but regprocout doesn't have any other tool in its toolbox, so it uses
> the hammer it's got.  If you're overloading functions, you really need
> to use regprocedure not regproc.
>

It is a false alarm - I am sorry, I shot myself in the foot. Thank you for
the explanation.
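For anyone hitting the same thing: casting to regprocedure keeps the
argument list, so overloaded names stay unambiguous (using the f1 functions
from the original report):

```sql
-- regprocedure output includes the argument types
SELECT 'f1(int)'::regprocedure;   -- f1(integer)
SELECT 'f1()'::regprocedure;      -- f1()
```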

Nice evening.

Pavel


> regards, tom lane
>


[HACKERS] fresh regression - regproc result contains unwanted schema

2017-10-14 Thread Pavel Stehule
Hi

when I fixed old bug of plpgsql_check I found new regression of regproc
output.

set check_function_bodies TO off;

postgres=# create or replace function f1() returns int as $$ begin end $$
language plpgsql;
CREATE FUNCTION
postgres=# select 'f1()'::regprocedure::oid::regproc;
 regproc
-
 f1
(1 row)

postgres=# create or replace function f1(int) returns int as $$ begin end
$$ language plpgsql;
CREATE FUNCTION
postgres=# select 'f1()'::regprocedure::oid::regproc;
  regproc
---
 public.f1
(1 row)

When a function is overloaded, the regproc output contains the schema, even
though it is on the search_path.

This behavior breaks regression tests (and it is not consistent).

Tested on master

Regards

Pavel


Re: [HACKERS] PoC plpgsql - possibility to force custom or generic plan

2017-10-12 Thread Pavel Stehule
2017-09-19 20:49 GMT+02:00 Merlin Moncure <mmonc...@gmail.com>:

> On Tue, Sep 19, 2017 at 1:37 PM, Robert Haas <robertmh...@gmail.com>
> wrote:
> > On Tue, Sep 19, 2017 at 12:45 PM, Pavel Stehule <pavel.steh...@gmail.com>
> wrote:
> >>> You can already set a GUC with function scope.  I'm not getting your
> >>> point.
> >>
> >> yes, it is true. But implementation of #option is limited to PLpgSQL -
> so
> >> there is not any too much questions - GUC is global - there is lot of
> >> points:
> >>
> >> * what is correct impact on PREPARE
> >> * what is correct impact on EXECUTE
> >> * what should be done if this GUC is changed ..
> >
> > For better or for worse, as a project we've settled on GUCs as a way
> > to control behavior.  I think it makes more sense to try to apply that
> > option to new behaviors we want to control than to invent some new
> > system.
>
> This seems very sensible.
>
> We also have infrastructure at the SQL level (SET) to manage the GUC.
> Tom upthread (for pretty good reasons) extending SET to pl/pgsql
> specific scoping but TBH I'm struggling as to why we need to implement
> new syntax for this; the only thing missing is being able to scope SET
> statements to a code block FWICT.
>
>
Here is a GUC-based patch for plancache control. It looks like this code is
working.

It is hard to create regression tests for it. Any ideas?

Regards

Pavel



> merlin
>
diff --git a/src/backend/utils/cache/plancache.c b/src/backend/utils/cache/plancache.c
index ad8a82f1e3..cc99cf6dcc 100644
--- a/src/backend/utils/cache/plancache.c
+++ b/src/backend/utils/cache/plancache.c
@@ -106,6 +106,8 @@ static void PlanCacheRelCallback(Datum arg, Oid relid);
 static void PlanCacheFuncCallback(Datum arg, int cacheid, uint32 hashvalue);
 static void PlanCacheSysCallback(Datum arg, int cacheid, uint32 hashvalue);
 
+/* GUC parameter */
+int	plancache_mode;
 
 /*
  * InitPlanCache: initialize module during InitPostgres.
@@ -1031,6 +1033,12 @@ choose_custom_plan(CachedPlanSource *plansource, ParamListInfo boundParams)
 	if (IsTransactionStmtPlan(plansource))
 		return false;
 
+	/* See if settings wants to force the decision */
+	if (plancache_mode & PLANCACHE_FORCE_GENERIC_PLAN)
+		return false;
+	if (plancache_mode & PLANCACHE_FORCE_CUSTOM_PLAN)
+		return true;
+
 	/* See if caller wants to force the decision */
 	if (plansource->cursor_options & CURSOR_OPT_GENERIC_PLAN)
 		return false;
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index ae22185fbd..4ce275e39d 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -403,6 +403,13 @@ static const struct config_enum_entry force_parallel_mode_options[] = {
 	{NULL, 0, false}
 };
 
+static const struct config_enum_entry plancache_mode_options[] = {
+	{"default", PLANCACHE_DEFAULT, false},
+	{"force_generic_plan", PLANCACHE_FORCE_GENERIC_PLAN, false},
+	{"force_custom_plan", PLANCACHE_FORCE_CUSTOM_PLAN, false},
+	{NULL, 0, false}
+};
+
 /*
  * password_encryption used to be a boolean, so accept all the likely
  * variants of "on", too. "off" used to store passwords in plaintext,
@@ -3916,6 +3923,16 @@ static struct config_enum ConfigureNamesEnum[] =
 		NULL, NULL, NULL
 	},
 
+	{
+		{"plancache_mode", PGC_USERSET, QUERY_TUNING_OTHER,
+			gettext_noop("Forces use of custom or generic plans."),
+			gettext_noop("It can control query plan cache.")
+		},
+		&plancache_mode,
+		PLANCACHE_DEFAULT, plancache_mode_options,
+		NULL, NULL, NULL
+	},
+
 	/* End-of-list marker */
 	{
 		{NULL, 0, 0, NULL, NULL}, NULL, 0, NULL, NULL, NULL, NULL
diff --git a/src/include/utils/plancache.h b/src/include/utils/plancache.h
index 87fab19f3c..962895cc1a 100644
--- a/src/include/utils/plancache.h
+++ b/src/include/utils/plancache.h
@@ -143,7 +143,6 @@ typedef struct CachedPlan
 	MemoryContext context;		/* context containing this CachedPlan */
 } CachedPlan;
 
-
 extern void InitPlanCache(void);
 extern void ResetPlanCache(void);
 
@@ -182,4 +181,16 @@ extern CachedPlan *GetCachedPlan(CachedPlanSource *plansource,
 			  QueryEnvironment *queryEnv);
 extern void ReleaseCachedPlan(CachedPlan *plan, bool useResOwner);
 
+/* possible values for plancache_mode */
+typedef enum
+{
+	PLANCACHE_DEFAULT,
+	PLANCACHE_FORCE_GENERIC_PLAN,
+	PLANCACHE_FORCE_CUSTOM_PLAN
+}			PlanCacheMode;
+
+
+/* GUC parameter */
+extern int plancache_mode;
+
 #endif			/* PLANCACHE_H */
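As an aside: since PlanCacheMode is a plain enum (values 0, 1, 2) rather than a set of bit flags, an equality test expresses the override more safely than the `&` tests in the hunk above. The decision ordering can be sketched standalone like this - the CURSOR_OPT_* values and function name are invented for illustration, this is not the actual PostgreSQL code:

```c
#include <stdbool.h>

/* Standalone sketch of the override ordering proposed for
 * choose_custom_plan().  PlanCacheMode mirrors the patch; the
 * CURSOR_OPT_* flag values here are invented for illustration. */
typedef enum
{
	PLANCACHE_DEFAULT,
	PLANCACHE_FORCE_GENERIC_PLAN,
	PLANCACHE_FORCE_CUSTOM_PLAN
} PlanCacheMode;

#define CURSOR_OPT_GENERIC_PLAN 0x0010
#define CURSOR_OPT_CUSTOM_PLAN  0x0020

/* true = build a custom plan, false = use the generic plan */
static bool
choose_custom(PlanCacheMode mode, int cursor_options, bool cost_based_choice)
{
	/* the GUC overrides everything else */
	if (mode == PLANCACHE_FORCE_GENERIC_PLAN)
		return false;
	if (mode == PLANCACHE_FORCE_CUSTOM_PLAN)
		return true;

	/* then per-statement cursor options */
	if (cursor_options & CURSOR_OPT_GENERIC_PLAN)
		return false;
	if (cursor_options & CURSOR_OPT_CUSTOM_PLAN)
		return true;

	/* otherwise fall back to the cost-based heuristics */
	return cost_based_choice;
}
```

The GUC wins over the cursor options, which in turn win over the cost-based heuristics - that is the ordering the patch establishes by placing its checks first.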

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] is possible cache tupledesc templates in execution plan? significant performance issue, maybe bug?

2017-10-08 Thread Pavel Stehule
2017-10-08 19:10 GMT+02:00 Pavel Stehule <pavel.steh...@gmail.com>:

>
>
> 2017-10-08 19:04 GMT+02:00 Pavel Stehule <pavel.steh...@gmail.com>:
>
>>
>>
>> 2017-10-08 18:59 GMT+02:00 Andres Freund <and...@anarazel.de>:
>>
>>> On 2017-10-08 18:57:28 +0200, Pavel Stehule wrote:
>>> > 2017-10-08 18:44 GMT+02:00 Andres Freund <and...@anarazel.de>:
>>> >
>>> > > Hi,
>>> > >
>>> > > On 2017-10-08 18:36:23 +0200, Pavel Stehule wrote:
>>> > > > 2. Lot of used tables are pretty wide - 60, 120, .. columns
>>> > > >
>>> > > > Now, I am doing profiling, and I see so most time is related to
>>> > > >
>>> > > > ExecTypeFromTLInternal(List *targetList, bool hasoid, bool
>>> skipjunk)
>>> > >
>>> > > Yea, that's known - I've complained about this a couple times. You
>>> could
>>> > > try whether the following master branch helps:
>>> > > https://git.postgresql.org/gitweb/?p=users/andresfreund/
>>> > > postgres.git;a=shortlog;h=refs/heads/simple_statement_perf
>>> > >
>>> > > That's just micro-optimization though, not a more fundamental
>>> > > solution. But for me it yields pretty nice speedups for cases with
>>> long
>>> > > tlists.
>>> > >
>>> > >
>>> > it is just this patch
>>> >
>>> > HeapTuple   tup;
>>> > Form_pg_type typTup;
>>> >
>>> > +   if (typid < FirstBootstrapObjectId)
>>> > +   break;
>>> > +
>>> > tup = SearchSysCache1(TYPEOID, ObjectIdGetDatum(typid));
>>> > if (!HeapTupleIsValid(tup))
>>> > elog(ERROR, "cache lookup failed for type %u", typid);
>>>
>>> No.
>>>
>>
>> please, how I can clone your repo?
>>
>>
>> I found it
>
>

With your branch the execution is about 15-20% faster - so the overhead of
executor initialization is significant.

Unfortunately, Oracle is significantly faster for this pattern.

Regards

Pavel


[HACKERS] Re: is possible cache tupledesc templates in execution plan? significant performance issue, maybe bug?

2017-10-08 Thread Pavel Stehule
The following workaround is working
>
> create view  as  select CISLOEXEKUCE, MT.ID_NAJDATSPLT
>   from najzalobpr MT, najvzallok A1,
> NAJZALOBST A2, NAJZALOBCE A3 where
> MT.ID_NAJVZALLOK= A1.ID_NAJVZALLOK AND
> A1.ID_NAJZALOBST=A2.ID_NAJZALOBST AND
> A2.ID_NAJZALOBCE= A3.ID_NAJZALOBCE;
>
>
> and function should be changed to
>
> BEGIN
>BEGIN
> select CISLOEXEKUCE INTO STRICT mADRA
>   from 
>   WHERE id_najdatsplt = mID_najdatsplt LIMIT 1;
> EXCEPTION
>   WHEN OTHERS THEN
> mADRA := NULL;
> END;
>
>
>
>   Result:=mADRA;
>   return(Result);
> end;
>
> So this issue is really related to tupleDesc management
>

I found a bug in this workaround. It doesn't work.



>
>
>> Pavel
>>
>>
>>
>


Re: [HACKERS] is possible cache tupledesc templates in execution plan? significant performance issue, maybe bug?

2017-10-08 Thread Pavel Stehule
2017-10-08 19:04 GMT+02:00 Pavel Stehule <pavel.steh...@gmail.com>:

>
>
> 2017-10-08 18:59 GMT+02:00 Andres Freund <and...@anarazel.de>:
>
>> On 2017-10-08 18:57:28 +0200, Pavel Stehule wrote:
>> > 2017-10-08 18:44 GMT+02:00 Andres Freund <and...@anarazel.de>:
>> >
>> > > Hi,
>> > >
>> > > On 2017-10-08 18:36:23 +0200, Pavel Stehule wrote:
>> > > > 2. Lot of used tables are pretty wide - 60, 120, .. columns
>> > > >
>> > > > Now, I am doing profiling, and I see so most time is related to
>> > > >
>> > > > ExecTypeFromTLInternal(List *targetList, bool hasoid, bool skipjunk)
>> > >
>> > > Yea, that's known - I've complained about this a couple times. You
>> could
>> > > try whether the following master branch helps:
>> > > https://git.postgresql.org/gitweb/?p=users/andresfreund/
>> > > postgres.git;a=shortlog;h=refs/heads/simple_statement_perf
>> > >
>> > > That's just micro-optimization though, not a more fundamental
>> > > solution. But for me it yields pretty nice speedups for cases with
>> long
>> > > tlists.
>> > >
>> > >
>> > it is just this patch
>> >
>> > HeapTuple   tup;
>> > Form_pg_type typTup;
>> >
>> > +   if (typid < FirstBootstrapObjectId)
>> > +   break;
>> > +
>> > tup = SearchSysCache1(TYPEOID, ObjectIdGetDatum(typid));
>> > if (!HeapTupleIsValid(tup))
>> > elog(ERROR, "cache lookup failed for type %u", typid);
>>
>> No.
>>
>
> please, how I can clone your repo?
>
>
> I found it


Re: [HACKERS] is possible cache tupledesc templates in execution plan? significant performance issue, maybe bug?

2017-10-08 Thread Pavel Stehule
2017-10-08 18:59 GMT+02:00 Andres Freund <and...@anarazel.de>:

> On 2017-10-08 18:57:28 +0200, Pavel Stehule wrote:
> > 2017-10-08 18:44 GMT+02:00 Andres Freund <and...@anarazel.de>:
> >
> > > Hi,
> > >
> > > On 2017-10-08 18:36:23 +0200, Pavel Stehule wrote:
> > > > 2. Lot of used tables are pretty wide - 60, 120, .. columns
> > > >
> > > > Now, I am doing profiling, and I see so most time is related to
> > > >
> > > > ExecTypeFromTLInternal(List *targetList, bool hasoid, bool skipjunk)
> > >
> > > Yea, that's known - I've complained about this a couple times. You
> could
> > > try whether the following master branch helps:
> > > https://git.postgresql.org/gitweb/?p=users/andresfreund/
> > > postgres.git;a=shortlog;h=refs/heads/simple_statement_perf
> > >
> > > That's just micro-optimization though, not a more fundamental
> > > solution. But for me it yields pretty nice speedups for cases with long
> > > tlists.
> > >
> > >
> > it is just this patch
> >
> > HeapTuple   tup;
> > Form_pg_type typTup;
> >
> > +   if (typid < FirstBootstrapObjectId)
> > +   break;
> > +
> > tup = SearchSysCache1(TYPEOID, ObjectIdGetDatum(typid));
> > if (!HeapTupleIsValid(tup))
> > elog(ERROR, "cache lookup failed for type %u", typid);
>
> No.
>

Please, how can I clone your repo?


Re: [HACKERS] is possible cache tupledesc templates in execution plan? significant performance issue, maybe bug?

2017-10-08 Thread Pavel Stehule
2017-10-08 18:44 GMT+02:00 Andres Freund <and...@anarazel.de>:

> Hi,
>
> On 2017-10-08 18:36:23 +0200, Pavel Stehule wrote:
> > 2. Lot of used tables are pretty wide - 60, 120, .. columns
> >
> > Now, I am doing profiling, and I see so most time is related to
> >
> > ExecTypeFromTLInternal(List *targetList, bool hasoid, bool skipjunk)
>
> Yea, that's known - I've complained about this a couple times. You could
> try whether the following master branch helps:
> https://git.postgresql.org/gitweb/?p=users/andresfreund/
> postgres.git;a=shortlog;h=refs/heads/simple_statement_perf
>
> That's just micro-optimization though, not a more fundamental
> solution. But for me it yields pretty nice speedups for cases with long
> tlists.
>
>
It is just this patch:

HeapTuple   tup;
Form_pg_type typTup;

+   if (typid < FirstBootstrapObjectId)
+   break;
+
tup = SearchSysCache1(TYPEOID, ObjectIdGetDatum(typid));
if (!HeapTupleIsValid(tup))
elog(ERROR, "cache lookup failed for type %u", typid);

?



>
> > This function is executed in exec init time - in this case pretty often.
> > Although there are used few columns from the table, the target list is
> > build for columns (maybe it is bug)
>
> It's probably just the physical tlist "optimization".
>
>
> > 2. If is not possible to reduce the number of fields of target list, is
> > possible to store tupledesc template to plan?
>
> We should do that, but it's not a small change.
>
> Greetings,
>
> Andres Freund
>


[HACKERS] Re: is possible cache tupledesc templates in execution plan? significant performance issue, maybe bug?

2017-10-08 Thread Pavel Stehule
2017-10-08 18:36 GMT+02:00 Pavel Stehule <pavel.steh...@gmail.com>:

> Hi
>
> I am looking why some queries are significantly slower on PostgreSQL than
> on Oracle, although there is pretty simple plan. The queries are usually
> 10x times slower on Postgres than on Oracle.
>
> I migrate old Oracle application to Postgres. There are important two
> factors:
>
> 1. Often usage of "view" functions (I don't know better terminology) like:
>
> CREATE OR REPLACE FUNCTION foo(_id integer)
> RETURNS text AS $$
> BEGIN
>   RETURN (SELECT xx FROM a JOIN b ON a.id = b.id WHERE b.y = _id)
> END;
> $$ LANGUAGE plpgsql;
>
> These functions are used in views
>
> CREATE VIEW xx AS
> SELECT a, b, c, foo(id) as d, ...
>
> And sometimes are used in filters
>
> SELECT * FROM xx WHERE d IN NOT NULL;
>
> 2. Lot of used tables are pretty wide - 60, 120, .. columns
>
> Now, I am doing profiling, and I see so most time is related to
>
> ExecTypeFromTLInternal(List *targetList, bool hasoid, bool skipjunk)
>
> This function is executed in exec init time - in this case pretty often.
> Although there are used few columns from the table, the target list is
> build for columns (maybe it is bug)
>
> I have a function
>
> CREATE OR REPLACE FUNCTION ides_funcs.najdatsplt_cislo_exekuce(mid_najdatsplt
> bigint)
>  RETURNS character varying
>  LANGUAGE plpgsql
>  STABLE SECURITY DEFINER COST 1000
> AS $function$
> DECLARE
>
>  Result varchar(200);
>
>   --mZALOBCISLO NAJZALOBCE.ZALOBCISLO%TYPE;
>   mAdra varchar(200);
>
>
> BEGIN
>BEGIN
> -- there are only tables
>  select CISLOEXEKUCE INTO STRICT mADRA
>   from najzalobpr MT, najvzallok A1,
> NAJZALOBST A2, NAJZALOBCE A3 where
> MT.ID_NAJVZALLOK= A1.ID_NAJVZALLOK AND
> A1.ID_NAJZALOBST=A2.ID_NAJZALOBST AND
> A2.ID_NAJZALOBCE= A3.ID_NAJZALOBCE AND
> MT.ID_NAJDATSPLT = mID_NAJDATSPLT  LIMIT 1;
> EXCEPTION
>   WHEN OTHERS THEN
> mADRA := NULL;
> END;
>
>
>
>   Result:=mADRA;
>   return(Result);
> end;
> $function$
>
> where is necessary only few columns:
>
> but it processing target list of length
>
> NOTICE:  plpgsql_exec_function: ides_funcs.najdatsplt_cislo_
> exekuce(bigint)
> NOTICE:  >>len: 38, hasoid: 0, skipjunk: 0
> NOTICE:  >>len: 21, hasoid: 0, skipjunk: 0
> NOTICE:  >>len: 1, hasoid: 0, skipjunk: 0
> NOTICE:  >>len: 65, hasoid: 0, skipjunk: 0
> NOTICE:  >>len: 1, hasoid: 0, skipjunk: 0
> NOTICE:  >>len: 93, hasoid: 0, skipjunk: 0
> NOTICE:  >>len: 1, hasoid: 0, skipjunk: 0
> NOTICE:  >>len: 1, hasoid: 0, skipjunk: 0
>
> len is length of targetlist
>
> The numbers are related to number of columns of tables najzalobpr,
> najvzallok, NAJZALOBST, ..
>
> Because these tables are wide, then the queries are too slow
>
> So, my questions?
>
> 1. Why target list is too long in this case. It should be reduced to few
> fields?
>
> 2. If is not possible to reduce the number of fields of target list, is
> possible to store tupledesc template to plan?
>
> Without this issue, the Postgres has same speed or is faster than Ora.
>
> I can send a schema by some private channel.
>
> Regards
>
>
The following workaround works:

create view  as  select CISLOEXEKUCE, MT.ID_NAJDATSPLT
  from najzalobpr MT, najvzallok A1,
NAJZALOBST A2, NAJZALOBCE A3 where
MT.ID_NAJVZALLOK= A1.ID_NAJVZALLOK AND
A1.ID_NAJZALOBST=A2.ID_NAJZALOBST AND
A2.ID_NAJZALOBCE= A3.ID_NAJZALOBCE;


and function should be changed to

BEGIN
   BEGIN
select CISLOEXEKUCE INTO STRICT mADRA
  from 
  WHERE id_najdatsplt = mID_najdatsplt LIMIT 1;
EXCEPTION
  WHEN OTHERS THEN
mADRA := NULL;
END;



  Result:=mADRA;
  return(Result);
end;

So this issue really is related to tupleDesc management.



> Pavel
>
>
>


[HACKERS] is possible cache tupledesc templates in execution plan? significant performance issue, maybe bug?

2017-10-08 Thread Pavel Stehule
Hi

I am investigating why some queries are significantly slower on PostgreSQL
than on Oracle, although the plan is pretty simple. The queries are usually
10x slower on Postgres than on Oracle.

I am migrating an old Oracle application to Postgres. There are two
important factors:

1. Frequent usage of "view" functions (I don't know a better term) like:

CREATE OR REPLACE FUNCTION foo(_id integer)
RETURNS text AS $$
BEGIN
  RETURN (SELECT xx FROM a JOIN b ON a.id = b.id WHERE b.y = _id);
END;
$$ LANGUAGE plpgsql;

These functions are used in views

CREATE VIEW xx AS
SELECT a, b, c, foo(id) as d, ...

And sometimes are used in filters

SELECT * FROM xx WHERE d IS NOT NULL;

2. Lots of the tables used are pretty wide - 60, 120, ... columns

Now I am doing profiling, and I see that most time is spent in

ExecTypeFromTLInternal(List *targetList, bool hasoid, bool skipjunk)

This function is executed at executor init time - in this case pretty often.
Although only a few columns from the table are used, the target list is
built for all columns (maybe it is a bug).

I have a function

CREATE OR REPLACE FUNCTION
ides_funcs.najdatsplt_cislo_exekuce(mid_najdatsplt bigint)
 RETURNS character varying
 LANGUAGE plpgsql
 STABLE SECURITY DEFINER COST 1000
AS $function$
DECLARE

 Result varchar(200);

  --mZALOBCISLO NAJZALOBCE.ZALOBCISLO%TYPE;
  mAdra varchar(200);


BEGIN
   BEGIN
-- there are only tables
 select CISLOEXEKUCE INTO STRICT mADRA
  from najzalobpr MT, najvzallok A1,
NAJZALOBST A2, NAJZALOBCE A3 where
MT.ID_NAJVZALLOK= A1.ID_NAJVZALLOK AND
A1.ID_NAJZALOBST=A2.ID_NAJZALOBST AND
A2.ID_NAJZALOBCE= A3.ID_NAJZALOBCE AND
MT.ID_NAJDATSPLT = mID_NAJDATSPLT  LIMIT 1;
EXCEPTION
  WHEN OTHERS THEN
mADRA := NULL;
END;



  Result:=mADRA;
  return(Result);
end;
$function$

where only a few columns are necessary,

but it processes target lists of these lengths:

NOTICE:  plpgsql_exec_function: ides_funcs.najdatsplt_cislo_exekuce(bigint)
NOTICE:  >>len: 38, hasoid: 0, skipjunk: 0
NOTICE:  >>len: 21, hasoid: 0, skipjunk: 0
NOTICE:  >>len: 1, hasoid: 0, skipjunk: 0
NOTICE:  >>len: 65, hasoid: 0, skipjunk: 0
NOTICE:  >>len: 1, hasoid: 0, skipjunk: 0
NOTICE:  >>len: 93, hasoid: 0, skipjunk: 0
NOTICE:  >>len: 1, hasoid: 0, skipjunk: 0
NOTICE:  >>len: 1, hasoid: 0, skipjunk: 0

len is the length of the target list.

The numbers correspond to the number of columns of the tables najzalobpr,
najvzallok, NAJZALOBST, ...

Because these tables are wide, the queries are too slow.

So, my questions:

1. Why is the target list so long in this case? Shouldn't it be reduced to
a few fields?

2. If it is not possible to reduce the number of fields in the target list,
is it possible to store a tupledesc template in the plan?

Without this issue, Postgres has the same speed as or is faster than Oracle.

I can send a schema by some private channel.

Regards

Pavel
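Question 2 above - storing a tupledesc template in the plan - amounts to building the descriptor once and reusing it across executions instead of rebuilding it at every executor init. A minimal sketch of that idea, with invented names (this is not PostgreSQL's actual API):

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical sketch of caching a tuple-descriptor "template" in the
 * plan so that it is built once instead of at every executor init.
 * All names are invented; this is not PostgreSQL's actual API. */
typedef struct PlanTemplate
{
	int			natts;		/* number of target-list entries */
	int		   *atttypes;	/* cached per-column type ids, or NULL */
} PlanTemplate;

static int	tupdesc_builds = 0;	/* counts how often we really build */

/* Returns the cached descriptor, building it on first use only. */
static const int *
get_cached_tupdesc(PlanTemplate *plan, const int *tl_types, int natts)
{
	if (plan->atttypes == NULL)
	{
		/* the expensive step - done once per plan, not per execution */
		plan->atttypes = malloc(sizeof(int) * natts);
		memcpy(plan->atttypes, tl_types, sizeof(int) * natts);
		plan->natts = natts;
		tupdesc_builds++;
	}
	return plan->atttypes;
}
```

With wide tables (60-120 columns) and functions called per row, moving this work from per-execution to per-plan is exactly where the profiled ExecTypeFromTLInternal time would go away.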


Re: [HACKERS] [PATCH] A hook for session start

2017-10-06 Thread Pavel Stehule
2017-10-07 6:49 GMT+02:00 Nico Williams <n...@cryptonector.com>:

> On Sat, Oct 07, 2017 at 05:44:00AM +0200, Pavel Stehule wrote:
> > 2017-10-06 21:36 GMT+02:00 Nico Williams <n...@cryptonector.com>:
> > > But the nice thing about them is that you need only create them once,
> so
> > > leave them in the catalog.  Stats about them should not be gathered nor
> > > stored, since they could be different per-session.
> >
> > Unfortunately one field from pg_class are not static - reltuples should
> be
> > per session.
>
> It's "only an estimate" "used by the query planner".  We could estimate
> zero for global temp tables, and the query planner can get the true
> value from an internal temp table.
>

It can be a solution.


> > But it can be moved to different table
>
> That too, if it's OK.
>
> Nico
> --
>


Re: [HACKERS] [PATCH] A hook for session start

2017-10-06 Thread Pavel Stehule
2017-10-06 21:36 GMT+02:00 Nico Williams <n...@cryptonector.com>:

> On Fri, Oct 06, 2017 at 08:51:53PM +0200, Pavel Stehule wrote:
> > 2017-10-06 20:39 GMT+02:00 Nico Williams <n...@cryptonector.com>:
> > > On Fri, Oct 06, 2017 at 06:37:57PM +0200, Pavel Stehule wrote:
> > > > When we talked about this topic, there are two issues:
> > > >
> > > > a) probably not too hard issue - some internal data can be in
> session sys
> > > > cache.
> > > >
> > > > b) the session sys data should be visible on SQL level too (for some
> > > tools
> > > > and consistency) - it is hard task.
> > >
> > > Can you expand on this?
> >
> > If global temporary tables should be effective, then you have not have
> > modify system catalogue after creating. But lot of processes requires it
> -
> > ANALYZE, query planning.
>
> But the nice thing about them is that you need only create them once, so
> leave them in the catalog.  Stats about them should not be gathered nor
> stored, since they could be different per-session.
>

Unfortunately, one field of pg_class is not static - reltuples should be
per session.

But it can be moved to a different table.

Regards

Pavel
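The per-session reltuples idea can be sketched as a session-local override table consulted before the shared catalog value - the planner falls back to the shared estimate (e.g. zero, as suggested above) when the session has no override. All names here are invented for illustration:

```c
#include <stdbool.h>

/* Sketch: keep reltuples for a global temp table in session-local
 * storage and fall back to the shared pg_class value when the session
 * has no override.  All names are invented. */
#define MAX_SESSION_RELS 16

typedef struct SessionRelStats
{
	unsigned	relid;
	double		reltuples;
	bool		used;
} SessionRelStats;

static SessionRelStats session_stats[MAX_SESSION_RELS];

/* Record this session's own estimate for one relation. */
static void
set_session_reltuples(unsigned relid, double reltuples)
{
	for (int i = 0; i < MAX_SESSION_RELS; i++)
	{
		if (!session_stats[i].used || session_stats[i].relid == relid)
		{
			session_stats[i].relid = relid;
			session_stats[i].reltuples = reltuples;
			session_stats[i].used = true;
			return;
		}
	}
}

/* Session-local value wins; otherwise use the shared estimate. */
static double
get_reltuples(unsigned relid, double shared_reltuples)
{
	for (int i = 0; i < MAX_SESSION_RELS; i++)
	{
		if (session_stats[i].used && session_stats[i].relid == relid)
			return session_stats[i].reltuples;
	}
	return shared_reltuples;
}
```

This is the "moved to a different table" shape: the shared catalog row never changes after creation, so no catalog bloat, while each session still planst with its own row count.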


Re: [HACKERS] [PATCH] A hook for session start

2017-10-06 Thread Pavel Stehule
2017-10-06 20:39 GMT+02:00 Nico Williams <n...@cryptonector.com>:

> On Fri, Oct 06, 2017 at 06:37:57PM +0200, Pavel Stehule wrote:
> > 2017-10-06 6:48 GMT+02:00 Nico Williams <n...@cryptonector.com>:
> > > On Fri, Oct 06, 2017 at 04:52:09AM +0200, Pavel Stehule wrote:
> > > > Current TEMP tables, if you do it for any session has pretty
> significant
> > > > overhead  - with possible risk of performance lost (system catalog
> > > bloat).
> > >
> > > Because of the DDLs for them?
> >
> > yes - pg_attribute, pg_class, pg_stats are bloating - and when these
> tables
> > are bloated, then DDL is slow.
>
> :(
>
> > > No, I want GLOBAL TEMP tables.
> >
> > me too :) - and lot of customer and users.
>
> > I though about it, but I have other on my top priority. GLOBAL TEMP TABLE
> > is on 90% unlogged table. But few fields should be session based instead
> > shared persistent - statistics, rows in pg_class, filenode.
>
> Unlogged tables don't provide isolation between sessions the way temp
> tables do, so I don't see the connection.
>
> But the necessary components (temp heaps and such) are all there, and I
> suspect a PoC could be done fairly quickly.  But there are some
> subtleties like that FKs between GLOBAL TEMP and persistent tables must
> not be allowed (in either direction), so a complete implementation will
> take significant work.
>
> The work looks like:
>
>  - add syntax (trivial)
>
>  - add new kind of persistence (lots of places to touch, but it's mostly
>mechanical)
>
>  - redirect all references to global temp table contents to temp
>heaps/indexes/whatever
>
>  - add logic to prevent FKs between persistent and global temp tables
>
>  - what else?
>
> > When we talked about this topic, there are two issues:
> >
> > a) probably not too hard issue - some internal data can be in session sys
> > cache.
> >
> > b) the session sys data should be visible on SQL level too (for some
> tools
> > and consistency) - it is hard task.
>
> Can you expand on this?
>

If global temporary tables are to be efficient, then you must not modify
the system catalog after creation. But lots of processes require it -
ANALYZE, query planning.

>
> Nico
> --
>


Re: [HACKERS] [PATCH] A hook for session start

2017-10-06 Thread Pavel Stehule
2017-10-06 6:48 GMT+02:00 Nico Williams <n...@cryptonector.com>:

> On Fri, Oct 06, 2017 at 04:52:09AM +0200, Pavel Stehule wrote:
> > 2017-10-05 22:31 GMT+02:00 Nico Williams <n...@cryptonector.com>:
> > > On Tue, Aug 01, 2017 at 03:36:23PM -0400, Peter Eisentraut wrote:
> > > > On 7/21/17 13:14, Jim Mlodgenski wrote:
> > > > > When I first saw this thread, my initial thought of a use case is
> to
> > > > > prepare some key application queries so they are there and ready
> to go.
> > > > > That would need to be before the ExecutorStart_hook or
> > > > > ProcessUtility_hook if an app would just want to execute the
> prepared
> > > > > statement.
> > > >
> > > > Isn't that what the preprepare extension does already?
> > >
> > > more generic facility -> more useful
> > >
> > > My use case is to pre-create TEMP schema elements that VIEWs,
> FUNCTIONs,
> > > and TRIGGERs, might need.
> >
> > It is better to work on GLOBAL TEMP tables.
>
> I don't disagree.
>
> In fact, I was scoping out what it might take to do that just yesterday.
>
> I've too thoughts on that: either a new relpersistence kind that is very
> similar to persistent, but which always uses temp heaps, or a modifier
> for the persistent kind that says to use temp heaps.  Either way it
> looks like it should be fairly straightforward (but then, i've only
> ever written one thing for PG, earlier this week, the ALWAYS DEFERRED
> thing).
>
> > Current TEMP tables, if you do it for any session has pretty significant
> > overhead  - with possible risk of performance lost (system catalog
> bloat).
>
> Because of the DDLs for them?
>

Yes - pg_attribute, pg_class, pg_stats bloat - and when these tables
are bloated, DDL is slow.



> > So often creating local temp tables is antipattern (in Postgres)
> > unfortunately.
>
> I do it plenty, but sometimes I use an UNLOGGED table with a txid column
> in the PK set to txid_current(), then I clean up where I can.  It'd be
> nice to have COMMIT triggers for cleaning up such rows, among other
> things.  I've implemented that using DDL event triggers, but to perform
> well it needs to be a native feature.
>
> > I am not sure, if we should to support this case more :( Probably is
> > better, so it is hard to use local TEMP tables.
>
> No, I want GLOBAL TEMP tables.
>

Me too :) - and lots of customers and users.

There is a workaround - you can use an array instead of temp tables in 50%
of cases. But it is not a solution in the other 50%.

I thought about it, but I have other things at top priority. A GLOBAL TEMP
TABLE is 90% an unlogged table. But a few fields should be session-based
instead of shared and persistent - statistics, rows in pg_class, filenode.

When we talked about this topic, there were two issues:

a) a probably not too hard issue - some internal data can be in the session
sys cache.

b) the session sys data should be visible at the SQL level too (for some
tools and for consistency) - that is a hard task.

Regards

Pavel


> Nico
> --
>


Re: [HACKERS] [PATCH] A hook for session start

2017-10-05 Thread Pavel Stehule
2017-10-05 22:31 GMT+02:00 Nico Williams :

> On Tue, Aug 01, 2017 at 03:36:23PM -0400, Peter Eisentraut wrote:
> > On 7/21/17 13:14, Jim Mlodgenski wrote:
> > > When I first saw this thread, my initial thought of a use case is to
> > > prepare some key application queries so they are there and ready to go.
> > > That would need to be before the ExecutorStart_hook or
> > > ProcessUtility_hook if an app would just want to execute the prepared
> > > statement.
> >
> > Isn't that what the preprepare extension does already?
>
> more generic facility -> more useful
>
> My use case is to pre-create TEMP schema elements that VIEWs, FUNCTIONs,
> and TRIGGERs, might need.
>

It is better to work on GLOBAL TEMP tables.

Current TEMP tables, if you create them in every session, have pretty
significant overhead - with a possible risk of performance loss (system
catalog bloat).

Pretty significant performance issues of my customers are related to temp
table usage (under high load).

So creating local temp tables is often an antipattern (in Postgres),
unfortunately.

I am not sure we should support this case more :( It is probably better
that local TEMP tables are hard to use.

Regards

Pavel

>
> Nico
> --
>
>
>


Re: [HACKERS] why subplan is 10x faster then function?

2017-10-04 Thread Pavel Stehule
2017-10-01 12:45 GMT+02:00 Sokolov Yura <funny.fal...@postgrespro.ru>:

> On 1 October 2017, 12:42:14 GMT+03:00, Pavel Stehule <
> pavel.steh...@gmail.com> writes:
> >2017-09-30 23:23 GMT+02:00 Pavel Stehule <pavel.steh...@gmail.com>:
> >
> >> Hi
> >>
> >> I have some strange slow queries based on usage "view" functions
> >>
> >> one function looks like this:
> >>
> >> CREATE OR REPLACE FUNCTION
> >ides_funcs.najdatsplt_cislo_exekuce(mid_najdatsplt
> >> bigint)
> >>  RETURNS character varying
> >>  LANGUAGE sql
> >>  STABLE
> >> AS $function$
> >> select CISLOEXEKUCE
> >>   from najzalobpr MT, najvzallok A1,
> >> NAJZALOBST A2, NAJZALOBCE A3 where
> >> MT.ID_NAJVZALLOK= A1.ID_NAJVZALLOK AND
> >> A1.ID_NAJZALOBST=A2.ID_NAJZALOBST AND
> >> A2.ID_NAJZALOBCE= A3.ID_NAJZALOBCE AND
> >> MT.ID_NAJDATSPLT = mID_NAJDATSPLT  LIMIT 1;
> >> $function$ cost 20
> >> ;
> >>
> >> I know so using this kind of functions is not good idea - it is
> >customer
> >> old code generated from Oracle. I had idea about possible planner
> >issues.
> >> But this is a executor issue.
> >>
> >> when this function is evaluated as function, then execution needs
> >about 46
> >> sec
> >>
> >> ->  Nested Loop Left Join  (cost=0.71..780360.31 rows=589657
> >> width=2700) (actual time=47796.588..47796.588 rows=0 loops=1)
> >>   ->  Nested Loop  (cost=0.29..492947.20 rows=589657
> >width=2559)
> >> (actual time=47796.587..47796.587 rows=0 loops=1)
> >> ->  Seq Scan on najdatsplt mt  (cost=0.00..124359.24
> >> rows=1106096 width=1013) (actual time=47796.587..47796.587 rows=0
> >loops=1)
> >>   Filter:
> >(najdatsplt_cislo_exekuce(id_najdatsplt) IS
> >> NOT NULL)
> >>   Rows Removed by Filter: 654
> >>
> >> When I use correlated subquery, then
> >>
> >>  ->  Nested Loop  (cost=0.29..19876820.11 rows=589657 width=2559)
> >(actual
> >> time=3404.154..3404.154 rows=0 loops=1)
> >>   ->  Seq Scan on najdatsplt mt  (cost=0.00..19508232.15 rows=1106096
> >> width=1013) (actual time=3404.153..3404.153 rows=0 loops=1)
> >>   Filter: ((SubPlan 11) IS NOT NULL)
> >>   Rows Removed by Filter: 654
> >>   SubPlan 11
> >> ->  Limit  (cost=1.10..17.49 rows=1 width=144) (actual
> >> time=0.002..0.002 rows=0 loops=654)
> >>   ->  Nested Loop  (cost=1.10..17.49 rows=1 width=144)
> >(actual
> >> time=0.002..0.002 rows=0 loops=654)
> >> ->  Nested Loop  (cost=0.83..17.02 rows=1
> >width=8)
> >> (actual time=0.002..0.002 rows=0 loops=654)
> >>   ->  Nested Loop  (cost=0.56..16.61 rows=1
> >> width=8) (actual time=0.002..0.002 rows=0 loops=654)
> >>
> >> The execution plan is +/- same - the bottleneck is in function
> >execution
> >>
> >> Tested with same result on 9.6, 10.
> >>
> >> Is known overhead of function execution?
> >>
> >>
> >profile of slow execution looks like
> >
> >+   24,71%24,40% 48235  postmaster  [.] SearchCatCache
> >+   14,25% 0,00% 0  postmaster  [unknown]   [.]
> >
> >+9,76% 9,65% 19071  postmaster  [.]
> >TupleDescInitEntry
> >+3,91% 3,86%  7625  postmaster  [.]
> >ExecAssignScanProjectionInfoWithVarno
> >+3,56% 3,52%  6955  postmaster  [.] AllocSetAlloc
> >+2,66% 2,63%  5193  postmaster  [.]
> >FunctionCall2Coll
> >+2,65% 2,62%  5183  postmaster  [.]
> >ResourceArrayRemove
> >+2,42% 2,39%  4719  postmaster  [.]
> >ExecTypeFromTLInternal
> >+2,21% 2,19%  4321  postmaster  [.]
> >DirectFunctionCall1Coll
> >+2,02% 2,00%  3961  postmaster  [.]
> >heap_getsysattr
> >+1,85% 1,82%  3604  postmaster  [.]
> >exprTypmod
> >+1,81% 1,79%  3540  postmaster  [.]
> >ResourceArrayAdd
> >+1,68% 1,66%  3282  postmaster  [.]
> >hash_uint32
> >+1,65% 1,63%  3214  postmaster  [.]
> >hash

Re: [HACKERS] issue: record or row variable cannot be part of multiple-item INTO list

2017-10-02 Thread Pavel Stehule
2017-10-02 18:44 GMT+02:00 Tom Lane :

> Robert Haas  writes:
> > On Mon, Oct 2, 2017 at 12:28 PM, Tom Lane  wrote:
> >> I'm not sure if that's true or not.  I am sure, though, that since
> >> we've done B for twenty years we can't just summarily change to A.
>
> > I agree, but so what?  You said that we couldn't adopt Pavel's
> > proposal for this reason:
>
> > ===
> > IIRC, the reason for disallowing that is that it's totally unclear what
> > the semantics ought to be.  Is that variable a single target (demanding
> > a compatible composite-valued column from the source query), or does it
> > eat one source column per field within the record/row?  The former is
> 100%
> > inconsistent with what happens if the record/row is the only INTO target;
> > while the latter would be very bug-prone, and it's especially unclear
> what
> > ought to happen if it's an as-yet-undefined record variable.
> > ===
>
> > And I'm saying - that argument is bogus.  Regardless of what people
> > want or what we have historically done in the case where the
> > record/row is the only INTO target, when there are multiple targets it
> > seems clear that they want to match up the query's output columns with
> > the INTO targets 1:1.  So let's just do that.
>
> Arguing that that's what people want (even if I granted your argument,
> which I do not) does not make the inconsistency magically disappear,
> nor will it stop people from being confused by that inconsistency.
> Furthermore, if we do it like this, we will be completely backed into
> a backwards-compatibility corner if someone does come along and say
> "hey, I wish I could do the other thing".
>
> I'm fine with doing something where we add new notation to dispel
> the ambiguity.  I don't want to put in another layer of inconsistency
> and then have even more backwards-compatibility problems constraining
> our response to the inevitable complaints.
>

I didn't talk about the record type. I talked only about composite
variables (ROW in our terminology).

I don't think special syntax is necessary for this case, although we could
use a parallel assignment with different semantics.

What is the motivation for this thread?

I had to migrate lots of Oracle procedures that usually had two OUT
variables - first, a known composite type (some state variable), and
second, a result (text or int variable). Now, the call of such a function
in Postgres is:

SELECT fx() INTO rec;
var_state := rec.state;
var_result := rec.result;

It works, Ora2pg supports it, plpgsql_check is able to check it, but it is
not elegant and is less readable.

So, when the target is not purely REC or ROW, I think we can allow
assignment with a few limits:

1. The REC type should not be used.
2. The target and source fields should be the same - this assignment should
not be tolerant like it is now. Because this situation is not supported
now, there is no compatibility risk.

Some modern and now well-known languages like Go support parallel
assignment. Could that be the special syntax requested by Tom?

So there are two proposals:

1. Implement a safe, restrictive SELECT INTO where the target can be a
combination of REC variables and scalars.
2. Parallel assignment with new behavior that allows any list of REC, ROW
or scalar targets - but a composite source column should be attached to a
composite variable, and a scalar to a scalar. A list of scalars as the
target for a composite value should be a) disallowed every time, or b)
disallowed when some target variable is a composite.

The differences between the assignment command and the INTO command can be
messy too. The best solution would be one set of rules for := and INTO -
but I am not sure that is possible.
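The restrictive 1:1 matching rule in the proposals can be sketched as a kind check over the target and source column lists - each target consumes exactly one source column of the same kind. The encoding ('C' for composite, 'S' for scalar) and the function are invented purely for illustration:

```c
#include <stdbool.h>

/* Sketch of the restrictive 1:1 INTO matching rule: each target
 * consumes exactly one source column, a composite target ('C') must
 * receive a composite column and a scalar target ('S') a scalar
 * column.  Kinds are encoded as characters for illustration. */
static bool
into_list_matches(const char *targets, const char *source_cols)
{
	int			i;

	for (i = 0; targets[i] != '\0' && source_cols[i] != '\0'; i++)
	{
		if (targets[i] != source_cols[i])
			return false;	/* composite vs scalar mismatch */
	}
	/* both lists must have the same length - no tolerant padding */
	return targets[i] == '\0' && source_cols[i] == '\0';
}
```

Under this rule the Oracle-migration case above - `SELECT fx() INTO var_state, var_result` with one composite and one scalar output - matches directly, while a composite value spilling into a list of scalars is rejected.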

Comments?

Regards

Pavel



> regards, tom lane
>


Re: [HACKERS] proposal - Default namespaces for XPath expressions (PostgreSQL 11)

2017-10-02 Thread Pavel Stehule
2017-10-02 12:22 GMT+02:00 Kyotaro HORIGUCHI <
horiguchi.kyot...@lab.ntt.co.jp>:

> Hi, thanks for the new patch.
>
> # The patch is missing xpath_parser.h. That of the first patch was usable.
>
> At Thu, 28 Sep 2017 07:59:41 +0200, Pavel Stehule <pavel.steh...@gmail.com>
> wrote in 

Re: [HACKERS] SQL/JSON in PostgreSQL

2017-10-01 Thread Pavel Stehule
2017-09-30 1:06 GMT+02:00 Nikita Glukhov <n.glu...@postgrespro.ru>:

> On 29.09.2017 20:07, Pavel Stehule wrote:
>
> 2017-09-29 12:15 GMT+02:00 Pavel Stehule <pavel.steh...@gmail.com>:
>
>>
>> 2017-09-29 12:09 GMT+02:00 Nikita Glukhov <n.glu...@postgrespro.ru>:
>>
>>>
>>>
>>> I have some free time now. Is it last version?
>>>
>>> Regards
>>>
>>> Pavel
>>>
>>> Yes, this is still the latest version. Now I am working only on
>>> unfinished WIP
>>> patch no. 9, but I think it should be reviewed the last.
>>>
>>>
>>
>> ok
>>
>> Thank you
>>
>
> I have few queries and notes
>
> 1. Why first patch holds Gin related functionality? Can be it separated?
>
> Yes, it can be easily separated. Attached archive with separated GIN patch
> no.2.
>
> 2. Why Json path functions starts by "_" ? These functions are not removed
> by other patches.
>
> Originally, these functions were created only for testing purposes and
> should
> be treated as "internal". But with introduction of jsonpath operators
> jsonpath
> tests can be completely rewritten using this operators.
>

Yes - it should be removed.

Probably the separation into jsonpath and sqljson parts is not fortunate (or
the sqljson part should not contain JSON_QUERY and related functions).

Why is this code in the patch?

+/* Example functions for JsonPath */
+
+static Datum
+returnDATUM(void *arg, bool *isNull)
+{
+	*isNull = false;
+	return PointerGetDatum(arg);
+}
+
+static Datum
+returnNULL(void *arg, bool *isNull)
+{
+	*isNull = true;
+	return Int32GetDatum(0);
+}
Regards

Pavel


> 3. What is base for jsonpath-extensions? ANSI/SQL?
>
> Our jsonpath extensions are not based on any standards, so they are quite
> dangerous because they can conflict with the standard in the future.
>
> This patch is pretty big - so I propose to push JSONPath and SQL/JSON
> related patches first, and then in next iteration to push JSON_TABLE patch.
> Is it acceptable strategy?
>
> I think it's acceptable. And this was the main reason for the separation
> of patches.
>
> I am sure so JSON_TABLE is pretty important function, but it is pretty
> complex too (significantly more complex than XMLTABLE), so it can be
> practiacal to move this function to separate project. I hope so all patches
> will be merged in release 11 time.
>
>
> --
> Nikita Glukhov
> Postgres Professional: http://www.postgrespro.com
> The Russian Postgres Company
>


Re: [HACKERS] why subplan is 10x faster than function?

2017-10-01 Thread Pavel Stehule
2017-09-30 23:23 GMT+02:00 Pavel Stehule <pavel.steh...@gmail.com>:

> Hi
>
> I have some strange slow queries based on usage "view" functions
>
> one function looks like this:
>
> CREATE OR REPLACE FUNCTION ides_funcs.najdatsplt_cislo_exekuce(mid_najdatsplt
> bigint)
>  RETURNS character varying
>  LANGUAGE sql
>  STABLE
> AS $function$
> select CISLOEXEKUCE
>   from najzalobpr MT, najvzallok A1,
> NAJZALOBST A2, NAJZALOBCE A3 where
> MT.ID_NAJVZALLOK= A1.ID_NAJVZALLOK AND
> A1.ID_NAJZALOBST=A2.ID_NAJZALOBST AND
> A2.ID_NAJZALOBCE= A3.ID_NAJZALOBCE AND
> MT.ID_NAJDATSPLT = mID_NAJDATSPLT  LIMIT 1;
> $function$ cost 20
> ;
>
> I know so using this kind of functions is not good idea - it is customer
> old code generated from Oracle. I had idea about possible planner issues.
> But this is a executor issue.
>
> when this function is evaluated as function, then execution needs about 46
> sec
>
> ->  Nested Loop Left Join  (cost=0.71..780360.31 rows=589657
> width=2700) (actual time=47796.588..47796.588 rows=0 loops=1)
>   ->  Nested Loop  (cost=0.29..492947.20 rows=589657 width=2559)
> (actual time=47796.587..47796.587 rows=0 loops=1)
> ->  Seq Scan on najdatsplt mt  (cost=0.00..124359.24
> rows=1106096 width=1013) (actual time=47796.587..47796.587 rows=0 loops=1)
>   Filter: (najdatsplt_cislo_exekuce(id_najdatsplt) IS
> NOT NULL)
>   Rows Removed by Filter: 654
>
> When I use correlated subquery, then
>
>  ->  Nested Loop  (cost=0.29..19876820.11 rows=589657 width=2559) (actual
> time=3404.154..3404.154 rows=0 loops=1)
>   ->  Seq Scan on najdatsplt mt  (cost=0.00..19508232.15 rows=1106096
> width=1013) (actual time=3404.153..3404.153 rows=0 loops=1)
>   Filter: ((SubPlan 11) IS NOT NULL)
>   Rows Removed by Filter: 654
>   SubPlan 11
> ->  Limit  (cost=1.10..17.49 rows=1 width=144) (actual
> time=0.002..0.002 rows=0 loops=654)
>   ->  Nested Loop  (cost=1.10..17.49 rows=1 width=144) (actual
> time=0.002..0.002 rows=0 loops=654)
> ->  Nested Loop  (cost=0.83..17.02 rows=1 width=8)
> (actual time=0.002..0.002 rows=0 loops=654)
>   ->  Nested Loop  (cost=0.56..16.61 rows=1
> width=8) (actual time=0.002..0.002 rows=0 loops=654)
>
> The execution plan is +/- same - the bottleneck is in function execution
>
> Tested with same result on 9.6, 10.
>
> Is known overhead of function execution?
>
>
It looks like this nested query is expensive - some expensive operations
are pushed to exec_init_node. When the query is executed from a function,
exec_init_node is called too often.




> Regards
>
> Pavel
>


Re: [HACKERS] why subplan is 10x faster than function?

2017-10-01 Thread Pavel Stehule
2017-10-01 12:45 GMT+02:00 Sokolov Yura <funny.fal...@postgrespro.ru>:

> On 1 October 2017 12:42:14 GMT+03:00, Pavel Stehule <
> pavel.steh...@gmail.com> wrote:
> >2017-09-30 23:23 GMT+02:00 Pavel Stehule <pavel.steh...@gmail.com>:
> >
> >> Hi
> >>
> >> I have some strange slow queries based on usage "view" functions
> >>
> >> one function looks like this:
> >>
> >> CREATE OR REPLACE FUNCTION
> >ides_funcs.najdatsplt_cislo_exekuce(mid_najdatsplt
> >> bigint)
> >>  RETURNS character varying
> >>  LANGUAGE sql
> >>  STABLE
> >> AS $function$
> >> select CISLOEXEKUCE
> >>   from najzalobpr MT, najvzallok A1,
> >> NAJZALOBST A2, NAJZALOBCE A3 where
> >> MT.ID_NAJVZALLOK= A1.ID_NAJVZALLOK AND
> >> A1.ID_NAJZALOBST=A2.ID_NAJZALOBST AND
> >> A2.ID_NAJZALOBCE= A3.ID_NAJZALOBCE AND
> >> MT.ID_NAJDATSPLT = mID_NAJDATSPLT  LIMIT 1;
> >> $function$ cost 20
> >> ;
> >>
> >> I know so using this kind of functions is not good idea - it is
> >customer
> >> old code generated from Oracle. I had idea about possible planner
> >issues.
> >> But this is a executor issue.
> >>
> >> when this function is evaluated as function, then execution needs
> >about 46
> >> sec
> >>
> >> ->  Nested Loop Left Join  (cost=0.71..780360.31 rows=589657
> >> width=2700) (actual time=47796.588..47796.588 rows=0 loops=1)
> >>   ->  Nested Loop  (cost=0.29..492947.20 rows=589657
> >width=2559)
> >> (actual time=47796.587..47796.587 rows=0 loops=1)
> >> ->  Seq Scan on najdatsplt mt  (cost=0.00..124359.24
> >> rows=1106096 width=1013) (actual time=47796.587..47796.587 rows=0
> >loops=1)
> >>   Filter:
> >(najdatsplt_cislo_exekuce(id_najdatsplt) IS
> >> NOT NULL)
> >>   Rows Removed by Filter: 654
> >>
> >> When I use correlated subquery, then
> >>
> >>  ->  Nested Loop  (cost=0.29..19876820.11 rows=589657 width=2559)
> >(actual
> >> time=3404.154..3404.154 rows=0 loops=1)
> >>   ->  Seq Scan on najdatsplt mt  (cost=0.00..19508232.15 rows=1106096
> >> width=1013) (actual time=3404.153..3404.153 rows=0 loops=1)
> >>   Filter: ((SubPlan 11) IS NOT NULL)
> >>   Rows Removed by Filter: 654
> >>   SubPlan 11
> >> ->  Limit  (cost=1.10..17.49 rows=1 width=144) (actual
> >> time=0.002..0.002 rows=0 loops=654)
> >>   ->  Nested Loop  (cost=1.10..17.49 rows=1 width=144)
> >(actual
> >> time=0.002..0.002 rows=0 loops=654)
> >> ->  Nested Loop  (cost=0.83..17.02 rows=1
> >width=8)
> >> (actual time=0.002..0.002 rows=0 loops=654)
> >>   ->  Nested Loop  (cost=0.56..16.61 rows=1
> >> width=8) (actual time=0.002..0.002 rows=0 loops=654)
> >>
> >> The execution plan is +/- same - the bottleneck is in function
> >execution
> >>
> >> Tested with same result on 9.6, 10.
> >>
> >> Is known overhead of function execution?
> >>
> >>
> >profile of slow execution looks like
> >
> >+   24,71%24,40% 48235  postmaster  [.] SearchCatCache
> >+   14,25% 0,00% 0  postmaster  [unknown]   [.]
> >
> >+9,76% 9,65% 19071  postmaster  [.]
> >TupleDescInitEntry
> >+3,91% 3,86%  7625  postmaster  [.]
> >ExecAssignScanProjectionInfoWithVarno
> >+3,56% 3,52%  6955  postmaster  [.] AllocSetAlloc
> >+2,66% 2,63%  5193  postmaster  [.]
> >FunctionCall2Coll
> >+2,65% 2,62%  5183  postmaster  [.]
> >ResourceArrayRemove
> >+2,42% 2,39%  4719  postmaster  [.]
> >ExecTypeFromTLInternal
> >+2,21% 2,19%  4321  postmaster  [.]
> >DirectFunctionCall1Coll
> >+2,02% 2,00%  3961  postmaster  [.]
> >heap_getsysattr
> >+1,85% 1,82%  3604  postmaster  [.]
> >exprTypmod
> >+1,81% 1,79%  3540  postmaster  [.]
> >ResourceArrayAdd
> >+1,68% 1,66%  3282  postmaster  [.]
> >hash_uint32
> >+1,65% 1,63%  3214  postmaster  [.]
> >hash_search_with_hash_value
> >+1,64% 1,62%  3208  postmaster  [.]
> >CatalogCacheComputeHashValue
> >+1,28% 1,26%  2498  postmaster  [.]
> >MemoryContextAllocZeroAligned
> >+1,25% 1,24%  2446  postmaster  [.] palloc0
> >
> >Any ides why SearchCatCache is called too often?
> >
> >
> >
> >> Regards
> >>
> >> Pavel
> >>
>
> Looks like you've already collected profile with call-graph. So you can
> tell us where it were called from.
>

There was zero info there. I'll try to install this database on my notebook,
and I'll see.

Pavel


>
> With regards,
> --
> Sokolov Yura aka funny_falcon
>


Re: [HACKERS] why subplan is 10x faster than function?

2017-10-01 Thread Pavel Stehule
2017-09-30 23:23 GMT+02:00 Pavel Stehule <pavel.steh...@gmail.com>:

> Hi
>
> I have some strange slow queries based on usage "view" functions
>
> one function looks like this:
>
> CREATE OR REPLACE FUNCTION ides_funcs.najdatsplt_cislo_exekuce(mid_najdatsplt
> bigint)
>  RETURNS character varying
>  LANGUAGE sql
>  STABLE
> AS $function$
> select CISLOEXEKUCE
>   from najzalobpr MT, najvzallok A1,
> NAJZALOBST A2, NAJZALOBCE A3 where
> MT.ID_NAJVZALLOK= A1.ID_NAJVZALLOK AND
> A1.ID_NAJZALOBST=A2.ID_NAJZALOBST AND
> A2.ID_NAJZALOBCE= A3.ID_NAJZALOBCE AND
> MT.ID_NAJDATSPLT = mID_NAJDATSPLT  LIMIT 1;
> $function$ cost 20
> ;
>
> I know so using this kind of functions is not good idea - it is customer
> old code generated from Oracle. I had idea about possible planner issues.
> But this is a executor issue.
>
> when this function is evaluated as function, then execution needs about 46
> sec
>
> ->  Nested Loop Left Join  (cost=0.71..780360.31 rows=589657
> width=2700) (actual time=47796.588..47796.588 rows=0 loops=1)
>   ->  Nested Loop  (cost=0.29..492947.20 rows=589657 width=2559)
> (actual time=47796.587..47796.587 rows=0 loops=1)
> ->  Seq Scan on najdatsplt mt  (cost=0.00..124359.24
> rows=1106096 width=1013) (actual time=47796.587..47796.587 rows=0 loops=1)
>   Filter: (najdatsplt_cislo_exekuce(id_najdatsplt) IS
> NOT NULL)
>   Rows Removed by Filter: 654
>
> When I use correlated subquery, then
>
>  ->  Nested Loop  (cost=0.29..19876820.11 rows=589657 width=2559) (actual
> time=3404.154..3404.154 rows=0 loops=1)
>   ->  Seq Scan on najdatsplt mt  (cost=0.00..19508232.15 rows=1106096
> width=1013) (actual time=3404.153..3404.153 rows=0 loops=1)
>   Filter: ((SubPlan 11) IS NOT NULL)
>   Rows Removed by Filter: 654
>   SubPlan 11
> ->  Limit  (cost=1.10..17.49 rows=1 width=144) (actual
> time=0.002..0.002 rows=0 loops=654)
>   ->  Nested Loop  (cost=1.10..17.49 rows=1 width=144) (actual
> time=0.002..0.002 rows=0 loops=654)
> ->  Nested Loop  (cost=0.83..17.02 rows=1 width=8)
> (actual time=0.002..0.002 rows=0 loops=654)
>   ->  Nested Loop  (cost=0.56..16.61 rows=1
> width=8) (actual time=0.002..0.002 rows=0 loops=654)
>
> The execution plan is +/- same - the bottleneck is in function execution
>
> Tested with same result on 9.6, 10.
>
> Is known overhead of function execution?
>
>
The profile of the slow execution looks like this:

+   24,71%24,40% 48235  postmaster  [.] SearchCatCache
+   14,25% 0,00% 0  postmaster  [unknown]   [.]

+9,76% 9,65% 19071  postmaster  [.]
TupleDescInitEntry
+3,91% 3,86%  7625  postmaster  [.]
ExecAssignScanProjectionInfoWithVarno
+3,56% 3,52%  6955  postmaster  [.] AllocSetAlloc
+2,66% 2,63%  5193  postmaster  [.]
FunctionCall2Coll
+2,65% 2,62%  5183  postmaster  [.]
ResourceArrayRemove
+2,42% 2,39%  4719  postmaster  [.]
ExecTypeFromTLInternal
+2,21% 2,19%  4321  postmaster  [.]
DirectFunctionCall1Coll
+2,02% 2,00%  3961  postmaster  [.]
heap_getsysattr
+1,85% 1,82%  3604  postmaster  [.]
exprTypmod
+1,81% 1,79%  3540  postmaster  [.]
ResourceArrayAdd
+1,68% 1,66%  3282  postmaster  [.]
hash_uint32
+1,65% 1,63%  3214  postmaster  [.]
hash_search_with_hash_value
+1,64% 1,62%  3208  postmaster  [.]
CatalogCacheComputeHashValue
+1,28% 1,26%  2498  postmaster  [.]
MemoryContextAllocZeroAligned
+1,25% 1,24%  2446  postmaster  [.] palloc0

Any ideas why SearchCatCache is called so often?



> Regards
>
> Pavel
>


Re: [HACKERS] extension build issue with PostgreSQL 10 on Centos6

2017-09-30 Thread Pavel Stehule
2017-10-01 4:55 GMT+02:00 Devrim Gündüz :

>
> Hi,
>
> On Sat, 2017-09-30 at 11:15 -0400, Tom Lane wrote:
> > So the point is that postgresql-devel now needs to have a dependency
> > on icu-devel.
>
> Oh, I see. Ack, added. Will appear in 10.0 RPMs.
>

Thank you

Pavel

>
> Regards,
> --
> Devrim Gündüz
> EnterpriseDB: https://www.enterprisedb.com
> PostgreSQL Consultant, Red Hat Certified Engineer
> Twitter: @DevrimGunduz , @DevrimGunduzTR
>


[HACKERS] why subplan is 10x faster than function?

2017-09-30 Thread Pavel Stehule
Hi

I have some strangely slow queries based on the usage of "view" functions.

one function looks like this:

CREATE OR REPLACE FUNCTION
ides_funcs.najdatsplt_cislo_exekuce(mid_najdatsplt bigint)
 RETURNS character varying
 LANGUAGE sql
 STABLE
AS $function$
select CISLOEXEKUCE
  from najzalobpr MT, najvzallok A1,
NAJZALOBST A2, NAJZALOBCE A3 where
MT.ID_NAJVZALLOK= A1.ID_NAJVZALLOK AND
A1.ID_NAJZALOBST=A2.ID_NAJZALOBST AND
A2.ID_NAJZALOBCE= A3.ID_NAJZALOBCE AND
MT.ID_NAJDATSPLT = mID_NAJDATSPLT  LIMIT 1;
$function$ cost 20
;

I know that using this kind of function is not a good idea - it is the
customer's old code generated from Oracle. I had an idea about possible
planner issues, but this is an executor issue.

When this function is evaluated as a function, execution needs about 46
seconds:

->  Nested Loop Left Join  (cost=0.71..780360.31 rows=589657
width=2700) (actual time=47796.588..47796.588 rows=0 loops=1)
  ->  Nested Loop  (cost=0.29..492947.20 rows=589657 width=2559)
(actual time=47796.587..47796.587 rows=0 loops=1)
->  Seq Scan on najdatsplt mt  (cost=0.00..124359.24
rows=1106096 width=1013) (actual time=47796.587..47796.587 rows=0 loops=1)
  Filter: (najdatsplt_cislo_exekuce(id_najdatsplt) IS
NOT NULL)
  Rows Removed by Filter: 654

When I use a correlated subquery instead:

 ->  Nested Loop  (cost=0.29..19876820.11 rows=589657 width=2559) (actual
time=3404.154..3404.154 rows=0 loops=1)
  ->  Seq Scan on najdatsplt mt  (cost=0.00..19508232.15 rows=1106096
width=1013) (actual time=3404.153..3404.153 rows=0 loops=1)
  Filter: ((SubPlan 11) IS NOT NULL)
  Rows Removed by Filter: 654
  SubPlan 11
->  Limit  (cost=1.10..17.49 rows=1 width=144) (actual
time=0.002..0.002 rows=0 loops=654)
  ->  Nested Loop  (cost=1.10..17.49 rows=1 width=144) (actual
time=0.002..0.002 rows=0 loops=654)
->  Nested Loop  (cost=0.83..17.02 rows=1 width=8)
(actual time=0.002..0.002 rows=0 loops=654)
  ->  Nested Loop  (cost=0.56..16.61 rows=1
width=8) (actual time=0.002..0.002 rows=0 loops=654)

The execution plan is more or less the same - the bottleneck is in the function execution.

Tested with the same result on 9.6 and 10.

Is this a known overhead of function execution?
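For reference, the correlated-subquery variant compared above can be obtained by inlining the function body into the filter by hand. A sketch (the select list is abbreviated, and the inner alias is renamed to M to avoid clashing with the outer mt; table and column names are taken from the function body above):

```sql
SELECT mt.*
  FROM najdatsplt mt
 WHERE (SELECT CISLOEXEKUCE
          FROM najzalobpr M, najvzallok A1, NAJZALOBST A2, NAJZALOBCE A3
         WHERE M.ID_NAJVZALLOK = A1.ID_NAJVZALLOK
           AND A1.ID_NAJZALOBST = A2.ID_NAJZALOBST
           AND A2.ID_NAJZALOBCE = A3.ID_NAJZALOBCE
           AND M.ID_NAJDATSPLT = mt.ID_NAJDATSPLT
         LIMIT 1) IS NOT NULL;
```

This form lets the planner run the subplan directly instead of paying the per-call SQL-function setup cost.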

Regards

Pavel


Re: [HACKERS] extension build issue with PostgreSQL 10 on Centos6

2017-09-30 Thread Pavel Stehule
2017-09-30 8:18 GMT+02:00 Tom Lane <t...@sss.pgh.pa.us>:

> Pavel Stehule <pavel.steh...@gmail.com> writes:
> > /usr/pgsql-10/include/server/utils/pg_locale.h:19:26: error:
> > unicode/ucol.h: Adresář nebo soubor neexistuje
>
> > This mean not optional dependency on ICU.
>
> Really?  That file has
>
> #ifdef USE_ICU
> #include 
> #endif
>
> So if you managed to get that error without having said --with-icu,
> I'd be very interested to see how.
>

probably the RPM is built with --with-icu



> regards, tom lane
>


Re: [HACKERS] extension build issue with PostgreSQL 10 on Centos6

2017-09-29 Thread Pavel Stehule
2017-09-30 7:54 GMT+02:00 Pavel Stehule <pavel.steh...@gmail.com>:

> Hi
>
> There is new, not well solved dependency. I have not any problem to build
> extension with PostgreSQL 9.6 there
>
> [oracle@ora2pg plpgsql_check]$ make
> gcc -Wall -Wmissing-prototypes -Wpointer-arith
> -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute
> -Wformat-security -fno-strict-aliasing -fwrapv -O2 -g -pipe -Wall
> -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector
> --param=ssp-buffer-size=4 -m64 -mtune=generic -DLINUX_OOM_SCORE_ADJ=0 -fPIC
> -I/usr/pgsql-10/lib/pgxs/src/makefiles/../../src/pl/plpgsql/src -I. -I./
> -I/usr/pgsql-10/include/server -I/usr/pgsql-10/include/internal
> -I/usr/include -D_GNU_SOURCE -I/usr/include/libxml2  -I/usr/include  -c -o
> plpgsql_check.oplpgsql_check.c
> In file included from /usr/pgsql-10/include/server/tsearch/ts_locale.h:18,
>  from plpgsql_check.c:88:
> /usr/pgsql-10/include/server/utils/pg_locale.h:19:26: error:
> unicode/ucol.h: Adresář nebo soubor neexistuje
> In file included from /usr/pgsql-10/include/server/tsearch/ts_locale.h:18,
>  from plpgsql_check.c:88:
> /usr/pgsql-10/include/server/utils/pg_locale.h:94: error: expected
> specifier-qualifier-list before ‘UCollator’
> /usr/pgsql-10/include/server/utils/pg_locale.h:108: error: expected ‘)’
> before ‘*’ token
> /usr/pgsql-10/include/server/utils/pg_locale.h:109: warning: type
> defaults to ‘int’ in declaration of ‘UChar’
> /usr/pgsql-10/include/server/utils/pg_locale.h:109: error: expected ‘;’,
> ‘,’ or ‘)’ before ‘*’ token
> make: *** [plpgsql_check.o] Error 1
>
> This mean not optional dependency on ICU.
>

It was fixed by installing libicu-devel - so it should be in the dependency
list of postgresql10-devel.



> Installed from rpm
>
> Regards
>
> Pavel
>


[HACKERS] extension build issue with PostgreSQL 10 on Centos6

2017-09-29 Thread Pavel Stehule
Hi

There is a new, not well-handled dependency. I did not have any problem
building this extension against PostgreSQL 9.6 there.

[oracle@ora2pg plpgsql_check]$ make
gcc -Wall -Wmissing-prototypes -Wpointer-arith
-Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute
-Wformat-security -fno-strict-aliasing -fwrapv -O2 -g -pipe -Wall
-Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector
--param=ssp-buffer-size=4 -m64 -mtune=generic -DLINUX_OOM_SCORE_ADJ=0 -fPIC
-I/usr/pgsql-10/lib/pgxs/src/makefiles/../../src/pl/plpgsql/src -I. -I./
-I/usr/pgsql-10/include/server -I/usr/pgsql-10/include/internal
-I/usr/include -D_GNU_SOURCE -I/usr/include/libxml2  -I/usr/include  -c -o
plpgsql_check.o plpgsql_check.c
In file included from /usr/pgsql-10/include/server/tsearch/ts_locale.h:18,
 from plpgsql_check.c:88:
/usr/pgsql-10/include/server/utils/pg_locale.h:19:26: error:
unicode/ucol.h: Adresář nebo soubor neexistuje
In file included from /usr/pgsql-10/include/server/tsearch/ts_locale.h:18,
 from plpgsql_check.c:88:
/usr/pgsql-10/include/server/utils/pg_locale.h:94: error: expected
specifier-qualifier-list before ‘UCollator’
/usr/pgsql-10/include/server/utils/pg_locale.h:108: error: expected ‘)’
before ‘*’ token
/usr/pgsql-10/include/server/utils/pg_locale.h:109: warning: type defaults
to ‘int’ in declaration of ‘UChar’
/usr/pgsql-10/include/server/utils/pg_locale.h:109: error: expected ‘;’,
‘,’ or ‘)’ before ‘*’ token
make: *** [plpgsql_check.o] Error 1

This means a non-optional dependency on ICU.

Installed from rpm

Regards

Pavel


Re: [HACKERS] SQL/JSON in PostgreSQL

2017-09-29 Thread Pavel Stehule
2017-09-30 1:06 GMT+02:00 Nikita Glukhov <n.glu...@postgrespro.ru>:

> On 29.09.2017 20:07, Pavel Stehule wrote:
>
> 2017-09-29 12:15 GMT+02:00 Pavel Stehule <pavel.steh...@gmail.com>:
>
>>
>> 2017-09-29 12:09 GMT+02:00 Nikita Glukhov <n.glu...@postgrespro.ru>:
>>
>>>
>>>
>>> I have some free time now. Is it last version?
>>>
>>> Regards
>>>
>>> Pavel
>>>
>>> Yes, this is still the latest version. Now I am working only on
>>> unfinished WIP
>>> patch no. 9, but I think it should be reviewed the last.
>>>
>>>
>>
>> ok
>>
>> Thank you
>>
>
> I have few queries and notes
>
> 1. Why first patch holds Gin related functionality? Can be it separated?
>
> Yes, it can be easily separated. Attached archive with separated GIN patch
> no.2.
>
> 2. Why Json path functions starts by "_" ? These functions are not removed
> by other patches.
>
> Originally, these functions were created only for testing purposes and
> should
> be treated as "internal". But with introduction of jsonpath operators
> jsonpath
> tests can be completely rewritten using this operators.
>
> 3. What is base for jsonpath-extensions? ANSI/SQL?
>
> Our jsonpath extensions are not based on any standards, so they are quite
> dangerous because they can conflict with the standard in the future.
>
> This patch is pretty big - so I propose to push JSONPath and SQL/JSON
> related patches first, and then in next iteration to push JSON_TABLE patch.
> Is it acceptable strategy?
>
> I think it's acceptable. And this was the main reason for the separation
> of patches.
>

I prefer to move it to another commitfest item. It will simplify
communication between us and potential committers - and we can concentrate
better on a smaller set of code.



> I am sure so JSON_TABLE is pretty important function, but it is pretty
> complex too (significantly more complex than XMLTABLE), so it can be
> practiacal to move this function to separate project. I hope so all patches
> will be merged in release 11 time.
>
>
> --
> Nikita Glukhov
> Postgres Professional: http://www.postgrespro.com
> The Russian Postgres Company
>


Re: [HACKERS] SQL/JSON in PostgreSQL

2017-09-29 Thread Pavel Stehule
2017-09-29 12:15 GMT+02:00 Pavel Stehule <pavel.steh...@gmail.com>:

>
>
> 2017-09-29 12:09 GMT+02:00 Nikita Glukhov <n.glu...@postgrespro.ru>:
>
>>
>>
>> I have some free time now. Is it last version?
>>
>> Regards
>>
>> Pavel
>>
>> Yes, this is still the latest version. Now I am working only on
>> unfinished WIP
>> patch no. 9, but I think it should be reviewed the last.
>>
>>
>
> ok
>
> Thank you
>

I have a few queries and notes:

1. Why does the first patch hold GIN-related functionality? Can it be
separated?

2. Why do the JSON path functions start with "_"? These functions are not
removed by the other patches.

3. What is the basis for the jsonpath extensions? ANSI SQL?

This patch is pretty big - so I propose to push the JSONPath and SQL/JSON
related patches first, and then in the next iteration push the JSON_TABLE
patch. Is that an acceptable strategy? I am sure that JSON_TABLE is a pretty
important function, but it is pretty complex too (significantly more complex
than XMLTABLE), so it can be practical to move this function to a separate
project. I hope all patches will be merged in the release 11 time frame.

Regards

Pavel



> Pavel
>
> --
>> Nikita Glukhov
>> Postgres Professional: http://www.postgrespro.com
>> The Russian Postgres Company
>>
>
>


Re: [HACKERS] SQL/JSON in PostgreSQL

2017-09-29 Thread Pavel Stehule
2017-09-29 12:09 GMT+02:00 Nikita Glukhov :

>
>
> I have some free time now. Is it last version?
>
> Regards
>
> Pavel
>
> Yes, this is still the latest version. Now I am working only on unfinished
> WIP
> patch no. 9, but I think it should be reviewed the last.
>
>

ok

Thank you

Pavel

-- 
> Nikita Glukhov
> Postgres Professional: http://www.postgrespro.com
> The Russian Postgres Company
>


Re: [HACKERS] SQL/JSON in PostgreSQL

2017-09-29 Thread Pavel Stehule
Hi

2017-09-16 1:31 GMT+02:00 Nikita Glukhov :

> On 15.09.2017 22:36, Oleg Bartunov wrote:
>
> On Fri, Sep 15, 2017 at 7:31 PM, Robert Haas 
>> wrote:
>>
>>> On Fri, Sep 15, 2017 at 10:10 AM, Daniel Gustafsson 
>>> wrote:
>>>
 Can we expect a rebased version of this patch for this commitfest?
 Since it’s
 a rather large feature it would be good to get it in as early as we can
 in the
 process.

>>> Again, given that this needs a "major" rebase and hasn't been updated
>>> in a month, and given that the CF is already half over, this should
>>> just be bumped to the next CF.  We're supposed to be trying to review
>>> things that were ready to go by the start of the CF, not the end.
>>>
>> We are supporting v10 branch in our github repository
>> https://github.com/postgrespro/sqljson/tree/sqljson_v10
>>
>> Since the first post we made a lot of changes, mostly because of
>> better understanding the standard and availability of technical report
>> (http://standards.iso.org/ittf/PubliclyAvailableStandards/c0
>> 67367_ISO_IEC_TR_19075-6_2017.zip).
>> Most important are:
>>
>> 1.We abandoned FORMAT support, which could confuse our users, since we
>> have data types json[b].
>>
>> 2. We use XMLTABLE infrastructure, extended for  JSON_TABLE support.
>>
>> 3. Reorganize commits, so we could split one big patch by several
>> smaller patches, which could be reviewed independently.
>>
>> 4. The biggest problem is documentation, we are working on it.
>>
>> Nikita will submit patches soon.
>>
>
> Attached archive with 9 patches rebased onto latest master.
>
> 0001-jsonpath-v02.patch:
>  - jsonpath type
>  - jsonpath execution on jsonb type
>  - jsonpath operators for jsonb type
>  - GIN support for jsonpath operators
>
> 0002-jsonpath-json-v02.patch:
>  - jsonb-like iterators for json type
>  - jsonpath execution on json type
>  - jsonpath operators for json type
>
> 0003-jsonpath-extensions-v02.patch:
> 0004-jsonpath-extensions-tests-for-json-v02.patch:
>  - some useful standard extensions with tests
>  0005-sqljson-v02.patch:
>  - SQL/JSON constructors (JSON_OBJECT[AGG], JSON_ARRAY[AGG])
>  - SQL/JSON query functions (JSON_VALUE, JSON_QUERY, JSON_EXISTS)
>  - IS JSON predicate
>
> 0006-sqljson-json-v02.patch:
>  - SQL/JSON support for json type and tests
>
> 0007-json_table-v02.patch:
>  - JSON_TABLE using XMLTABLE infrastructure
>
> 0008-json_table-json-v02.patch:
>  - JSON_TABLE support for json type
>
> 0009-wip-extensions-v02.patch:
>  - FORMAT JSONB
>  - jsonb to/from bytea casts
>  - jsonpath operators
>  - some unfinished jsonpath extensions
>
>
> Originally, JSON path was implemented only for jsonb type, and I decided to
> add jsonb-like iterators for json type for json support implementation with
> minimal changes in JSON path code.  This solution (see jsonpath_json.c from
> patch 0002) looks a little dubious to me, so I separated json support into
> independent patches.
>
> The last WIP patch 0009 is unfinished and contains a lot of FIXMEs.  But
> the ability to use arbitrary Postgres operators in JSON path with
> explicitly
> specified  types is rather interesting, and I think it should be shown now
> to get a some kind of pre-review.
>
> We are supporting v11 and v10 branches in our github repository:
>
> https://github.com/postgrespro/sqljson/tree/sqljson
> https://github.com/postgrespro/sqljson/tree/sqljson_wip
> https://github.com/postgrespro/sqljson/tree/sqljson_v10
> https://github.com/postgrespro/sqljson/tree/sqljson_v10_wip
>
> Attached patches can be produced simply by combining groups of consecutive
> commits from these branches.
>
>
I have some free time now. Is this the latest version?

Regards

Pavel

--
> Nikita Glukhov
> Postgres Professional:http://www.postgrespro.com
> The Russian Postgres Company
>
>


[HACKERS] plpgsql_check future

2017-09-28 Thread Pavel Stehule
Hi

plpgsql_check is a mature project now, and I would like to start a discussion
about the future of this project. It is still a private project, although it
is important for one of the key PostgreSQL features - PL/pgSQL. I would be
happy if the community could take some responsibility for this project. The
project is too small to create its own community and infrastructure like
PostGIS has.

Development is quiet most of the time. There are two issues:

1. It is hosted in my personal repository on GitHub, which is far from
perfect from a security and sustainability perspective.

2. There are problems with builds for platforms other than Linux. Builds on
MS Windows are difficult for me, because I don't use Windows.

plpgsql_check is too big to be part of contrib, but I would welcome something
similar that helps with the mentioned issues.

Any ideas?

Regards

Pavel


Re: [HACKERS] proposal - Default namespaces for XPath expressions (PostgreSQL 11)

2017-09-28 Thread Pavel Stehule
Hi

Now xpath and xpath_exists support the default namespace too.

Updated the documentation and fixed all variants of the expected-result
test files.

Regards

Pavel
diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index 2f036015cc..610f709933 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -10477,7 +10477,8 @@ SELECT xml_is_well_formed_document('http://postgresql.org/stuf
  second the namespace URI. It is not required that aliases provided in
  this array be the same as those being used in the XML document itself (in
  other words, both in the XML document and in the xpath
- function context, aliases are local).
+ function context, aliases are local). A default namespace has an
+ empty name (empty string), and there should be only one.
 
 
 
@@ -10496,8 +10497,8 @@ SELECT xpath('/my:a/text()', 'http://example.com;>test',
 
  To deal with default (anonymous) namespaces, do something like this:
 

Re: [HACKERS] proposal - Default namespaces for XPath expressions (PostgreSQL 11)

2017-09-27 Thread Pavel Stehule
Hi

2017-09-25 13:25 GMT+02:00 Kyotaro HORIGUCHI <
horiguchi.kyot...@lab.ntt.co.jp>:

> Hello, this patch have been ignored for a long time since its proposal...
>
> At Sat, 11 Mar 2017 20:44:31 +0100, Pavel Stehule <pavel.steh...@gmail.com>
> wrote in <CAFj8pRB+WDyDcZyGmfRdJ0HOoXugeaL-KNFeK9YA5Z10JN9qfA@mail.gmail.
> com>
> > Hi
> >
> > This proposal is followup of implementation of XMLTABLE.
> >
> > Lot of XML documents has assigned document namespace.
> >
> > http://x.y;>10
> >
> > For these XML document any search path must use schema "http://x.y;.
> This
> > is not too intuitive, and from XMLTABLE usage is not too user friendly,
> > because the default column path (same like column name) cannot be used. A
> > solution of this issue is default namespace - defined in SQL/XML.
> >
> > example - related to previous xml
> >
> > without default namespace:
> > XMLTABLE(NAMESPACES('http://x.y' AS aux),
> > '/aux:rows/aux:row' PASSING ...
> > COLUMNS a int PATH 'aux:a')
> >
> > with default namespace
> > XMLTABLE(NAMESPACES(DEFAULT 'http://x.y'),
> > '/rows/row' PASSING ...
> > COLUMNS a int);
> >
> >
> > Unfortunately the libxml2 doesn't support default namespaces in XPath
> > expressions. Because the libxml2 functionality is frozen, there is not
> big
> > chance for support in near future. A implementation is not too hard -
> > although it requires simple XPath expressions state translator.
> >
> > The databases with XMLTABLE implementation supports default namespace for
> > XPath expressions.
> >
> > The patch for initial implementation is attached.
>
> The original message is a bit less informative for those who
> wants to review this but are not accustomed (like me) to this
> area. I try to augment this with a bit more information. (Perhaps)
>
> An example of this issue can be as follows.
>
> create table t1 (id int, doc xml);
> insert into t1
>   values
>   (1, '<rows xmlns="http://x.y"><row><a>10</a></row></rows>'),
>   (2, '<rows xmlns="http://x.y"><row><a>20</a></row></rows>'),
>   (3, '<rows xmlns="http://x.y"><row><a>30</a></row></rows>'),
>   (4, '<rows xmlns="http://x.y"><row><a>40</a></row></rows>');
> select x.* from t1, xmltable('/rows/row' passing t1.doc columns data int
> PATH 'a') as x;
> |  data
> | --
> | (0 rows)
> select x.* from t1, xmltable(XMLNAMESPACES('http://x.y' as n),
> '/n:rows/n:row' passing t1.doc columns data int PATH 'n:a') as x;
> |  data
> | --
> |10
> |20
> |30
> |40
> | (4 rows)
>
> But, currently the follwing command fails with error.
>
> select x.* from t1, xmltable(XMLNAMESPACES(DEFAULT 'http://x.y'),
> '/rows/row' passing t1.doc columns data int PATH 'a') as x;
> | ERROR:  DEFAULT namespace is not supported
>
> This patch let PostgreSQL allow this syntax by transforming xpath
> string when DEFAULT namespace is defined.
>

> ===
> I have some review comments.
>
> This patch still applies with shifts and works as expected.
>
> 1. Uniformity among similar features
>
>   As mentioned in the proposal, there is a lack of uniformity in that
>   the XPath transformer is applied only to xmltable and not to other
>   XPath-related functions.
>

I have to fix the XPath function. The SQL/XML function XMLEXISTS doesn't
support namespaces.

>
>
> 2. XML conformance
>
>    I'm not yet sure this works for the full extent of the
>    available syntax, but it at least fails for the following
>    expression.
>
> (delete from t1;)
> insert into t1
>values
>    (5, '<rows xmlns="http://x.y"><row><a hoge="1">50</a></row></rows>');
>
> select x.* from t1, xmltable(XMLNAMESPACES(DEFAULT 'http://x.y'),
> '/rows/row' passing t1.doc columns data int PATH 'a[1][@hoge]') as x;
> >  data
> > --
> >
> > (1 row)
>
>
>   The following expression works.
>
> select x.* from t1, xmltable(XMLNAMESPACES('http://x.y' as x),
> '/x:rows/x:row' passing t1.doc columns data int PATH 'x:a[1][@hoge]') as x;
> >  data
> > --
> >50
> > (1 row)
>
>   The w3c says as follows.
>
>   https://www.w3.org/TR/xml-names/#defaulting
>   > The namespace name for an unprefixed attribute name always has no
> value.
>
>   We currently don't have a means to make sure that this works
>   correctly over the whole extent. Would more regression tests help?
>
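The attribute rule quoted above can be checked directly with Python's standard-library XPath subset; in this sketch (the document and attribute value are illustrative) the element names need the namespace prefix while the unprefixed attribute matches with none:

```python
import xml.etree.ElementTree as ET

# The root element carries a default namespace, and <a> carries an
# unprefixed attribute "hoge" (its value here is illustrative).
doc = ET.fromstring(
    '<rows xmlns="http://x.y"><row><a hoge="1">50</a></row></rows>'
)
ns = {'n': 'http://x.y'}

# Element names inherit the default namespace, so they must be qualified
# with a prefix bound to it; the unprefixed attribute is in no namespace
# (per the W3C rule), so @hoge matches without any prefix.
hits = doc.findall('./n:row/n:a[@hoge]', ns)
print([e.text for e in hits])  # ['50']
```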

I fixed this issue, and I used your examples as regression tests.

>
>
> 3. The predefined prefix for default namespace
>
>   The patch defines the name of the default namespace as
>   "pgdefnamespace". If a default namespace is defined, a
>   namespace is ma

Re: [HACKERS] logical replication and statistics

2017-09-27 Thread Pavel Stehule
Hi

I rechecked, and now it looks like it works as expected.

Probably I did something wrong.

Sorry for noise

Regards

Pavel


Re: [HACKERS] logical replication and statistics

2017-09-26 Thread Pavel Stehule
2017-09-26 11:56 GMT+02:00 Pavel Stehule <pavel.steh...@gmail.com>:

>
>
> 2017-09-26 11:51 GMT+02:00 Masahiko Sawada <sawada.m...@gmail.com>:
>
>> On Tue, Sep 26, 2017 at 2:50 AM, Pavel Stehule <pavel.steh...@gmail.com>
>> wrote:
>> >
>> >
>> > 2017-09-25 19:23 GMT+02:00 Petr Jelinek <petr.jeli...@2ndquadrant.com>:
>> >>
>> >> On 25/09/17 19:19, Tom Lane wrote:
>> >> > Pavel Stehule <pavel.steh...@gmail.com> writes:
>> >> >> I had two instances on one server with different port. I am sure, so
>> >> >> replication was functional. Only one issue is statistics
>> >> >
>> >> >> Master:
>> >> >
>> >> >> CREATE TABLE foo(id int primary key, a int);
>> >> >> CREATE PUBLICATION test_pub FOR TABLE foo;
>> >> >> INSERT INTO foo VALUES(1, 200);
>> >> >
>> >> >> slave
>> >> >
>> >> >> CREATE TABLE foo(id int primary key, a int);
>> >> >> CREATE SUBSCRIPTION test_sub CONNECTION 'port=5432' PUBLICATION
>> >> >> test_pub;
>> >> >
>> >> >> That was all
>> >> >
>> >> > In this example, nothing's been done yet by the actual replication
>> >> > apply process, only by the initial table sync.  Maybe that accounts
>> >> > for your not seeing stats?
>> >> >
>> >>
>> >> The main replication worker should still be running though. The output
>> >> of pg_stat_replication should only be empty if there is nothing
>> running.
>> >>
>> >
>> > I did some inserts, updates, ..
>> >
>> > I can recheck it - it was done on 10 RC
>>
>> I guess CREATE SUBSCRIPTION failed for whatever reason (e.g, wal_level
>> < logical on the master). Didn't you get errors from CREATE
>> SUBSCRIPTION?
>>
>
> sorry I had wal_level = logical
>

But if I remember correctly, maybe I had this level only on the master.

>
> Pavel
>
>>
>> Regards,
>>
>> --
>> Masahiko Sawada
>> NIPPON TELEGRAPH AND TELEPHONE CORPORATION
>> NTT Open Source Software Center
>>
>
>


Re: [HACKERS] logical replication and statistics

2017-09-26 Thread Pavel Stehule
2017-09-26 11:51 GMT+02:00 Masahiko Sawada <sawada.m...@gmail.com>:

> On Tue, Sep 26, 2017 at 2:50 AM, Pavel Stehule <pavel.steh...@gmail.com>
> wrote:
> >
> >
> > 2017-09-25 19:23 GMT+02:00 Petr Jelinek <petr.jeli...@2ndquadrant.com>:
> >>
> >> On 25/09/17 19:19, Tom Lane wrote:
> >> > Pavel Stehule <pavel.steh...@gmail.com> writes:
> >> >> I had two instances on one server with different port. I am sure, so
> >> >> replication was functional. Only one issue is statistics
> >> >
> >> >> Master:
> >> >
> >> >> CREATE TABLE foo(id int primary key, a int);
> >> >> CREATE PUBLICATION test_pub FOR TABLE foo;
> >> >> INSERT INTO foo VALUES(1, 200);
> >> >
> >> >> slave
> >> >
> >> >> CREATE TABLE foo(id int primary key, a int);
> >> >> CREATE SUBSCRIPTION test_sub CONNECTION 'port=5432' PUBLICATION
> >> >> test_pub;
> >> >
> >> >> That was all
> >> >
> >> > In this example, nothing's been done yet by the actual replication
> >> > apply process, only by the initial table sync.  Maybe that accounts
> >> > for your not seeing stats?
> >> >
> >>
> >> The main replication worker should still be running though. The output
> >> of pg_stat_replication should only be empty if there is nothing running.
> >>
> >
> > I did some inserts, updates, ..
> >
> > I can recheck it - it was done on 10 RC
>
> I guess CREATE SUBSCRIPTION failed for whatever reason (e.g, wal_level
> < logical on the master). Didn't you get errors from CREATE
> SUBSCRIPTION?
>

sorry I had wal_level = logical

Pavel

>
> Regards,
>
> --
> Masahiko Sawada
> NIPPON TELEGRAPH AND TELEPHONE CORPORATION
> NTT Open Source Software Center
>


Re: [HACKERS] logical replication and statistics

2017-09-25 Thread Pavel Stehule
2017-09-25 19:23 GMT+02:00 Petr Jelinek <petr.jeli...@2ndquadrant.com>:

> On 25/09/17 19:19, Tom Lane wrote:
> > Pavel Stehule <pavel.steh...@gmail.com> writes:
> >> I had two instances on one server with different port. I am sure, so
> >> replication was functional. Only one issue is statistics
> >
> >> Master:
> >
> >> CREATE TABLE foo(id int primary key, a int);
> >> CREATE PUBLICATION test_pub FOR TABLE foo;
> >> INSERT INTO foo VALUES(1, 200);
> >
> >> slave
> >
> >> CREATE TABLE foo(id int primary key, a int);
> >> CREATE SUBSCRIPTION test_sub CONNECTION 'port=5432' PUBLICATION
> test_pub;
> >
> >> That was all
> >
> > In this example, nothing's been done yet by the actual replication
> > apply process, only by the initial table sync.  Maybe that accounts
> > for your not seeing stats?
> >
>
> The main replication worker should still be running though. The output
> of pg_stat_replication should only be empty if there is nothing running.
>
>
I did some inserts, updates, etc.

I can recheck it - it was done on 10 RC.

Pavel

--
>   Petr Jelinek  http://www.2ndQuadrant.com/
>   PostgreSQL Development, 24x7 Support, Training & Services
>


Re: [HACKERS] logical replication and statistics

2017-09-25 Thread Pavel Stehule
2017-09-25 18:30 GMT+02:00 Petr Jelinek <petr.jeli...@2ndquadrant.com>:

> On 25/09/17 13:33, Pavel Stehule wrote:
> >
> >
> > 2017-09-25 13:12 GMT+02:00 Masahiko Sawada <sawada.m...@gmail.com
> > <mailto:sawada.m...@gmail.com>>:
> >
> > On Mon, Sep 25, 2017 at 12:58 AM, Pavel Stehule
> > <pavel.steh...@gmail.com <mailto:pavel.steh...@gmail.com>> wrote:
> > > Hi
> > >
> > > I did trivial example of logical replication (one table, one
> publication,
> > > one subscription)
> > >
> > > I am little bit surprised so after some work - the replication is
> working,
> > > the statistics are empty
> > >
> > > #master
> > > postgres=# select * from pg_stat_replication ;
> > > (0 rows)
> > >
> > > #slave
> > > postgres=# select * from pg_stat_subscription ;
> > > -[ RECORD 1 ]-+-
> > > subid | 16472
> > > subname   | test_sub
> > > pid   |
> > > relid |
> > > received_lsn  |
> > > last_msg_send_time|
> > > last_msg_receipt_time |
> > > latest_end_lsn|
> > > latest_end_time   |
> > >
> > > Should be some enabled?
> > >
> >
> > If the subscription is disabled, the statistics of subscription is
> > empty and no wal sender processes launch. The test_sub can start the
> > replication by ALTER SUBSCRIPTION test_sub ENABLE.
> >
> >
> > I used this subscriptions for and it was warking.
>
>
> If there is no pid, the worker is not running. And if there is nothing
> in pg_stat_replication on master, the walsender is not running either,
> so it seems like it's not actually working.
>

I had two instances on one server with different ports. I am sure the
replication was functional. The only issue is the statistics.

Master:

CREATE TABLE foo(id int primary key, a int);
CREATE PUBLICATION test_pub FOR TABLE foo;
INSERT INTO foo VALUES(1, 200);

slave

CREATE TABLE foo(id int primary key, a int);
CREATE SUBSCRIPTION test_sub CONNECTION 'port=5432' PUBLICATION test_pub;

That was all



> --
>   Petr Jelinek  http://www.2ndQuadrant.com/
>   PostgreSQL Development, 24x7 Support, Training & Services
>


Re: [HACKERS] logical replication and statistics

2017-09-25 Thread Pavel Stehule
2017-09-25 13:12 GMT+02:00 Masahiko Sawada <sawada.m...@gmail.com>:

> On Mon, Sep 25, 2017 at 12:58 AM, Pavel Stehule <pavel.steh...@gmail.com>
> wrote:
> > Hi
> >
> > I did trivial example of logical replication (one table, one publication,
> > one subscription)
> >
> > I am little bit surprised so after some work - the replication is
> working,
> > the statistics are empty
> >
> > #master
> > postgres=# select * from pg_stat_replication ;
> > (0 rows)
> >
> > #slave
> > postgres=# select * from pg_stat_subscription ;
> > -[ RECORD 1 ]-+-
> > subid | 16472
> > subname   | test_sub
> > pid   |
> > relid |
> > received_lsn  |
> > last_msg_send_time|
> > last_msg_receipt_time |
> > latest_end_lsn|
> > latest_end_time   |
> >
> > Should be some enabled?
> >
>
> If the subscription is disabled, the statistics of subscription is
> empty and no wal sender processes launch. The test_sub can start the
> replication by ALTER SUBSCRIPTION test_sub ENABLE.
>

I used this subscription and it was working.

Regards

Pavel


> Regards,
>
> --
> Masahiko Sawada
> NIPPON TELEGRAPH AND TELEPHONE CORPORATION
> NTT Open Source Software Center
>


[HACKERS] logical replication and statistics

2017-09-24 Thread Pavel Stehule
Hi

I did a trivial example of logical replication (one table, one publication,
one subscription).

I am a little bit surprised that, after some work, the replication is
working but the statistics are empty.

#master
postgres=# select * from pg_stat_replication ;
(0 rows)

#slave
postgres=# select * from pg_stat_subscription ;
-[ RECORD 1 ]-+-
subid | 16472
subname   | test_sub
pid   |
relid |
received_lsn  |
last_msg_send_time|
last_msg_receipt_time |
latest_end_lsn|
latest_end_time   |

Should something be enabled?

Regards

Pavel


Re: [HACKERS] pgbench - use enum for meta commands

2017-09-22 Thread Pavel Stehule
2017-09-23 5:45 GMT+02:00 Fabien COELHO :

>
> Minor code enhancement.
>
> While having a look at adding if/elif/else/endif to pgbench, and given the
> current gset/cset added meta commands in cf queue, it occured to me that
> repeated string comparisons to check for the various meta commands is
> neither efficient nor readable. Use an enum instead, which are extensively
> used already for other similar purposes.
>
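To make the idea concrete, here is a hedged Python sketch of the refactoring Fabien describes (the actual patch is C inside pgbench; the command set and the lookup-table approach shown here are illustrative only):

```python
from enum import Enum, auto

# Command names are hypothetical stand-ins for pgbench's meta commands
# (\set, \sleep, \setshell, \shell, and the proposed \gset).
class MetaCommand(Enum):
    SET = auto()
    SLEEP = auto()
    SETSHELL = auto()
    SHELL = auto()
    GSET = auto()
    UNKNOWN = auto()

# Build the string -> enum table once.
_LOOKUP = {c.name.lower(): c for c in MetaCommand
           if c is not MetaCommand.UNKNOWN}

def parse_meta(word: str) -> MetaCommand:
    """One string lookup at script-parse time, instead of a chain of
    string comparisons re-run on every execution of the command."""
    return _LOOKUP.get(word, MetaCommand.UNKNOWN)

cmd = parse_meta("sleep")
# Execution then dispatches on cheap enum comparisons.
if cmd is MetaCommand.SLEEP:
    print("handle \\sleep")
```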

+1

Pavel


> --
> Fabien.
>
> --
> Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-hackers
>
>


Re: [HACKERS] Re: proposal - psql: possibility to specify sort for describe commands, when size is printed

2017-09-22 Thread Pavel Stehule
2017-09-22 21:12 GMT+02:00 Peter Eisentraut <
peter.eisentr...@2ndquadrant.com>:

> On 9/22/17 09:16, Pavel Stehule wrote:
> > Example: somebody set SORT_COLUMNS to schema_name value. This is
> > nonsense for \l command
> >
> > Now, I am thinking so more correct and practical design is based on
> > special mode, activated by variable
> >
> > PREFER_SIZE_SORT .. (off, asc, desc)
> >
> > This has sense for wide group of commands that can show size. And when
> > size is not visible, then this option is not active.
>
> Maybe this shouldn't be a variable at all.  It's not like you'll set
> this as a global preference.  You probably want it for one command only.
>  So a per-command option might make more sense.
>

Sure, I cannot know what users will do. But when I need to see the size of
objects, I prefer sorting by size descending every time. If I need to find
some object, I can use search in the pager. So in my case, this setting
will be in psqlrc. At GoodData we used our own customization for years -
the order by size was hardcoded and nobody reported any issue to me.

Alexander proposed a per-command option, but the current syntax of psql
commands doesn't allow simple parametrization. To be user friendly, it
should be short. From the implementation perspective, it should be simple
to parse. It should be intuitive too - too many symbols together is not a
good idea.

Maybe some prefix design - but that is not a design for common people
(although these people don't usually use psql):

\sort size \dt ?

\dt:sort_by_size
\dt+:sort_by_size ?

I don't see any good design in this direction.

Regards

Pavel







>
> --
> Peter Eisentraut  http://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>


Re: [HACKERS] Re: proposal - psql: possibility to specify sort for describe commands, when size is printed

2017-09-22 Thread Pavel Stehule
2017-09-21 20:30 GMT+02:00 Pavel Stehule <pavel.steh...@gmail.com>:

>
>
> 2017-09-21 20:20 GMT+02:00 Peter Eisentraut <peter.eisentraut@2ndquadrant.
> com>:
>
>> On 9/21/17 13:54, Pavel Stehule wrote:
>> > I see where you are coming from, but there is no association in the
>> > existing UI that equates "+" to the word "verbose".  I think just
>> > removing the verbose prefix and applying the sorting behavior in all
>> > cases should be easier to explain and implement.
>> >
>> > I though about it - but I am not sure if one kind of these variables is
>> > practical.
>> >
>> > if I don't need a size, then sort by schema, name is ok (I didn't need
>> > any else ever). With only one kind of these variables, this setting is
>> > common - what is not practical.
>>
>> But you are proposing also to add a variable configuring the sort
>> direction.  It would be weird that \dX+ observed the sort direction but
>> \dX did not.
>>
>
> yes and no.
>
> schema_name, name_schema or SORT_DIRECTION has sense for both type of
> commands.
>
> size sort has sense only for \dX+ command.
>
> I am thinking about solution and the most clean I see two distinct
> variables:
>
> SORT_COLUMNS and VERBOSE_SORT_COLUMNS
>
> when VERBOSE_SORT_COLUMNS will be undefined, then SORT_COLUMNS is used for
> \dX+ command too.
>
> Is it acceptable?
>

I thought more about it, and I think this direction is not good.

Example: somebody sets SORT_COLUMNS to the schema_name value. This is
nonsense for the \l command.

Now I think a more correct and practical design is based on a special
mode, activated by a variable:

PREFER_SIZE_SORT .. (off, asc, desc)

This makes sense for the wide group of commands that can show size. When
size is not visible, this option is not active.
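A rough Python model of the proposed behavior (names hypothetical; psql itself would do this in C by adjusting the ORDER BY of its describe queries):

```python
# Hypothetical model of PREFER_SIZE_SORT (off | asc | desc).
def order_relations(rels, prefer_size_sort="off", size_visible=False):
    """rels: list of (schema, name, size_bytes) tuples."""
    if size_visible and prefer_size_sort in ("asc", "desc"):
        return sorted(rels, key=lambda r: r[2],
                      reverse=(prefer_size_sort == "desc"))
    # Default: sort by schema, then name - what \dt does today.
    return sorted(rels, key=lambda r: (r[0], r[1]))

rels = [("public", "a", 300), ("public", "b", 100), ("aux", "c", 200)]

# Size column shown (\dt+): the size sort applies.
print(order_relations(rels, "desc", size_visible=True))
# Size column hidden (\dt): the option stays inactive, default sort used.
print(order_relations(rels, "desc", size_visible=False))
```

The key point of the proposal is the `size_visible` guard: the setting only changes the order when the size column is actually displayed.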

What do you think about this proposal?

Regards

Pavel



>
>
>
>> --
>> Peter Eisentraut  http://www.2ndQuadrant.com/
>> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>>
>
>


Re: [HACKERS] [PROPOSAL] Temporal query processing with range types

2017-09-22 Thread Pavel Stehule
2017-09-22 10:15 GMT+02:00 Peter Moser <pitiz...@gmail.com>:

> 2017-09-22 10:06 GMT+02:00 Pavel Stehule <pavel.steh...@gmail.com>:
> > ANSI SQL 2011 has temporal data support
> >
> > https://www.slideshare.net/CraigBaumunk/temporal-
> extensions-tosql20112012010438
>
> As operations it only supports temporal inner joins using the overlap
> predicate.
> Temporal aggregation, temporal outer joins, temporal duplicate
> elimination, and temporal set operations are not supported in
> SQL:2011.
> Please see [1] Section 2.5 Future directions.
>
> Best regards,
> Anton, Johann, Michael, Peter
>
>
> [1] https://cs.ulb.ac.be/public/_media/teaching/infoh415/
> tempfeaturessql2011.pdf


Thank you for info.

Currently Postgres has zero support for SQL:2011 temporal tables. Isn't it
better to start with already-standardized features than to add something
non-standard? The standard has a concept, and if we start outside of that
concept, the result will probably end up far from the standard.

Regards

Pavel


Re: [HACKERS] [PROPOSAL] Temporal query processing with range types

2017-09-22 Thread Pavel Stehule
2017-09-22 9:59 GMT+02:00 Peter Moser <pitiz...@gmail.com>:

> 2017-09-12 16:33 GMT+02:00 Simon Riggs <si...@2ndquadrant.com>:
> > PostgreSQL tries really very hard to implement the SQL Standard and
> > just the standard. ISTM that the feedback you should have been given
> > is that this is very interesting but will not be committed in its
> > current form; I am surprised to see nobody has said that, though you
> > can see the truth of that since nobody is actively looking to review
> > or commit this. Obviously if the standard were changed to support
> > these things we'd suddenly be interested...
>
> Ok, we understand that PostgreSQL wants to strictly follow the SQL
> standard, which is not yet defined for temporal databases. In this
> context we understand your comment and agree on your position.
>

ANSI SQL 2011 has temporal data support

https://www.slideshare.net/CraigBaumunk/temporal-extensions-tosql20112012010438

Regards

Pavel Stehule


Re: [HACKERS] Re: proposal - psql: possibility to specify sort for describe commands, when size is printed

2017-09-21 Thread Pavel Stehule
2017-09-21 20:20 GMT+02:00 Peter Eisentraut <
peter.eisentr...@2ndquadrant.com>:

> On 9/21/17 13:54, Pavel Stehule wrote:
> > I see where you are coming from, but there is no association in the
> > existing UI that equates "+" to the word "verbose".  I think just
> > removing the verbose prefix and applying the sorting behavior in all
> > cases should be easier to explain and implement.
> >
> > I though about it - but I am not sure if one kind of these variables is
> > practical.
> >
> > if I don't need a size, then sort by schema, name is ok (I didn't need
> > any else ever). With only one kind of these variables, this setting is
> > common - what is not practical.
>
> But you are proposing also to add a variable configuring the sort
> direction.  It would be weird that \dX+ observed the sort direction but
> \dX did not.
>

Yes and no.

schema_name, name_schema, or SORT_DIRECTION makes sense for both types of
commands.

A size sort makes sense only for the \dX+ commands.

I am thinking about a solution, and the cleanest I see is two distinct
variables:

SORT_COLUMNS and VERBOSE_SORT_COLUMNS

When VERBOSE_SORT_COLUMNS is undefined, SORT_COLUMNS is used for the \dX+
commands too.

Is that acceptable?




> --
> Peter Eisentraut  http://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>


Re: [HACKERS] Re: proposal - psql: possibility to specify sort for describe commands, when size is printed

2017-09-21 Thread Pavel Stehule
2017-09-21 15:30 GMT+02:00 Peter Eisentraut <
peter.eisentr...@2ndquadrant.com>:

> On 9/21/17 04:27, Pavel Stehule wrote:
> > yes. It was designed for + commands only. Can be enhanced to all
> > commands - then VERBOSE prefix should be removed - not sure if it is
> > necessary. For me interesting different order than default is only in
> > verbose mode.
>
> I see where you are coming from, but there is no association in the
> existing UI that equates "+" to the word "verbose".  I think just
> removing the verbose prefix and applying the sorting behavior in all
> cases should be easier to explain and implement.
>

I thought about it - but I am not sure one kind of these variables is
practical.

If I don't need a size, then sorting by schema, name is OK (I never needed
anything else). With only one kind of these variables, this setting is
common to both forms - which is not practical.

I need the sort by size in verbose mode (where size is visible) 100% of
the time - so it will be saved to psqlrc. And when the size is invisible,
then sorting by size is not practical and can be confusing (because the
size is not visible).

So I don't think removing the VERBOSE prefix is a good idea - or we should
do a different design (I have no idea how).

Regards

Pavel


> --
> Peter Eisentraut  http://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>


Re: [HACKERS] Re: proposal - psql: possibility to specify sort for describe commands, when size is printed

2017-09-21 Thread Pavel Stehule
2017-09-21 10:19 GMT+02:00 Alexander Korotkov <a.korot...@postgrespro.ru>:

> On Thu, Sep 21, 2017 at 1:53 AM, Peter Eisentraut <
> peter.eisentr...@2ndquadrant.com> wrote:
>
>> On 9/8/17 00:13, Pavel Stehule wrote:
>> > I am sending rebased patch
>> >
>> > rebased again + fix obsolete help
>>
>> Why are the variables called VERBOSE_SORT_* ?  What is verbose about them?
>
>
> I assume Pavel called them so, because they are working only for "verbose"
> mode of command.  I.e. they are working for \dt+ not \dt.
> However, in \dt 2 of 3 sorting modes might work: schema_name and
> name_schema.  Thus, I think it worths enabling these variables for "non
> verbose" mode of commands too.
>

Yes. It was designed for + commands only. It can be enhanced to all
commands - then the VERBOSE prefix should be removed - but I'm not sure
that is necessary. For me, an order different from the default is
interesting only in verbose mode.





> --
> Alexander Korotkov
> Postgres Professional: http://www.postgrespro.com
> The Russian Postgres Company
>


Re: [HACKERS] type cache for concat functions

2017-09-19 Thread Pavel Stehule
2017-09-19 21:11 GMT+02:00 Tom Lane <t...@sss.pgh.pa.us>:

> Pavel Stehule <pavel.steh...@gmail.com> writes:
> > [ faster-concat-2.patch ]
>
> Pushed with some cosmetic adjustments (mostly better comments).
>

Thank you very much

Pavel


> regards, tom lane
>


Re: [HACKERS] Re: issue: record or row variable cannot be part of multiple-item INTO list

2017-09-19 Thread Pavel Stehule
2017-09-19 20:29 GMT+02:00 Tom Lane <t...@sss.pgh.pa.us>:

> Pavel Stehule <pavel.steh...@gmail.com> writes:
> > 2017-09-14 12:33 GMT+02:00 Anthony Bykov <a.by...@postgrespro.ru>:
> >> As far as I understand, this patch adds functionality (correct me if I'm
> >> wrong) for users. Shouldn't there be any changes in doc/src/sgml/ with
> the
> >> description of new functionality?
>
> > It removes undocumented limit. I recheck plpgsql documentation and there
> > are not any rows about prohibited combinations of data types.
>
> I remain of the opinion that this patch is a fundamentally bad idea.
> It creates an inconsistency between what happens if the INTO target list
> is a single record/row variable (it absorbs all the columns of the query
> output) and what happens if a record/row variable is part of a
> multi-variable target list (now it just absorbs one column, which had
> better be composite).  That's going to confuse people, especially since
> you haven't documented it.  But even with documentation, it doesn't seem
> like good design.  Aside from being inconsistent, it doesn't cover all
> the cases --- what if you have just one query output column, that is
> composite, and you'd like it to go into a composite variable?  That
> doesn't work today, and this patch doesn't fix it, but it does create
> enough confusion that we never would be able to fix it.
>
> I'd be much happier if there were some notational difference
> between I-want-the-composite-variable-to-absorb-a-composite-column
> and I-want-the-composite-variable-to-absorb-N-scalar-columns.
> For backwards compatibility with what happens now, the latter would
> have to be the default.  I'm wondering about "var.*" or "(var)" as
> the notation signaling that you want the former, though neither of
> those seem like they're very intuitive.
>
> If we had a notation like that, it'd be possible to ask for either
> behavior within a larger target list, except that we'd still need
> to say that I-want-a-RECORD-variable-to-absorb-N-scalar-columns
> has to be the only thing in its target list.  Otherwise it's not
> very clear what N ought to be.  (In some cases maybe you could
> reverse-engineer N from context, but I think that'd be overly
> complicated and bug prone.)
>

I am not sure I understand your objection. This patch does nothing with
RECORD variables - there it is impossible, or pretty hard, to implement
any clean solution.

If we play some sophisticated game with multiple RECORD-type variables,
then probably some positional notation makes sense, and in that case we
cannot allow mixing scalar and composite values.

So SELECT s,s, C,s,C INTO sv, sv, CV, s, RV should be allowed,

but

SELECT s,s, C,s,C INTO R, CV, s, RV should be disallowed.

Regards

Pavel




>
> regards, tom lane
>


Re: [HACKERS] PoC plpgsql - possibility to force custom or generic plan

2017-09-19 Thread Pavel Stehule
2017-09-19 20:37 GMT+02:00 Robert Haas <robertmh...@gmail.com>:

> On Tue, Sep 19, 2017 at 12:45 PM, Pavel Stehule <pavel.steh...@gmail.com>
> wrote:
> >> You can already set a GUC with function scope.  I'm not getting your
> >> point.
> >
> > yes, it is true. But implementation of #option is limited to PLpgSQL - so
> > there is not any too much questions - GUC is global - there is lot of
> > points:
> >
> > * what is correct impact on PREPARE
> > * what is correct impact on EXECUTE
> > * what should be done if this GUC is changed ..
>
> For better or for worse, as a project we've settled on GUCs as a way
> to control behavior.  I think it makes more sense to try to apply that
> option to new behaviors we want to control than to invent some new
> system.
>

I have nothing against GUCs generally - it's just that, in this case, I
know what the expected behavior in the plpgsql environment is, and I lack
that knowledge elsewhere, so I think it would be a good start to do just
plpgsql (where this issue is mentioned often). A limited plpgsql
implementation is not a barrier against any other implementation.



> --
> Robert Haas
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
>


Re: [HACKERS] Re: proposal - psql: possibility to specify sort for describe commands, when size is printed

2017-09-19 Thread Pavel Stehule
Hi

2017-09-19 16:14 GMT+02:00 Alexander Korotkov <a.korot...@postgrespro.ru>:

> On Fri, Sep 8, 2017 at 7:13 AM, Pavel Stehule <pavel.steh...@gmail.com>
> wrote:
>
>> 2017-08-16 14:06 GMT+02:00 Pavel Stehule <pavel.steh...@gmail.com>:
>>
>>> Hi
>>>
>>> 2017-08-15 4:37 GMT+02:00 Peter Eisentraut <
>>> peter.eisentr...@2ndquadrant.com>:
>>>
>>>> On 3/11/17 07:06, Pavel Stehule wrote:
>>>> > I am sending a updated version with separated sort direction in
>>>> special
>>>> > variable
>>>>
>>>> This patch also needs a rebase.
>>>>
>>>
>>> I am sending rebased patch
>>>
>>
>> rebased again + fix obsolete help
>>
>
> For me, patch applies cleanly, builds and passed regression tests.
> However, patch misses regression tests covering added functionality.
>

I am not sure there are any tests related to the output of \dt+ commands -
their result is platform-dependent.


> Patch is definitely harmless, i.e. it doesn't affect anybody who doesn't
> use new functionality.
> But I still would prefer ordering to be options of \d* commands while psql
> variables be defaults for those options...
>

I understand

a) I don't think so commands like \dt++ (or similar) is good idea - these
commands should be simple how it is possible

b) this patch doesn't block any other design - more it opens the door
because the executive part will be implemented and users can have a
experience with with different output sorts - so if people will need more
quick change of result sort, then the work in this area will continue.

Regards

Pavel







> --
> Alexander Korotkov
> Postgres Professional: http://www.postgrespro.com
> The Russian Postgres Company
>

