Re: \COPY command and indexes in tables

2020-11-18 Thread Laurenz Albe
On Thu, 2020-11-19 at 08:38 +0100, Matthias Apitz wrote:
> We load large tables in some customer installation (some millions of rows)
> from file with:
> 
> TRUNCATE TABLE tableName ;
> \COPY tableName FROM 'fileName' WITH ( DELIMITER '|' )
> 
> and noticed that the loading nearly stops (without any real CPU
> consumption) in the middle. Our wild guess is that we forgot to DROP the
> indexes on the tables.

If it does not consume CPU, it must be stalled somehow.

Are there any wait events in "pg_stat_activity"?
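
For example, a query along these lines should show them (the wait_event
columns exist since 9.6; adjust to your version):

    SELECT pid, state, wait_event_type, wait_event, query
    FROM pg_stat_activity
    WHERE state <> 'idle';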

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





\COPY command and indexes in tables

2020-11-18 Thread Matthias Apitz


Hello,

We load large tables in some customer installation (some millions of rows)
from file with:

TRUNCATE TABLE tableName ;
\COPY tableName FROM 'fileName' WITH ( DELIMITER '|' )

and noticed that the loading nearly stops (without any real CPU
consumption) in the middle. Our wild guess is that we forgot to DROP the
indexes on the tables. The doc https://www.postgresql.org/docs/13/sql-copy.html
does not give any hints related to indexes.
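
If missing index drops turn out to be the cause, I suppose the workaround
would be something like the following (index name and column are invented
here, just to illustrate):

    DROP INDEX IF EXISTS tablename_somecol_idx;
    TRUNCATE TABLE tableName ;
    \COPY tableName FROM 'fileName' WITH ( DELIMITER '|' )
    CREATE INDEX tablename_somecol_idx ON tableName (somecol);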

There even seem to be tools available which address this issue on the
fly, like https://www.californiacivicdata.org/2018/01/25/index-drop-and-copy/

Any comments on this?

matthias

-- 
Matthias Apitz, ✉ g...@unixarea.de, http://www.unixarea.de/ +49-176-38902045
Public GnuPG key: http://www.unixarea.de/key.pub
Без книги нет знания, без знания нет коммунизма (Влaдимир Ильич Ленин)
Without books no knowledge - without knowledge no communism (Vladimir Ilyich 
Lenin)
Sin libros no hay saber - sin saber no hay comunismo. (Vladimir Ilich Lenin)




Re: pg_upgrade from 12 to 13 fails with plpython2

2020-11-18 Thread Tom Lane
Bruce Momjian  writes:
> It didn't trigger this message for him, and I am also wondering if it
> should have.

The extra functions in this case were in pg_catalog, not public,
so there is exactly no part of that error message that is applicable.

In any case, that seems an overly specific solution.  The generic
problem is how to usefully identify some functions that have dangling
probin pointers.  I don't want a solution that only works for the
plpython functions.

regards, tom lane




Re: create type with %type or %rowtype

2020-11-18 Thread Paul Förster
Hi,

> On 18. Nov, 2020, at 22:08, Post Gresql  wrote:
> 
> I might be stupid, but where in the document for create function does it say 
> that the return type can be a table?
> 
> From the doc for version 13 
> https://www.postgresql.org/docs/13/sql-createfunction.html
> 
> "rettype
> The return data type (optionally schema-qualified). The return type can be a 
> base, composite, or domain type, or can reference the type of a table column."

right in the syntax:

CREATE [ OR REPLACE ] FUNCTION
    name ( [ [ argmode ] [ argname ] argtype [ { DEFAULT | = } default_expr ] [, ...] ] )
    [ RETURNS rettype
      | RETURNS TABLE ( column_name column_type [, ...] ) ]

"RETURNS TABLE(...)" is probably what you're looking for?

Cheers,
Paul



Re: pg_upgrade from 12 to 13 fails with plpython2

2020-11-18 Thread Bruce Momjian
On Wed, Nov 18, 2020 at 10:57:00PM -0700, Rob Sargent wrote:
> > It issues this message and fails:
> > 
> >    if (PQntuples(res) > 0)
> >    {
> >        if (!found_public_plpython_handler)
> >        {
> >            pg_log(PG_WARNING,
> >                   "\nThe old cluster has a \"plpython_call_handler\" function defined\n"
> >                   "in the \"public\" schema which is a duplicate of the one defined\n"
> >                   "in the \"pg_catalog\" schema.  You can confirm this by executing\n"
> >                   "in psql:\n"
> >                   "\n"
> >                   "\\df *.plpython_call_handler\n"
> >                   "\n"
> >                   "The \"public\" schema version of this function was created by a\n"
> >                   "pre-8.1 install of plpython, and must be removed for pg_upgrade\n"
> >                   "to complete because it references a now-obsolete \"plpython\"\n"
> >                   "shared object file.  You can remove the \"public\" schema version\n"
> >                   "of this function by running the following command:\n"
> >                   "\n"
> >                   "DROP FUNCTION public.plpython_call_handler()\n"
> >                   "\n"
> >                   "in each affected database:\n"
> >                   "\n");
> >        }
> >        pg_log(PG_WARNING, "%s\n", active_db->db_name);
> >        found_public_plpython_handler = true;
> >    }
> >    PQclear(res);
> >    }
> > 
> >    PQfinish(conn);
> >    }
> > 
> >    if (found_public_plpython_handler)
> >        pg_fatal("Remove the problem functions from the old cluster to continue.\n");
> > 
> > 
> Does this jibe with OP’s experience?  Or should it?

It didn't trigger this message for him, and I am also wondering if it
should have.

-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

  The usefulness of a cup is in its emptiness, Bruce Lee





Re: pg_upgrade from 12 to 13 fails with plpython2

2020-11-18 Thread Rob Sargent




> On Nov 18, 2020, at 9:39 PM, Bruce Momjian  wrote:
> 
> On Wed, Nov 18, 2020 at 09:35:20PM -0700, Rob Sargent wrote:
>>> pg_upgrade does have some code to handle plpython call handlers in
>>> function.c:get_loadable_libraries();
>>> 
>>>* Systems that install plpython before 8.1 have
>>>* plpython_call_handler() defined in the "public" schema, causing
>>>* pg_dump to dump it.  However that function still references
>>>* "plpython" (no "2"), so it throws an error on restore.  This code
>>>* checks for the problem function, reports affected databases to the
>>>* user and explains how to remove them. 8.1 git commit:
>>>* e0dedd0559f005d60c69c9772163e69c204bac69
>>>* http://archives.postgresql.org/pgsql-hackers/2012-03/msg01101.php
>>>* http://archives.postgresql.org/pgsql-bugs/2012-05/msg00206.php
>>> 
>>> -- 
>>> Bruce Momjian  https://momjian.us
>>> EnterpriseDB https://enterprisedb.com
>>> 
>>> The usefulness of a cup is in its emptiness, Bruce Lee
>>> 
>>> 
>> Unless it stops at prompts for “Yes, ok, I understand.”, this could easily fly 
>> out the window.
> 
> It issues this message and fails:
> 
>    if (PQntuples(res) > 0)
>    {
>        if (!found_public_plpython_handler)
>        {
>            pg_log(PG_WARNING,
>                   "\nThe old cluster has a \"plpython_call_handler\" function defined\n"
>                   "in the \"public\" schema which is a duplicate of the one defined\n"
>                   "in the \"pg_catalog\" schema.  You can confirm this by executing\n"
>                   "in psql:\n"
>                   "\n"
>                   "\\df *.plpython_call_handler\n"
>                   "\n"
>                   "The \"public\" schema version of this function was created by a\n"
>                   "pre-8.1 install of plpython, and must be removed for pg_upgrade\n"
>                   "to complete because it references a now-obsolete \"plpython\"\n"
>                   "shared object file.  You can remove the \"public\" schema version\n"
>                   "of this function by running the following command:\n"
>                   "\n"
>                   "DROP FUNCTION public.plpython_call_handler()\n"
>                   "\n"
>                   "in each affected database:\n"
>                   "\n");
>        }
>        pg_log(PG_WARNING, "%s\n", active_db->db_name);
>        found_public_plpython_handler = true;
>    }
>    PQclear(res);
>    }
> 
>    PQfinish(conn);
>    }
> 
>    if (found_public_plpython_handler)
>        pg_fatal("Remove the problem functions from the old cluster to continue.\n");
> 
> 
Does this jibe with OP’s experience?  Or should it?



CreateProcess call failed: A blocking operation was interrupted by a call to WSACancelBlockingCall

2020-11-18 Thread 江川潔
Hi,

I'm not sure how to find the cause or how to reproduce it.
Any suggestion will be appreciated.
PostgreSQL 11.7
postgresql-42.2.12.jar
Windows Server 2016

Thanks,
Kiyoshi

2020-11-13 21:11:51.535 JST [3652] LOG:  CreateProcess call failed: A blocking operation was interrupted by a call to WSACancelBlockingCall. (error code 1455)
2020-11-13 21:11:51.536 JST [3652] LOG:  could not fork autovacuum worker process: No error
2020-11-13 21:11:52.548 JST [3652] LOG:  CreateProcess call failed: A blocking operation was interrupted by a call to WSACancelBlockingCall. (error code 1455)
2020-11-13 21:11:52.548 JST [3652] LOG:  could not fork autovacuum worker process: No error
2020-11-13 21:11:53.562 JST [3652] LOG:  CreateProcess call failed: A blocking operation was interrupted by a call to WSACancelBlockingCall. (error code 1455)
2020-11-13 21:11:53.562 JST [3652] LOG:  could not fork autovacuum worker process: No error
2020-11-13 21:11:54.578 JST [3652] LOG:  CreateProcess call failed: A blocking operation was interrupted by a call to WSACancelBlockingCall. (error code 1455)
2020-11-13 21:11:54.578 JST [3652] LOG:  could not fork autovacuum worker process: No error
2020-11-13 21:11:55.578 JST [3652] LOG:  CreateProcess call failed: A blocking operation was interrupted by a call to WSACancelBlockingCall. (error code 1455)
2020-11-13 21:11:55.578 JST [3652] LOG:  could not fork autovacuum worker process: No error
2020-11-13 21:11:55.957 JST [3652] LOG:  CreateProcess call failed: A blocking operation was interrupted by a call to WSACancelBlockingCall. (error code 1455)
2020-11-13 21:11:55.957 JST [3652] LOG:  could not fork new process for connection: No error
2020-11-13 21:11:55.958 JST [3652] LOG:  CreateProcess call failed: No error (error code 1455)
2020-11-13 21:11:55.958 JST [3652] LOG:  could not fork new process for connection: No error
2020-11-13 21:11:55.967 JST [3652] LOG:  CreateProcess call failed: No error (error code 1455)
2020-11-13 21:11:55.967 JST [3652] LOG:  could not fork new process for connection: No error
2020-11-13 21:11:55.968 JST [3652] LOG:  CreateProcess call failed: No error (error code 1455)
2020-11-13 21:11:55.968 JST [3652] LOG:  could not fork new process for connection: No error
2020-11-13 21:11:55.974 JST [3652] LOG:  CreateProcess call failed: No error (error code 1455)
2020-11-13 21:11:55.974 JST [3652] LOG:  could not fork new process for connection: No error
2020-11-13 21:11:55.975 JST [3652] LOG:  CreateProcess call failed: No error (error code 1455)
2020-11-13 21:11:55.975 JST [3652] LOG:  could not fork new process for connection: No error
2020-11-13 21:11:55.979 JST [3652] LOG:  CreateProcess call failed: No error (error code 1455)
2020-11-13 21:11:55.979 JST [3652] LOG:  could not fork new process for connection: No error
2020-11-13 21:11:55.980 JST [3652] LOG:  CreateProcess call failed: No error (error code 1455)
2020-11-13 21:11:55.980 JST [3652] LOG:  could not fork new process for connection: No error
2020-11-13 21:11:56.594 JST [3652] LOG:  CreateProcess call failed: A blocking operation was interrupted by a call to WSACancelBlockingCall. (error code 1455)
2020-11-13 21:11:56.594 JST [3652] LOG:  could not fork autovacuum worker process: No error
2020-11-13 21:11:57.115 JST [3652] LOG:  CreateProcess call failed: A blocking operation was interrupted by a call to WSACancelBlockingCall. (error code 1450)
2020-11-13 21:11:57.115 JST [3652] LOG:  could not fork new process for connection: No error
2020-11-13 21:11:57.116 JST [3652] LOG:  could not map backend parameter memory: error code 6
2020-11-13 21:11:57.116 JST [3652] LOG:  could not fork new process for connection: No error
2020-11-13 22:08:16.028 JST [17660] LOG:  could not receive data from client: An existing connection was forcibly closed by the remote host.
2020-11-13 22:08:16.028 JST [12560] LOG:  could not receive data from client: An existing connection was forcibly closed by the remote host.


Re: pg_upgrade from 12 to 13 fails with plpython2

2020-11-18 Thread Bruce Momjian
On Wed, Nov 18, 2020 at 09:35:20PM -0700, Rob Sargent wrote:
> > pg_upgrade does have some code to handle plpython call handlers in
> > function.c:get_loadable_libraries();
> > 
> > * Systems that install plpython before 8.1 have
> > * plpython_call_handler() defined in the "public" schema, causing
> > * pg_dump to dump it.  However that function still references
> > * "plpython" (no "2"), so it throws an error on restore.  This code
> > * checks for the problem function, reports affected databases to the
> > * user and explains how to remove them. 8.1 git commit:
> > * e0dedd0559f005d60c69c9772163e69c204bac69
> > * http://archives.postgresql.org/pgsql-hackers/2012-03/msg01101.php
> > * http://archives.postgresql.org/pgsql-bugs/2012-05/msg00206.php
> > 
> > -- 
> >  Bruce Momjian  https://momjian.us
> >  EnterpriseDB https://enterprisedb.com
> > 
> >  The usefulness of a cup is in its emptiness, Bruce Lee
> > 
> > 
> Unless it stops at prompts for “Yes, ok, I understand.”, this could easily fly 
> out the window.

It issues this message and fails:

    if (PQntuples(res) > 0)
    {
        if (!found_public_plpython_handler)
        {
            pg_log(PG_WARNING,
                   "\nThe old cluster has a \"plpython_call_handler\" function defined\n"
                   "in the \"public\" schema which is a duplicate of the one defined\n"
                   "in the \"pg_catalog\" schema.  You can confirm this by executing\n"
                   "in psql:\n"
                   "\n"
                   "\\df *.plpython_call_handler\n"
                   "\n"
                   "The \"public\" schema version of this function was created by a\n"
                   "pre-8.1 install of plpython, and must be removed for pg_upgrade\n"
                   "to complete because it references a now-obsolete \"plpython\"\n"
                   "shared object file.  You can remove the \"public\" schema version\n"
                   "of this function by running the following command:\n"
                   "\n"
                   "DROP FUNCTION public.plpython_call_handler()\n"
                   "\n"
                   "in each affected database:\n"
                   "\n");
        }
        pg_log(PG_WARNING, "%s\n", active_db->db_name);
        found_public_plpython_handler = true;
    }
    PQclear(res);
    }

    PQfinish(conn);
    }

    if (found_public_plpython_handler)
        pg_fatal("Remove the problem functions from the old cluster to continue.\n");


-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

  The usefulness of a cup is in its emptiness, Bruce Lee





Re: pg_upgrade from 12 to 13 fails with plpython2

2020-11-18 Thread Rob Sargent



> On Nov 18, 2020, at 9:29 PM, Bruce Momjian  wrote:
> 
> On Wed, Nov 18, 2020 at 10:06:17PM +, Devrim Gunduz wrote:
>> Hi,
>> 
>> On Wed, 2020-11-18 at 14:54 -0500, Tom Lane wrote:
>>> Uh-huh, so there you have it.  These must be leftovers from some
>>> pre-extension incarnation of plpython that was never cleaned up
>>> properly. 
>> 
>> I think this was broken when Marcin first dropped the language. We
>> should just have dropped the extension, I guess.
> 
> pg_upgrade does have some code to handle plpython call handlers in
> function.c:get_loadable_libraries();
> 
> * Systems that install plpython before 8.1 have
> * plpython_call_handler() defined in the "public" schema, causing
> * pg_dump to dump it.  However that function still references
> * "plpython" (no "2"), so it throws an error on restore.  This code
> * checks for the problem function, reports affected databases to the
> * user and explains how to remove them. 8.1 git commit:
> * e0dedd0559f005d60c69c9772163e69c204bac69
> * http://archives.postgresql.org/pgsql-hackers/2012-03/msg01101.php
> * http://archives.postgresql.org/pgsql-bugs/2012-05/msg00206.php
> 
> -- 
>  Bruce Momjian  https://momjian.us
>  EnterpriseDB https://enterprisedb.com
> 
>  The usefulness of a cup is in its emptiness, Bruce Lee
> 
> 
Unless it stops at prompts for “Yes, ok, I understand.”, this could easily fly 
out the window.

> 





Re: pg_upgrade from 12 to 13 fails with plpython2

2020-11-18 Thread Bruce Momjian
On Wed, Nov 18, 2020 at 10:06:17PM +, Devrim Gunduz wrote:
> Hi,
> 
> On Wed, 2020-11-18 at 14:54 -0500, Tom Lane wrote:
> > Uh-huh, so there you have it.  These must be leftovers from some
> > pre-extension incarnation of plpython that was never cleaned up
> > properly. 
> 
> I think this was broken when Marcin first dropped the language. We
> should just have dropped the extension, I guess.

pg_upgrade does have some code to handle plpython call handlers in
function.c:get_loadable_libraries();

 * Systems that install plpython before 8.1 have
 * plpython_call_handler() defined in the "public" schema, causing
 * pg_dump to dump it.  However that function still references
 * "plpython" (no "2"), so it throws an error on restore.  This code
 * checks for the problem function, reports affected databases to the
 * user and explains how to remove them. 8.1 git commit:
 * e0dedd0559f005d60c69c9772163e69c204bac69
 * http://archives.postgresql.org/pgsql-hackers/2012-03/msg01101.php
 * http://archives.postgresql.org/pgsql-bugs/2012-05/msg00206.php

-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

  The usefulness of a cup is in its emptiness, Bruce Lee





Re: create type with %type or %rowtype

2020-11-18 Thread Adrian Klaver

On 11/18/20 1:08 PM, Post Gresql wrote:


On 2020-11-18 17:07, Adrian Klaver wrote:


\d cell_per
            Foreign table "public.cell_per"
  Column  |       Type        | Collation | Nullable | Default | FDW options
----------+-------------------+-----------+----------+---------+-------------
 category | character varying |           |          |         |
 cell_per | integer           |           |          |         |
Server: test_db

CREATE OR REPLACE FUNCTION public.type_test()
 RETURNS cell_per
 LANGUAGE plpgsql
AS $function$
DECLARE
    cp_type cell_per;
BEGIN
    SELECT INTO cp_type * from cell_per limit 1;
    RETURN cp_type;
END;
$function$

select * from type_test();
  category  | cell_per
------------+----------
 H PREM 3.5 |       18

You can change the RETURNS to RETURNS SETOF and return multiple rows.



I might be stupid, but where in the document for create function does it 
say that the return type can be a table?


It doesn't, but the above is not returning a table, it is returning a 
(composite) type.


And earlier in this thread, my comment:

"To me that is redundant as a table has a composite type already."

and from the %ROWTYPE portion of the plpgsql section:

https://www.postgresql.org/docs/12/plpgsql-declarations.html#PLPGSQL-DECLARATION-ROWTYPES

"(Since every table has an associated composite type of the same name, 
it actually does not matter in PostgreSQL whether you write %ROWTYPE or 
not. But the form with %ROWTYPE is more portable.)"





 From the doc for version 13 
https://www.postgresql.org/docs/13/sql-createfunction.html


/|"rettype|/

The return data type (optionally schema-qualified). The return type
can be a base, composite, or domain type, or can reference the type
of a table column."







--
Adrian Klaver
adrian.kla...@aklaver.com




Re: create type with %type or %rowtype

2020-11-18 Thread David G. Johnston
On Wednesday, November 18, 2020, Post Gresql  wrote:

>
> On 2020-11-18 17:07, Adrian Klaver wrote:
>
>
> \d cell_per
>             Foreign table "public.cell_per"
>   Column  |       Type        | Collation | Nullable | Default | FDW options
> ----------+-------------------+-----------+----------+---------+-------------
>  category | character varying |           |          |         |
>  cell_per | integer           |           |          |         |
> Server: test_db
>
> CREATE OR REPLACE FUNCTION public.type_test()
>  RETURNS cell_per
>  LANGUAGE plpgsql
> AS $function$
> DECLARE
> cp_type cell_per;
> BEGIN
> SELECT INTO cp_type * from cell_per limit 1;
> RETURN cp_type;
> END;
> $function$
>
> select * from type_test();
>   category  | cell_per
> ------------+----------
>  H PREM 3.5 |       18
>
> You can change the RETURNS to RETURNS SETOF and return multiple rows.
>
>
> I might be stupid, but where in the document for create function does it
> say that the return type can be a table?
>
> From the doc for version 13 https://www.postgresql.org/
> docs/13/sql-createfunction.html
> *"rettype*
>
> The return data type (optionally schema-qualified). The return type can be
> a base, composite, or domain type, or can reference the type of a table
> column."
>
>

The word “composite”.  Every table has an associated composite type of the
same name.

David J.


Re: psql backward compatibility

2020-11-18 Thread Alvaro Herrera
On 2020-Nov-18, Stephen Haddock wrote:

> Hello,
> 
> When upgrading an older version of postgres, version 8.4 for example, to a
> newer version such as 9.6, does the data have to be migrated immediately?

As others have said: yes.

> It looks like the recommended method is to dump the data, upgrade,
> initialize a new cluster, and then restore the dumped data into the newer
> version.

Actually, you can also use pg_upgrade, which might be more convenient,
particularly if your database is large.
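
Something like this (binary and data directory paths are only examples,
adjust them to your installation layout):

    pg_upgrade -b /usr/lib/postgresql/8.4/bin -B /usr/lib/postgresql/9.6/bin \
               -d /var/lib/postgresql/8.4/main -D /var/lib/postgresql/9.6/main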




Re: pg_upgrade from 12 to 13 fails with plpython2

2020-11-18 Thread Devrim Gündüz
Hi,

On Wed, 2020-11-18 at 14:54 -0500, Tom Lane wrote:
> Uh-huh, so there you have it.  These must be leftovers from some
> pre-extension incarnation of plpython that was never cleaned up
> properly. 

I think this was broken when Marcin first dropped the language. We
should just have dropped the extension, I guess.

Regards,

-- 
Devrim Gündüz
Open Source Solution Architect, Red Hat Certified Engineer
Twitter: @DevrimGunduz , @DevrimGunduzTR


signature.asc
Description: This is a digitally signed message part


Re: pg_upgrade from 12 to 13 fails with plpython2

2020-11-18 Thread Tom Lane
Bruce Momjian  writes:
> On Wed, Nov 18, 2020 at 03:25:56PM -0500, Tom Lane wrote:
>> Maybe pg_upgrade should print the actual function names, not just the
>> probin values.

> It is done that way so we don't overwhelm them with lots of function
> names, and they can focus on the library.  I was thinking of showing
> them a query that would allow them to investigate.

These days "focus on the extension" would be more helpful.  Maybe
we could check to see if all the functions referencing a given .so
belong to the same extension, and if so complain about that extension?

I think there's a reasonable argument that functions that don't
belong to any known extension should be printed individually,
because more than likely the user's gonna have to clean them up
manually, as we saw here.
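
Something like this catalog lookup is what I have in mind (untested sketch):

    SELECT p.oid::regprocedure AS func, e.extname
    FROM pg_proc p
    LEFT JOIN pg_depend d ON d.classid = 'pg_proc'::regclass
                         AND d.objid = p.oid
                         AND d.refclassid = 'pg_extension'::regclass
                         AND d.deptype = 'e'
    LEFT JOIN pg_extension e ON e.oid = d.refobjid
    WHERE p.probin = '$libdir/plpython2';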

regards, tom lane




Re: create type with %type or %rowtype

2020-11-18 Thread Post Gresql


On 2020-11-18 17:07, Adrian Klaver wrote:


\d cell_per
            Foreign table "public.cell_per"
  Column  |       Type        | Collation | Nullable | Default | FDW options
----------+-------------------+-----------+----------+---------+-------------
 category | character varying |           |          |         |
 cell_per | integer           |           |          |         |
Server: test_db

CREATE OR REPLACE FUNCTION public.type_test()
 RETURNS cell_per
 LANGUAGE plpgsql
AS $function$
DECLARE
    cp_type cell_per;
BEGIN
    SELECT INTO cp_type * from cell_per limit 1;
    RETURN cp_type;
END;
$function$

select * from type_test();
  category  | cell_per
------------+----------
 H PREM 3.5 |       18

You can change the RETURNS to RETURNS SETOF and return multiple rows.



I might be stupid, but where in the document for create function does it 
say that the return type can be a table?


From the doc for version 13 
https://www.postgresql.org/docs/13/sql-createfunction.html


/|"rettype|/

   The return data type (optionally schema-qualified). The return type
   can be a base, composite, or domain type, or can reference the type
   of a table column."




Re: pg_upgrade from 12 to 13 fails with plpython2

2020-11-18 Thread Bruce Momjian
On Wed, Nov 18, 2020 at 03:25:56PM -0500, Tom Lane wrote:
> Bruce Momjian  writes:
> > I think one big problem is that when pg_upgrade fails in this way, users
> > are required to do some complex system catalog queries to diagnose the
> > cause.  Is there a way to simplify this for them?
> 
> Maybe pg_upgrade should print the actual function names, not just the
> probin values.

It is done that way so we don't overwhelm them with lots of function
names, and they can focus on the library.  I was thinking of showing
them a query that would allow them to investigate.
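
Something along these lines, perhaps (just a sketch):

    SELECT proname, probin
    FROM pg_proc
    WHERE probin LIKE '%plpython2%';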

-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

  The usefulness of a cup is in its emptiness, Bruce Lee





Re: pg_upgrade from 12 to 13 fails with plpython2

2020-11-18 Thread Tom Lane
Bruce Momjian  writes:
> I think one big problem is that when pg_upgrade fails in this way, users
> are required to do some complex system catalog queries to diagnose the
> cause.  Is there a way to simplify this for them?

Maybe pg_upgrade should print the actual function names, not just the
probin values.

regards, tom lane




Re: pg_upgrade from 12 to 13 fails with plpython2

2020-11-18 Thread Bruce Momjian
On Wed, Nov 18, 2020 at 08:59:58PM +0100, Marcin Giedz wrote:
> 
> > anyway got this from your query:
> 
> > 16402 | plpython_call_handler | 11 | 10 | 13 | 1 | 0 | 0 | - | f | f | f | f
> | f | v | u | 0 | 0 | 2280 | | | | | | | plpython_call_handler | $libdir/
> plpython2 | |
> > 16403 | plpython_inline_handler | 11 | 10 | 13 | 1 | 0 | 0 | - | f | f | f |
> t | f | v | u | 1 | 0 | 2278 | 2281 | | | | | | plpython_inline_handler |
> $libdir/plpython2 | |
> > 16404 | plpython_validator | 11 | 10 | 13 | 1 | 0 | 0 | - | f | f | f | t | 
> > f
> | v | u | 1 | 0 | 2278 | 26 | | | | | | plpython_validator | $libdir/plpython2
> | |
> 
> Uh-huh, so there you have it.  These must be leftovers from some
> pre-extension incarnation of plpython that was never cleaned up
> properly.  Try
> 
> DROP FUNCTION pg_catalog.plpython_call_handler();
> DROP FUNCTION pg_catalog.plpython_inline_handler(internal);
> DROP FUNCTION pg_catalog.plpython_validator(oid);
> 
> It'll be interesting to see if there are any dependencies.
> 
> regards, tom lane
> 
> -
> 
> BINGO! after drops all went smooth and easy

I think one big problem is that when pg_upgrade fails in this way, users
are required to do some complex system catalog queries to diagnose the
cause.  Is there a way to simplify this for them?

-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

  The usefulness of a cup is in its emptiness, Bruce Lee





Re: pg_upgrade from 12 to 13 fails with plpython2

2020-11-18 Thread Marcin Giedz

> anyway got this from your query: 

> 16402 | plpython_call_handler | 11 | 10 | 13 | 1 | 0 | 0 | - | f | f | f | f 
> | f | v | u | 0 | 0 | 2280 | | | | | | | plpython_call_handler | 
> $libdir/plpython2 | | 
> 16403 | plpython_inline_handler | 11 | 10 | 13 | 1 | 0 | 0 | - | f | f | f | 
> t | f | v | u | 1 | 0 | 2278 | 2281 | | | | | | plpython_inline_handler | 
> $libdir/plpython2 | | 
> 16404 | plpython_validator | 11 | 10 | 13 | 1 | 0 | 0 | - | f | f | f | t | f 
> | v | u | 1 | 0 | 2278 | 26 | | | | | | plpython_validator | 
> $libdir/plpython2 | | 

Uh-huh, so there you have it. These must be leftovers from some 
pre-extension incarnation of plpython that was never cleaned up 
properly. Try 

DROP FUNCTION pg_catalog.plpython_call_handler(); 
DROP FUNCTION pg_catalog.plpython_inline_handler(internal); 
DROP FUNCTION pg_catalog.plpython_validator(oid); 

It'll be interesting to see if there are any dependencies. 

regards, tom lane 

- 

BINGO! after drops all went smooth and easy 


Many thx Tom! 

M. 



Re: pg_upgrade from 12 to 13 fails with plpython2

2020-11-18 Thread Tom Lane
Marcin Giedz  writes:
> anyway got this from your query: 

> 16402 | plpython_call_handler | 11 | 10 | 13 | 1 | 0 | 0 | - | f | f | f | f 
> | f | v | u | 0 | 0 | 2280 | | | | | | | plpython_call_handler | 
> $libdir/plpython2 | | 
> 16403 | plpython_inline_handler | 11 | 10 | 13 | 1 | 0 | 0 | - | f | f | f | 
> t | f | v | u | 1 | 0 | 2278 | 2281 | | | | | | plpython_inline_handler | 
> $libdir/plpython2 | | 
> 16404 | plpython_validator | 11 | 10 | 13 | 1 | 0 | 0 | - | f | f | f | t | f 
> | v | u | 1 | 0 | 2278 | 26 | | | | | | plpython_validator | 
> $libdir/plpython2 | | 

Uh-huh, so there you have it.  These must be leftovers from some
pre-extension incarnation of plpython that was never cleaned up
properly.  Try

DROP FUNCTION pg_catalog.plpython_call_handler();
DROP FUNCTION pg_catalog.plpython_inline_handler(internal);
DROP FUNCTION pg_catalog.plpython_validator(oid);

It'll be interesting to see if there are any dependencies.

regards, tom lane




Re: pg_upgrade from 12 to 13 fails with plpython2

2020-11-18 Thread Marcin Giedz

[ please don't top-post, it makes conversations unreadable ] 

Marcin Giedz  writes: 
> so look at this: 
> postgres=# drop extension plpython; 
> ERROR: extension "plpython" does not exist 
> postgres=# drop extension plpythonu; 
> ERROR: extension "plpythonu" does not exist 
> postgres=# drop extension plpython2u; 
> ERROR: extension "plpython2u" does not exist 

Well, the pg_upgrade failure clearly shows that you've got some 
functions referencing plpython2. Maybe they're "loose" instead 
of being bound into an extension --- that's quite possible if 
this database has been brought forward from some pre-9.1 state. 
Try looking in each database with 

select * from pg_proc where probin like '%python2%'; 

regards, tom lane 


 

sorry for top-posting, not a day-to-day habit in our env ;) 

anyway got this from your query: 


oid | proname | pronamespace | proowner | prolang | procost | prorows | 
provariadic | prosupport | prokind | prosecdef | proleakproof | proisstrict | 
proretset | provolatile | proparallel | pronargs | pronargdefaults | prorettype 
| proargtypes | proallargtypes | proargmodes | proargnames | proargdefaults | 
protrftypes | prosrc | probin | proconfig | proacl 
---+-+--+--+-+-+-+-++-+---+--+-+---+-+-+--+-++-++-+-++-+-+---+---+
 
16402 | plpython_call_handler | 11 | 10 | 13 | 1 | 0 | 0 | - | f | f | f | f | 
f | v | u | 0 | 0 | 2280 | | | | | | | plpython_call_handler | 
$libdir/plpython2 | | 
16403 | plpython_inline_handler | 11 | 10 | 13 | 1 | 0 | 0 | - | f | f | f | t 
| f | v | u | 1 | 0 | 2278 | 2281 | | | | | | plpython_inline_handler | 
$libdir/plpython2 | | 
16404 | plpython_validator | 11 | 10 | 13 | 1 | 0 | 0 | - | f | f | f | t | f | 
v | u | 1 | 0 | 2278 | 26 | | | | | | plpython_validator | $libdir/plpython2 | 
| 
(3 rows) 

(END) 

this means nothing to me, I'm afraid, but I hope it gives you a hint ;) 


Many thx 
M. 





Re: pg_upgrade from 12 to 13 fails with plpython2

2020-11-18 Thread Tom Lane
[ please don't top-post, it makes conversations unreadable ]

Marcin Giedz  writes:
> so look at this: 
> postgres=# drop extension plpython; 
> ERROR: extension "plpython" does not exist 
> postgres=# drop extension plpythonu; 
> ERROR: extension "plpythonu" does not exist 
> postgres=# drop extension plpython2u; 
> ERROR: extension "plpython2u" does not exist 

Well, the pg_upgrade failure clearly shows that you've got some
functions referencing plpython2.  Maybe they're "loose" instead
of being bound into an extension --- that's quite possible if
this database has been brought forward from some pre-9.1 state.
Try looking in each database with

select * from pg_proc where probin like '%python2%';

regards, tom lane




Re: pg_upgrade from 12 to 13 fails with plpython2

2020-11-18 Thread Marcin Giedz
so look at this: 


marcin@carbon2:~$ psql -U pgsql -h 192.168.89.64 postgres 
psql (12.4 (Ubuntu 12.4-1), server 12.5) 
Type "help" for help. 

postgres=# drop extension plpython; 
ERROR: extension "plpython" does not exist 
postgres=# drop extension plpythonu; 
ERROR: extension "plpythonu" does not exist 
postgres=# drop extension plpython2u; 
ERROR: extension "plpython2u" does not exist 
postgres=# \q 
marcin@carbon2:~$ psql -U pgsql -h 192.168.89.64 template1 
psql (12.4 (Ubuntu 12.4-1), server 12.5) 
Type "help" for help. 

template1=# drop extension plpython2u; 
ERROR: extension "plpython2u" does not exist 
template1=# drop extension plpythonu; 
ERROR: extension "plpythonu" does not exist 
template1=# drop extension plpython; 
ERROR: extension "plpython" does not exist 
template1=# \q 
marcin@carbon2:~$ psql -U pgsql -h 192.168.89.64 argosrm 
psql (12.4 (Ubuntu 12.4-1), server 12.5) 
Type "help" for help. 

argosrm=# drop extension plpython; 
ERROR: extension "plpython" does not exist 
argosrm=# drop extension plpythonu; 
ERROR: extension "plpythonu" does not exist 
argosrm=# drop extension plpython2u; 
ERROR: extension "plpython2u" does not exist 



Od: "Tom Lane"  
Do: "Marcin Giedz"  
DW: "Laurenz Albe" , "Magnus Hagander" 
, "Adrian Klaver" , "Devrim 
Gündüz" , "pgsql-general" 
 
Wysłane: środa, 18 listopad 2020 19:08:25 
Temat: Re: pg_upgrade from 12 to 13 failes with plpython2 

Marcin Giedz  writes: 
> all DBs checked and no plpython(2u) is found except for plpython3u 

I think you also need to make sure you've dropped the plpythonu 
and plpython2u extensions in every database. 

regards, tom lane 




Re: pg_upgrade from 12 to 13 fails with plpython2

2020-11-18 Thread Tom Lane
Marcin Giedz  writes:
> all DBs checked and no plpython(2u) is found except for plpython3u 

I think you also need to make sure you've dropped the plpythonu
and plpython2u extensions in every database.

regards, tom lane




Re: pg_upgrade from 12 to 13 fails with plpython2

2020-11-18 Thread Marcin Giedz
Hi, not the case, I believe: 

postgres=# \l 
                                List of databases
   Name    | Owner | Encoding |   Collate   |    Ctype    | Access privileges
-----------+-------+----------+-------------+-------------+-------------------
 argosrm   | pgsql | UTF8     | pl_PL.UTF-8 | pl_PL.UTF-8 |
 postgres  | pgsql | UTF8     | pl_PL.UTF-8 | pl_PL.UTF-8 |
 template0 | pgsql | UTF8     | pl_PL.UTF-8 | pl_PL.UTF-8 | =c/pgsql         +
           |       |          |             |             | pgsql=CTc/pgsql
 template1 | pgsql | UTF8     | pl_PL.UTF-8 | pl_PL.UTF-8 | =c/pgsql         +
           |       |          |             |             | pgsql=CTc/pgsql
(4 rows) 

all DBs checked and no plpython(2u) is found except for plpython3u 

... 

cat loadable_libraries.txt 
could not load library "$libdir/plpython2": ERROR: could not access file 
"$libdir/plpython2": No such file or directory 
In database: argosrm 
In database: template1 

Thx 
M. 



Od: "Laurenz Albe"  
Do: "Marcin Giedz" , "Magnus Hagander" 
 
DW: "Adrian Klaver" , "Tom Lane" 
, "Devrim Gündüz" , "pgsql-general" 
 
Wysłane: środa, 18 listopad 2020 12:58:45 
Temat: Re: pg_upgrade from 12 to 13 failes with plpython2 

On Wed, 2020-11-18 at 11:05 +0100, Marcin Giedz wrote: 
> right, I had one function relaying on plpython2u so I changed it... but the 
> again pg_upgrade claims error with python: 
> 
> cat loadable_libraries.txt 
> could not load library "$libdir/plpython2": ERROR: could not access file 
> "$libdir/plpython2": No such file or directory 
> In database: argosrm 
> In database: template1 

The problematic function is perhaps in another database. 
Look everywhere. 

Yours, 
Laurenz Albe 
-- 
Cybertec | https://www.cybertec-postgresql.com 




Re: How to select values in a JSON type of column?

2020-11-18 Thread Thomas Kellerer

Snjezana Frketic schrieb am 18.11.2020 um 17:00:

I actually have version 9.3.17 


On Wed, 18 Nov 2020 at 15:55, Thomas Kellerer  wrote:

Snjezana Frketic schrieb am 18.11.2020 um 11:29:
 > I have a column called targeting in a table called campaigns.
 > [...]
 > and I need to select all the ids in includes.
 > Currently, I am doing it like this
 >
 > SELECT targeting#>'{targets,0,audienceSegments,0,includes,0,segments,allOf,0,ids}' FROM campaigns;
 >

If you are on Postgres 12 or later, this can be done using 
jsonb_path_query_array:

   select jsonb_path_query_array(targeting, '$.targets[*].*.includes.**.ids.id')
   from campaigns


If you are limited to an unsupported version, you need to go down the hierarchy 
manually:

select t.ids
from campaigns c
   cross join lateral (
 select array_agg(s2.seg2 ->> 'id') as ids
 from json_array_elements(c.targeting -> 'targets') as t(target)
   cross join json_array_elements(t.target -> 'audienceSegments') as a(aud)
   cross join json_array_elements(a.aud -> 'includes') as i(include)
   cross join json_array_elements(i.include #> '{segments,allOf}') as s(seg)
   cross join json_array_elements(s.seg -> 'ids') as s2(seg2)
   ) t




Re: psql backward compatibility

2020-11-18 Thread Ron

On 11/18/20 10:30 AM, Laurenz Albe wrote:

On Wed, 2020-11-18 at 11:05 -0500, Stephen Haddock wrote:

When upgrading an older version of postgres, version 8.4 for example, to a newer
  version such as 9.6, does the data have to be migrated immediately?

Since nobody mentioned that explicitly: do not upgrade to 9.6.
If you upgrade, move to v13.


Unless the software is only certified up to 9.6.

--
Angular momentum makes the world go 'round.




Re: psql backward compatibility

2020-11-18 Thread Ron

On 11/18/20 10:13 AM, Adrian Klaver wrote:

On 11/18/20 8:05 AM, Stephen Haddock wrote:

Hello,

When upgrading an older version of postgres, version 8.4 for example, to 
a newer version such as 9.6, does the data have to be migrated immediately?


It looks like the recommended method is to dump the data, upgrade, 
initialize a new cluster, and then restore the dumped data into the newer 
version. My question is whether the data dump and restore must be done 
immediately. It appears that 9.6 is able to run against the older cluster 
(DB service starts, queries work, etc), and the data could be migrated 
days or weeks later. I don't know if that is asking for issues down the 
line though such as 9.6 corrupting the data due to incompatibilities 
between the two versions.


https://www.postgresql.org/docs/9.6/app-pgdump.html

"Because pg_dump is used to transfer data to newer versions of PostgreSQL, 
the output of pg_dump can be expected to load into PostgreSQL server 
versions newer than pg_dump's version. pg_dump can also dump from 
PostgreSQL servers older than its own version. (Currently, servers back to 
version 7.0 are supported.) "



The above is for Postgres 9.6 version of pg_dump. Newer versions (10+) go 
back to Postgres 8.0.  You can dump the old server at any time. The 
important thing to remember is to dump the old server using the new 
server's version of pg_dump. So in your case pg_dump(9.6) against server(8.4).


This is especially useful, since the 9.6 pg_dump is able to do parallel 
operations against 8.4.


--
Angular momentum makes the world go 'round.




Re: Problem with compiling extensions with Postgres Version 13

2020-11-18 Thread Laurenz Albe
On Wed, 2020-11-18 at 14:46 +0100, Eric Svenson wrote:
> I am not really sure if I am in the right mailing list, but I try it here 
> first.
> 
> I have written a postgres C extension as DLL which was used with Postgres 9.2 
> successfully.
> 
> Now I am trying to upgrade to Postgres 13 but I ran into a compile problem.
> 
> The function
> 
> PG_GETARG_BYTEA_P 
> 
> causes the compile error
> 
> error LNK2019: unresolved external symbol _pg_detoast_datum referenced in 
> function _compressBytea
> 
> The Compiler is
> Microsoft Visual Studio 2010 
> Windows 10
> 
> Any idea what to do? Is PG_GETARG_BYTEA_P obsolete?

In PostgreSQL v13, "PG_GETARG_BYTEA_P(n)" is a macro that resolves to
"((bytea *) pg_detoast_datum((struct varlena *) 
DatumGetPointer(fcinfo->args[n].value)))".

This has not changed since 9.2, as far as I can tell.
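
For reference, the usual "version 1" function boilerplate looks roughly
like this (a minimal sketch with invented names, not your actual code):

    #include "postgres.h"
    #include "fmgr.h"

    PG_MODULE_MAGIC;

    PG_FUNCTION_INFO_V1(compress_bytea);

    Datum
    compress_bytea(PG_FUNCTION_ARGS)
    {
        /* PG_GETARG_BYTEA_P expands to a pg_detoast_datum() call */
        bytea      *arg = PG_GETARG_BYTEA_P(0);

        PG_RETURN_BYTEA_P(arg);
    }

If a sketch like that compiles and links for you, the problem is somewhere
else in your build setup.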

The underscore in front of the function name is relevant: perhaps you compile 
your function
using a different "calling convention" than was used to build PostgreSQL?

Sorry, but I am not a Windows expert.

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





Re: How to select values in a JSON type of column?

2020-11-18 Thread Snjezana Frketic
Fair point.
Appreciate your help nevertheless :)


On Wed, 18 Nov 2020 at 17:30, David G. Johnston 
wrote:

> On Wed, Nov 18, 2020 at 9:23 AM Snjezana Frketic <
> frketic.snjez...@gmail.com> wrote:
>
>> Unfortunately, I also can not update my version :)
>>
>
> Then probably the answer to your original question is no :)
>
> There are possibly other ways to make something that works but if you
> aren't willing to upgrade off of a discontinued version, onto one which has
> a perfectly usable solution, then my interest in pondering a work-around is
> near zero.
>
> David J.
>
>


Re: vacuum vs vacuum full

2020-11-18 Thread Ron

On 11/18/20 6:02 AM, Laurenz Albe wrote:

On Wed, 2020-11-18 at 10:57 +0100, Thomas Kellerer wrote:

No matter how long it takes, this is an excellent argument for
partitioning Very Large Tables: many maintenance tasks are made
*much* easier.

The problem is, you can't partition every table as long as Postgres
does not support a primary key that is independent of the partitioning key
(i.e. until it has "global indexes" as they are called in Oracle)

I personally hope that we will never have global indexes.
I am not looking forward to helping customers with the problems that
they create (long duration of ATTACH/DETACH PARTITION, index fragmentation).


I've been using what Oracle calls "global indexes" for 20 years. They're 
super useful when -- for example -- you want to partition a transaction 
table by a date field, while the PK is synthetic.


Up until about two years ago, I purged old data every six months. (Then it 
was migrated from the legacy RDBMS to Oracle.)


Yes, you've got to drop and rebuild the indices, but that's a small price to 
pay for the simplicity of archiving (especially when the indices are built 
in parallel).


--
Angular momentum makes the world go 'round.




Re: psql backward compatibility

2020-11-18 Thread David G. Johnston
On Wed, Nov 18, 2020 at 9:30 AM Laurenz Albe 
wrote:

> On Wed, 2020-11-18 at 11:05 -0500, Stephen Haddock wrote:
> > When upgrading an older version of postgres, version 8.4 for example, to
> a newer
> >  version such as 9.6, does the data have to be migrated immediately?
>
> Since nobody mentioned that explicitly: do not upgrade to 9.6.
> If you upgrade, move to v13.
>
>
Not sure I'd suggest people upgrade to v13.  If they are in a position to
do so and accept the risk involved with a first-year point release, great,
but I wouldn't make that assumption when making a blind suggestion.  v12
would be the best from an efficiency/risk perspective at this moment in
time, IMO.  That v9.6 is only being supported for one more year would be
the reason to avoid choosing it.

David J.


Re: psql backward compatibility

2020-11-18 Thread Laurenz Albe
On Wed, 2020-11-18 at 11:05 -0500, Stephen Haddock wrote:
> When upgrading an older version of postgres, version 8.4 for example, to a 
> newer
>  version such as 9.6, does the data have to be migrated immediately?

Since nobody mentioned that explicitly: do not upgrade to 9.6.
If you upgrade, move to v13.

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





Re: How to select values in a JSON type of column?

2020-11-18 Thread David G. Johnston
On Wed, Nov 18, 2020 at 9:23 AM Snjezana Frketic 
wrote:

> Unfortunately, I also can not update my version :)
>

Then probably the answer to your original question is no :)

There are possibly other ways to make something that works but if you
aren't willing to upgrade off of a discontinued version, onto one which has
a perfectly usable solution, then my interest in pondering a work-around is
near zero.

David J.


Re: psql backward compatibility

2020-11-18 Thread David G. Johnston
On Wed, Nov 18, 2020 at 9:16 AM Adrian Klaver 
wrote:

> On 11/18/20 8:05 AM, Stephen Haddock wrote:
> > Hello,
> >
> > When upgrading an older version of postgres, version 8.4 for example, to
> > a newer version such as 9.6, does the data have to be migrated
> immediately?
> >
> > It looks like the recommended method is to dump the data, upgrade,
> > initialize a new cluster, and then restore the dumped data into the
> > newer version. My question is whether the data dump and restore must be
> > done immediately. It appears that 9.6 is able to run against the older
> > cluster (DB service starts, queries work, etc), and the data could be
>
> Hmm, missed that. As David said, that should not happen, and if you are
> running a new binary against an old cluster then you will get corruption.
>
>
Actually, upon re-reading I suspect you are more likely correct.  Depending
on the package/installer, 8.4 and 9.6 are both able to run on the
server simultaneously - on different ports.  Upgrading PostgreSQL to 9.6
only installs the database programs and, usually, a default cluster (using
the next available port number) having a "postgres" database (it's not
really an upgrade if the major version changes, it's a new install).
Separately, the DBA must initiate an upgrade of clusters (or dump/reload of
individual databases) that they wish to run under the newly installed 9.6
version.

David J.


Re: psql backward compatibility

2020-11-18 Thread Stephen Haddock
Thanks for the quick responses!

I'll double-check the configuration. Given your responses it is highly
likely that the older version is still running the server and I'm simply
running the client in 9.6.

On Wed, Nov 18, 2020, 11:16 Adrian Klaver  wrote:

> On 11/18/20 8:05 AM, Stephen Haddock wrote:
> > Hello,
> >
> > When upgrading an older version of postgres, version 8.4 for example, to
> > a newer version such as 9.6, does the data have to be migrated
> immediately?
> >
> > It looks like the recommended method is to dump the data, upgrade,
> > initialize a new cluster, and then restore the dumped data into the
> > newer version. My question is whether the data dump and restore must be
> > done immediately. It appears that 9.6 is able to run against the older
> > cluster (DB service starts, queries work, etc), and the data could be
>
> Hmm, missed that. As David said, that should not happen, and if you are
> running a new binary against an old cluster then you will get corruption.
>
> > migrated days or weeks later. I don't know if that is asking for issues
> > down the line though such as 9.6 corrupting the data due to
> > incompatibilities between the two versions.
> >
> > Thanks!
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: How to select values in a JSON type of column?

2020-11-18 Thread Snjezana Frketic
Unfortunately, I also can not update my version :)

On Wed, 18 Nov 2020 at 17:00, Snjezana Frketic 
wrote:

> I actually have version 9.3.17 
>
>
> On Wed, 18 Nov 2020 at 15:55, Thomas Kellerer  wrote:
>
>> Snjezana Frketic schrieb am 18.11.2020 um 11:29:
>> > I have a column called targeting in a table called campaigns.
>> > [...]
>> > and I need to select all the ids in includes.
>> > Currently, I am doing it like this
>> >
>> > SELECT targeting#>'{targets,0,audienceSegments,0,includes,0,segments,allOf,0,ids}' FROM campaigns;
>> >
>>
>> If you are on Postgres 12 or later, this can be done using
>> jsonb_path_query_array:
>>
>>   select jsonb_path_query_array(targeting, '$.targets[*].*.includes.**.ids.id')
>>   from campaigns
>>
>>
>> Online example:
>> https://dbfiddle.uk/?rdbms=postgres_12&fiddle=ee7f6e73055ffb3a98fcfd3d86763e35
>>
>> Thomas
>>
>>
>>


Re: psql backward compatibility

2020-11-18 Thread Adrian Klaver

On 11/18/20 8:05 AM, Stephen Haddock wrote:

Hello,

When upgrading an older version of postgres, version 8.4 for example, to 
a newer version such as 9.6, does the data have to be migrated immediately?


It looks like the recommended method is to dump the data, upgrade, 
initialize a new cluster, and then restore the dumped data into the 
newer version. My question is whether the data dump and restore must be 
done immediately. It appears that 9.6 is able to run against the older 
cluster (DB service starts, queries work, etc), and the data could be 


Hmm, missed that. As David said, that should not happen, and if you are 
running a new binary against an old cluster then you will get corruption.


migrated days or weeks later. I don't know if that is asking for issues 
down the line though such as 9.6 corrupting the data due to 
incompatibilities between the two versions.


Thanks!



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: psql backward compatibility

2020-11-18 Thread Adrian Klaver

On 11/18/20 8:05 AM, Stephen Haddock wrote:

Hello,

When upgrading an older version of postgres, version 8.4 for example, to 
a newer version such as 9.6, does the data have to be migrated immediately?


It looks like the recommended method is to dump the data, upgrade, 
initialize a new cluster, and then restore the dumped data into the 
newer version. My question is whether the data dump and restore must be 
done immediately. It appears that 9.6 is able to run against the older 
cluster (DB service starts, queries work, etc), and the data could be 
migrated days or weeks later. I don't know if that is asking for issues 
down the line though such as 9.6 corrupting the data due to 
incompatibilities between the two versions.


https://www.postgresql.org/docs/9.6/app-pgdump.html

"Because pg_dump is used to transfer data to newer versions of 
PostgreSQL, the output of pg_dump can be expected to load into 
PostgreSQL server versions newer than pg_dump's version. pg_dump can 
also dump from PostgreSQL servers older than its own version. 
(Currently, servers back to version 7.0 are supported.) "



The above is for Postgres 9.6 version of pg_dump. Newer versions (10+) go 
back to Postgres 8.0.  You can dump the old server at any time. The 
important thing to remember is to dump the old server using the new 
server's version of pg_dump. So in your case pg_dump(9.6) against 
server(8.4).
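
For example (host name, paths and database name are just placeholders):

    /usr/pgsql-9.6/bin/pg_dump -h old_host -p 5432 -Fc -f mydb.dump mydb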




Thanks!



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: psql backward compatibility

2020-11-18 Thread David G. Johnston
On Wed, Nov 18, 2020 at 9:05 AM Stephen Haddock 
wrote:

> It appears that 9.6 is able to run against the older cluster (DB service
> starts, queries work, etc)
>

If this is indeed what you've observed you've found a bug because a 9.6
service should not start at all if the data directory it is being pointed
to is from a different major version.

David J.


Re: psql backward compatibility

2020-11-18 Thread Christophe Pettus



> On Nov 18, 2020, at 08:05, Stephen Haddock  wrote:
> When upgrading an older version of postgres, version 8.4 for example, to a 
> newer version such as 9.6, does the data have to be migrated immediately?

Yes.  You cannot run binaries from a newer major version of PostgreSQL on a 
cluster that was initialized with an older major version.  You'll need to do a 
pg_dump/pg_restore, or use pg_upgrade to create a new cluster.

--
-- Christophe Pettus
   x...@thebuild.com





Re: create type with %type or %rowtype

2020-11-18 Thread David G. Johnston
On Wed, Nov 18, 2020 at 12:34 AM Post Gresql  wrote:

> or even a complete table row as return type.
>
As mentioned, this is already possible.

> create type my_type (a int, b my_table.my_column%type);
>
> The real reason: you will be sure you are using the same type everywhere.
> And it is easier to change the type later on: then only one column has to be
> changed, not many in a lot of different places.
>

The documentation for pl/pgsql says:

"By using %TYPE you don't need to know the data type of the structure you
are referencing, and most importantly, if the data type of the referenced
item changes in the future (for instance: you change the type of user_id
from integer to real), you might not need to change your function
definition."

In short, you cannot gain that benefit (avoid recompilation) at the SQL
level.  I believe your "change type" use case is thus rendered basically
undoable.  And I don't see the effort to be worth the benefit for "create
type" alone.

Just write: create type my_type as (a int, b int); -- b's type matches
my_table.my_column
And/or: comment on column my_type.b is '@depends on my_table.my_column%type'

David J.


Re: create type with %type or %rowtype

2020-11-18 Thread Adrian Klaver

On 11/17/20 11:34 PM, Post Gresql wrote:


On 2020-11-18 04:37, David G. Johnston wrote:

(resending to include the list)

On Tue, Nov 17, 2020 at 3:12 PM Post Gresql  wrote:


create type my_type as my_table%rowtype;


This would be redundant with existing behavior - all tables have a 
corresponding type already


create type my_type as my_table.my_column%type;


What does the indirection get us?

Correct? It seems to be a feature for plpgsql programming only, right?


Correct


But wouldn't that be a good thing to be able to do?


You are the one proposing it - why would it be a good thing to do?


My idea, which I did not explain properly, sorry for that, is that when I 
write plpgsql functions I sometimes need to have a certain column type as 
return value, or even a complete table row as return type.


\d cell_per
            Foreign table "public.cell_per"
  Column  |       Type        | Collation | Nullable | Default | FDW options
----------+-------------------+-----------+----------+---------+-------------
 category | character varying |           |          |         |
 cell_per | integer           |           |          |         |
Server: test_db

CREATE OR REPLACE FUNCTION public.type_test()
 RETURNS cell_per
 LANGUAGE plpgsql
AS $function$
DECLARE
cp_type cell_per;
BEGIN
SELECT INTO cp_type * from cell_per limit 1;
RETURN cp_type;
END;
$function$

select * from type_test();
  category  | cell_per
------------+----------
 H PREM 3.5 |       18

You can change the RETURNS to RETURNS SETOF and return multiple rows.
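
For example (an untested sketch along the same lines):

    CREATE OR REPLACE FUNCTION public.type_test_all()
     RETURNS SETOF cell_per
     LANGUAGE plpgsql
    AS $function$
    BEGIN
        RETURN QUERY SELECT * FROM cell_per;
    END;
    $function$;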


See also:

Polymorphic types explanation at bottom of this section:

https://www.postgresql.org/docs/12/plpgsql-declarations.html#PLPGSQL-DECLARATION-PARAMETERS

Using %TYPE with polymorphic types:
https://www.postgresql.org/docs/12/plpgsql-declarations.html#PLPGSQL-DECLARATION-TYPE



Then it would be great if I could just refer to the column or row type 
when declaring the return type.


It would also be handy if I could reference types when declaring other 
types,


for example

create type my_type (a int, b my_table.my_column%type);


The real reason: you will be sure you are using the same type 
everywhere.  And it is easier to change the type later on: then only one 
column has to be changed, not many in a lot of different places.


I hope that explains my idea.



David J.


On Tue, Nov 17, 2020 at 3:12 PM Post Gresql  wrote:


Hello.

It seems that I can not create a type with

create type my_type as my_table%rowtype;

or

create type my_type as my_table.my_column%type;

Correct? It seems to be a feature for plpgsql programming only, right?

But wouldn't that be a good thing to be able to do? Or would it cause
too many problems?


Best regards







--
Adrian Klaver
adrian.kla...@aklaver.com




psql backward compatibility

2020-11-18 Thread Stephen Haddock
Hello,

When upgrading an older version of postgres, version 8.4 for example, to a
newer version such as 9.6, does the data have to be migrated immediately?

It looks like the recommended method is to dump the data, upgrade,
initialize a new cluster, and then restore the dumped data into the newer
version. My question is whether the data dump and restore must be done
immediately. It appears that 9.6 is able to run against the older cluster
(DB service starts, queries work, etc), and the data could be migrated days
or weeks later. I don't know if that is asking for issues down the line
though such as 9.6 corrupting the data due to incompatibilities between the
two versions.

Thanks!


Re: How to select values in a JSON type of column?

2020-11-18 Thread Snjezana Frketic
I actually have version 9.3.17 


On Wed, 18 Nov 2020 at 15:55, Thomas Kellerer  wrote:

> Snjezana Frketic schrieb am 18.11.2020 um 11:29:
> > I have a column called targeting in a table called campaigns.
> > [...]
> > and I need to select all the ids in includes.
> > Currently, I am doing it like this
> >
> > SELECT targeting#>'{targets,0,audienceSegments,0,includes,0,segments,allOf,0,ids}' FROM campaigns;
> >
>
> If you are on Postgres 12 or later, this can be done using
> jsonb_path_query_array:
>
>   select jsonb_path_query_array(targeting, '$.targets[*].*.includes.**.ids.id')
>   from campaigns
>
>
> Online example:
> https://dbfiddle.uk/?rdbms=postgres_12&fiddle=ee7f6e73055ffb3a98fcfd3d86763e35
>
> Thomas
>
>
>


Re: vacuum vs vacuum full

2020-11-18 Thread David G. Johnston
On Wed, Nov 18, 2020 at 1:33 AM Atul Kumar  wrote:

> Hi,
>
> We have a table of 3113GB, and we are planning to vacuum it in non
>

Just making sure that isn't a typo (repeated 1s)...

> business hours i.e. 12AM to 4AM, so my queries are:
>
> 1. What should be performed on the table, Vacuum or Vacuum full ?
>

You should be vacuuming that table constantly, so why is the particular
vacuum special?  What are the "last vacuum" related statistics for this
table?
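
e.g., something like this (table name is a placeholder):

    SELECT relname, last_vacuum, last_autovacuum, n_live_tup, n_dead_tup
    FROM pg_stat_user_tables
    WHERE relname = 'your_table';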

There is "vacuum" and there is "rebuilding the whole table from scratch",
the later of which is unfortunately named "vacuum full".

If you haven't started learning/thinking about it yet you should try and
get an understanding around where your system is in the process of
requiring an anti-wraparound vacuum.  Or, more generally, using "vacuum
freeze".

David J.


Re: vacuum vs vacuum full

2020-11-18 Thread Paul Förster
Hi Ravi,

> On 18. Nov, 2020, at 15:30, Ravi Krishna  wrote:
> 
> ALTER TABLE TABLE_NAME DROP PARTITION PARTITION_NAME UPDATE INDEXES;

IIRC the statement is

alter table  drop partition  update *GLOBAL* indexes;

But we experienced big problems in the past, which is why we changed everything 
to local indexes. The situation may have improved in the last few years, but we 
will not change back again. :-) Why should we?

Cheers,
Paul



Re: How to select values in a JSON type of column?

2020-11-18 Thread Thomas Kellerer
Snjezana Frketic schrieb am 18.11.2020 um 11:29:
> I have a column called targeting in a table called campaigns.
> [...]
> and I need to select all the ids in includes.
> Currently, I am doing it like this 
>
> SELECT targeting#>'{targets,0,audienceSegments,0,includes,0,segments,allOf,0,ids}' FROM campaigns;
>

If you are on Postgres 12 or later, this can be done using 
jsonb_path_query_array:

  select jsonb_path_query_array(targeting, '$.targets[*].*.includes.**.ids.id')
  from campaigns


Online example: 
https://dbfiddle.uk/?rdbms=postgres_12&fiddle=ee7f6e73055ffb3a98fcfd3d86763e35

Thomas




Re: How to select values in a JSON type of column?

2020-11-18 Thread Snjezana Frketic
I looked at it yesterday, but I couldn't figure it out because my JSON is
more nested and I got lost going down the path.


On Wed, 18 Nov 2020 at 15:40, David G. Johnston 
wrote:

>
> On Wednesday, November 18, 2020, Snjezana Frketic <
> frketic.snjez...@gmail.com> wrote:
>
>>
>>
>> SELECT
>> targeting#>'{targets,0,audienceSegments,0,includes,0,segments,allOf,0,ids}'FROM
>> campaigns;
>>
>> and that works, but, I don’t want to have a fixed path because positions
>> could change like 0 could become 1, includes and excludes could change
>> positions, allOf  could be anyOf etc.
>> Any idea of how to always select ids in includes no matter the changes?
>>
>
>   Maybe it can be done using json path:
>
>
> https://www.postgresql.org/docs/13/functions-json.html#FUNCTIONS-SQLJSON-PATH
>
> David J.
>
>


Re: How to select values in a JSON type of column?

2020-11-18 Thread David G. Johnston
On Wednesday, November 18, 2020, Snjezana Frketic <
frketic.snjez...@gmail.com> wrote:

>
>
> SELECT targeting#>'{targets,0,audienceSegments,0,includes,0,
> segments,allOf,0,ids}'FROM campaigns;
>
> and that works, but, I don’t want to have a fixed path because positions
> could change like 0 could become 1, includes and excludes could change
> positions, allOf  could be anyOf etc.
> Any idea of how to always select ids in includes no matter the changes?
>

  Maybe it can be done using json path:


https://www.postgresql.org/docs/13/functions-json.html#FUNCTIONS-SQLJSON-PATH

David J.


Problem with compiling extensions with Postgres Version 13

2020-11-18 Thread Eric Svenson
Hello,

I am not really sure if this is the right mailing list, but I will try
here first.

I have written a Postgres C extension as a DLL, which was used
successfully with Postgres 9.2.

Now I am trying to upgrade to Postgres 13 but I ran into a compile problem.

The macro

PG_GETARG_BYTEA_P

causes the link error

error LNK2019: unresolved external symbol _pg_detoast_datum referenced in
function _compressBytea

The Compiler is
Microsoft Visual Studio 2010
Windows 10

Any idea what to do? Is PG_GETARG_BYTEA_P obsolete?
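
For reference, a minimal sketch of the kind of call site involved; the
function name comes from the error above, the body is purely
illustrative, and the linking remark is an assumption about the usual
MSVC setup:

    #include "postgres.h"
    #include "fmgr.h"

    PG_MODULE_MAGIC;

    PG_FUNCTION_INFO_V1(compressBytea);

    Datum
    compressBytea(PG_FUNCTION_ARGS)
    {
        /*
         * PG_GETARG_BYTEA_P expands to a call to pg_detoast_datum(),
         * which lives in the server binary, so the link step needs the
         * postgres.lib import library shipped with the PG 13 install.
         */
        bytea *arg = PG_GETARG_BYTEA_P(0);

        PG_RETURN_BYTEA_P(arg);
    }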

Best regards,
Eric


Re: vacuum vs vacuum full

2020-11-18 Thread Paul Förster
Hi Laurenz,

> On 18. Nov, 2020, at 13:02, Laurenz Albe  wrote:
> 
> I personally hope that we will never have global indexes.
> I am not looking forward to helping customers with the problems that
> they create (long duration of ATTACH/DETACH PARTITION, index fragmentation).

+1.

Experience shows that global indexes in Oracle lead to problems when dropping
a partition, rebuilding an index, or doing other such administrative work,
often leading to unnecessarily long downtimes.

Cheers,
Paul



Re: Race condition with restore_command on streaming replica

2020-11-18 Thread Leo Jin
Hi Brad,

We did some research on this at EDB and here is a summary of the finding
and suggested next steps.

Context:

Given a 3-node cluster M0, M1 and M2.
And M0 is the primary with M1 and M2 as the replicas - we will call this
"T0" (Timeline 0).
And a "switch over" command is issued to terminate M0 and promote M1 as
the new primary.

Expected Behavior:

When M0 is terminated and M1 is promoted to primary,
Then a ".history" file would be written to a shared archive repo (S3) to
indicate T0 has ended and T1 is created with M1 as the new primary.
And "restore_command" is executed on the replica M2 to replay any
remaining WAL files from T0 as well as switch to streaming from M1 as the
new primary henceforth.


IBM's Problem:

Given the "switch over" process is run asynchronously.
When "restore_command" is run on M2 before M1 creates a .history file,
Then M2 will understand this as: no new timeline was created, so I'll
wait for the next WAL file from T0.
But the next WAL file from T0 will NEVER come, because T0 has been
terminated, and M2 hangs there looking for something that will never
exist.


IBM's Question:  Why does "restore_command" check for the .history file
only once and move on, instead of polling for the .history file for a
specified period, so that it "waits" for it to show up and can execute
everything in the correct order?

EDB's current diagnosis:  The "restore_command" on the M2 replica is
behaving as expected. Based on a test case we've set up, we can see that
"restore_command" checks for both the .history file and the next WAL file
from the current timeline. Based on this, we suspect the issue lies with
IBM's current configuration settings, and we recommend that IBM try to
recreate the issue manually without Patroni, to ensure their current
assumptions are indeed correct.
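
For reference, a hedged sketch of the recovery settings under discussion
on M2 (the archive path is taken from the logs below; everything else is
an assumption, including the connection string):

    # postgresql.conf on the replica M2 (PostgreSQL 12 style)
    restore_command = 'cp /var/lib/pgsql/12/backups/archives/%f "%p"'
    recovery_target_timeline = 'latest'    # follow the promoted timeline
    primary_conninfo = 'host=virtual-layer port=5432 user=replicator'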

Here's the test case we ran, FYI:

1) Setup a 3 node cluster with m0 (primary), m1 (replica) and m2(replica)
2) after m-0 is stopped and m-1 is not yet promoted, then m-2 logs contain
the following as expected:





FATAL:  could not connect to the primary server: could not connect to
server: No such file or directory
        Is the server running locally and accepting connections on Unix
        domain socket "/var/run/postgresql/.s.PGSQL.5432"?
cp: cannot stat ‘/var/lib/pgsql/12/backups/archives/0002.history’: No such file or directory
cp: cannot stat ‘/var/lib/pgsql/12/backups/archives/00010001004E’: No such file or directory

This confirms that when streaming replication is failing, it tries to look
for the 00010001004E WAL segment or the history file (for the new
timeline).

2) after m-1 gets promoted, we see the following files generated in the
archives repo:


00010001004E.partial   # WAL, timeline 1
0002.history           # history file for timeline 2

3) as soon as those files come up in the repo, m-2 sees them and replays
them:






LOG:  restored log file "0002.history" from archive
    # as you can see, it reads the history file as soon as it appears
cp: cannot stat ‘/var/lib/pgsql/12/backups/archives/0003.history’: No such file or directory
LOG:  restored log file "0002.history" from archive
LOG:  new target timeline is 2
    # switches to timeline 2 after reading the .history file
cp: cannot stat ‘/var/lib/pgsql/12/backups/archives/00020001004E’: No such file or directory
    # this happens because primary_conninfo needs to be changed to point
    # to m1, and that requires a db restart to get picked up
cp: cannot stat ‘/var/lib/pgsql/12/backups/archives/00010001004E’: No such file or directory

On Wed, Nov 18, 2020 at 8:39 AM Brad Nicholson  wrote:

> Dilip Kumar  wrote on 2020/11/15 04:47:12 AM:
> > I am not sure how Patroni does it internally,  can you explain the
> > scenario in more detail?  Suppose you are executing the promote on m-1
> > and if the promotion is successful it will switch the timeline and it
> > will create the timeline history file.  Now, once the promotion is
> > successful if we change the primary_conninfo on the m-2 then it will
> > restart the walsender and look for the latest .history file which it
> > should find either from direct streaming or through the
> > restore_command.  If you are saying that m-2 tried to look for the
> > history file before m-1 created it then it seems like you change the
> > primary_conninfo on m-2 before the m-1 promotion got completed.
>
>
> Patroni first issues a fast shutdown to m-0.  Once that completes, it
> issues a promote on m-1.
>
> In this case, m-2 is not connecting directly to to m-0 or m-1 via a
> hostname, it is connecting to a virtual layer that routes the connectivity
> to the current primary.  This does not change.
>
> Brad.
>
>
>


Re: autovacuum recommendations for Large tables

2020-11-18 Thread Laurenz Albe
On Tue, 2020-11-17 at 22:17 +0100, Olivier Gautherot wrote:
> On Tue, Nov 17, 2020 at 12:05 AM David G. Johnston 
>  wrote:
> > On Mon, Nov 16, 2020 at 3:57 PM Atul Kumar  wrote:
> > > I only have this one big table in the database of size 3113 GB with rows 
> > > 7661353111.
> > > 
> > > Right Now the autovacuum setting for that table is set to
> > > {autovacuum_enabled=true,autovacuum_vacuum_scale_factor=0.2,autovacuum_analyze_scale_factor=0.2}
> > 
> > auto-vacuum doesn't care directly about absolute size, it cares about 
> > change (relative to absolute size in many cases, hence the scale factors).
> > 
> > David J.
> > 
> 
> David is correct.
> 
> If it helps, I put together a few thoughts and own experience on a blog:
> https://sites.google.com/gautherot.net/postgresql/vacuum 
> 
> Hope you find it useful.

Then I can chime in with 
https://www.cybertec-postgresql.com/en/tuning-autovacuum-postgresql/
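
As a hedged sketch (the numbers are placeholders, not a recommendation
from this thread), the usual per-table override for a table of this size
trades the 20% scale factor for a fixed row threshold:

    ALTER TABLE big_table SET (
        autovacuum_vacuum_scale_factor  = 0.0,
        autovacuum_vacuum_threshold     = 1000000,
        autovacuum_analyze_scale_factor = 0.0,
        autovacuum_analyze_threshold    = 1000000
    );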

Yours,
Laurenz Albe





Re: vacuum vs vacuum full

2020-11-18 Thread Laurenz Albe
On Wed, 2020-11-18 at 10:57 +0100, Thomas Kellerer wrote:
> > No matter how long it takes, this is an excellent argument for
> > partitioning Very Large Tables: many maintenance tasks are made
> > *much* easier.
> 
> The problem is, you can't partition every table as long as Postgres
> does not support a primary key that is independent of the partitioning key
> (i.e. until it has "global indexes" as they are called in Oracle)

I personally hope that we will never have global indexes.
I am not looking forward to helping customers with the problems that
they create (long duration of ATTACH/DETACH PARTITION, index fragmentation).

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





Re: pg_upgrade from 12 to 13 failes with plpython2

2020-11-18 Thread Laurenz Albe
On Wed, 2020-11-18 at 11:05 +0100, Marcin Giedz wrote:
> right, I had one function relying on plpython2u so I changed it... but then
> again pg_upgrade claims an error with python:
> 
> cat loadable_libraries.txt 
> could not load library "$libdir/plpython2": ERROR:  could not access file 
> "$libdir/plpython2": No such file or directory
> In database: argosrm
> In database: template1

The problematic function is perhaps in another database.
Look everywhere.

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





No parallel plan on an union all subquery

2020-11-18 Thread Phil Florent
Hi,

I have a question about parallel plans. Here is my test case :

select version();
 version
--
 PostgreSQL 13.1 (Ubuntu 13.1-1.pgdg20.04+1) on x86_64-pc-linux-gnu, compiled 
by gcc (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0, 64-bit

create unlogged table drop_me as select generate_series(1,7e7) n1;
SELECT 7000

explain
select count(*)
from (select
 n1
  from drop_me
) s;

  QUERY PLAN
--
 Finalize Aggregate  (cost=675319.13..675319.14 rows=1 width=8)
   ->  Gather  (cost=675318.92..675319.13 rows=2 width=8)
 Workers Planned: 2
 ->  Partial Aggregate  (cost=674318.92..674318.93 rows=1 width=8)
   ->  Parallel Seq Scan on drop_me  (cost=0.00..601402.13 
rows=29166713 width=0)
 JIT:
   Functions: 4
   Options: Inlining true, Optimization true, Expressions true, Deforming true

Parallel plan, OK, 1s

explain
select count(*)
from (select
 n1
  from drop_me
  union all
  select
 n1
  from drop_me) ua;

  QUERY PLAN
--
 Finalize Aggregate  (cost=1640315.00..1640315.01 rows=1 width=8)
   ->  Gather  (cost=1640314.96..1640314.99 rows=2 width=8)
 Workers Planned: 2
 ->  Partial Aggregate  (cost=1640304.96..1640304.97 rows=1 width=8)
   ->  Parallel Append  (cost=0.00..1494471.40 rows=58333426 
width=0)
 ->  Parallel Seq Scan on drop_me  (cost=0.00..601402.13 
rows=29166713 width=0)
 ->  Parallel Seq Scan on drop_me drop_me_1  
(cost=0.00..601402.13 rows=29166713 width=0)
 JIT:
   Functions: 6
   Options: Inlining true, Optimization true, Expressions true, Deforming true


Parallel plan, 2.2 s

explain
select count(*)
from (select
 n1
  from drop_me
  union all
  values(1)) ua;

   QUERY PLAN

 Aggregate  (cost=2934739.24..2934739.25 rows=1 width=8)
   ->  Append  (cost=0.00..2059737.83 rows=7113 width=32)
 ->  Seq Scan on drop_me  (cost=0.00..1009736.12 rows=7112 width=6)
 ->  Subquery Scan on "*SELECT* 2"  (cost=0.00..0.02 rows=1 width=32)
   ->  Result  (cost=0.00..0.01 rows=1 width=4)
 JIT:
   Functions: 4
   Options: Inlining true, Optimization true, Expressions true, Deforming true

No parallel plan, 2.6 s

I read the documentation, but I don't get the reason for the non-parallel seq
scan of drop_me in the last case?

Best regards,

Phil





How to select values in a JSON type of column?

2020-11-18 Thread Snjezana Frketic
Hi!

I have a column called targeting in a table called campaigns .
The column looks like

{
   "targets": [
 {
   "audienceSegments": [
 {
   "includes": [
 {
   "consumer": "selection",
   "segments": {
 "allOf": [
   {
 "provider": "a",
 "ids": [
   {
 "id": "110418"
   },
   {
 "id": "110430"
   },
   {
 "id": "110433"
   }
 ]
   }
 ]
   }
 }
   ],
   "excludes": [
 {
   "consumer": "selection",
   "segments": {
 "allOf": [
   {
 "provider": "a",
 "ids": [
   {
 "id": "109776"
   }
 ]
   }
 ]
   }
 }
   ]
 }
   ]
 }
   ]
 }

and I need to select all the ids in includes.
Currently, I am doing it like this

SELECT
targeting#>'{targets,0,audienceSegments,0,includes,0,segments,allOf,0,ids}'FROM
campaigns;

and that works, but, I don’t want to have a fixed path because positions
could change like 0 could become 1, includes and excludes could change
positions, allOf  could be anyOf etc.
Any idea of how to always select ids in includes no matter the changes?

Thank you!
Anna


Re: pg_upgrade from 12 to 13 failes with plpython2

2020-11-18 Thread Marcin Giedz
right, I had one function relying on plpython2u so I changed it... but then
again pg_upgrade claims an error with python:

cat loadable_libraries.txt 
could not load library "$libdir/plpython2": ERROR: could not access file 
"$libdir/plpython2": No such file or directory 
In database: argosrm 
In database: template1 




marcin@carbon2:~$ psql -U pgsql -h 192.168.89.64 postgres 
psql (12.4 (Ubuntu 12.4-1), server 12.5) 
Type "help" for help. 

postgres=# select * from pg_language ; 
  oid  | lanname  | lanowner | lanispl | lanpltrusted | lanplcallfoid | laninline | lanvalidator | lanacl
-------+----------+----------+---------+--------------+---------------+-----------+--------------+--------
    12 | internal |       10 | f       | f            |             0 |         0 |         2246 |
    13 | c        |       10 | f       | f            |             0 |         0 |         2247 |
    14 | sql      |       10 | f       | t            |             0 |         0 |         2248 |
 14177 | plpgsql  |       10 | t       | t            |         14174 |     14175 |        14176 |
(4 rows)

postgres=# \c argosrm 
psql (12.4 (Ubuntu 12.4-1), server 12.5) 
You are now connected to database "argosrm" as user "pgsql". 
argosrm=# select * from pg_language ; 
   oid   |  lanname   | lanowner | lanispl | lanpltrusted | lanplcallfoid | laninline | lanvalidator | lanacl
---------+------------+----------+---------+--------------+---------------+-----------+--------------+--------
      12 | internal   |       10 | f       | f            |             0 |         0 |         2246 |
      13 | c          |       10 | f       | f            |             0 |         0 |         2247 |
      14 | sql        |       10 | f       | t            |             0 |         0 |         2248 |
   14177 | plpgsql    |       10 | t       | t            |         14174 |     14175 |        14176 |
 1824389 | plpython3u |       10 | t       | f            |       1824386 |   1824387 |      1824388 |
(5 rows)

argosrm=# \c template1 
psql (12.4 (Ubuntu 12.4-1), server 12.5) 
You are now connected to database "template1" as user "pgsql". 
template1=# select * from pg_language ; 
  oid  | lanname  | lanowner | lanispl | lanpltrusted | lanplcallfoid | laninline | lanvalidator | lanacl
-------+----------+----------+---------+--------------+---------------+-----------+--------------+--------
    12 | internal |       10 | f       | f            |             0 |         0 |         2246 |
    13 | c        |       10 | f       | f            |             0 |         0 |         2247 |
    14 | sql      |       10 | f       | t            |             0 |         0 |         2248 |
 14177 | plpgsql  |       10 | t       | t            |         14174 |     14175 |        14176 |
(4 rows)

template1=# drop language plpython2u; 
ERROR: language "plpython2u" does not exist 
template1=# drop language plpython; 
ERROR: language "plpython" does not exist 
template1=# drop language plpythonu; 
ERROR: language "plpythonu" does not exist 

template1=# \c postgres 
psql (12.4 (Ubuntu 12.4-1), server 12.5) 
You are now connected to database "postgres" as user "pgsql". 
postgres=# drop language plpythonu; 
ERROR: language "plpythonu" does not exist 
postgres=# drop language plpython; 
ERROR: language "plpython" does not exist 
postgres=# drop language plpython2u; 
ERROR: language "plpython2u" does not exist 


postgres=# \c argosrm 
psql (12.4 (Ubuntu 12.4-1), server 12.5) 
You are now connected to database "argosrm" as user "pgsql". 
argosrm=# drop language plpython2u; 
ERROR: language "plpython2u" does not exist 
argosrm=# drop language plpython; 
ERROR: language "plpython" does not exist 
argosrm=# drop language plpythonu; 
ERROR: language "plpythonu" does not exist 
argosrm=# 


argosrm=# select * from pg_pltemplate ; 
  tmplname  | tmpltrusted | tmpldbacreate |      tmplhandler       |        tmplinline        |    tmplvalidator    |    tmpllibrary    | tmplacl
------------+-------------+---------------+------------------------+--------------------------+---------------------+-------------------+---------
 plpgsql    | t           | t             | plpgsql_call_handler   | plpgsql_inline_handler   | plpgsql_validator   | $libdir/plpgsql   |
 pltcl      | t           | t             | pltcl_call_handler     |                          |                     | $libdir/pltcl     |
 pltclu     | f           | f             | pltclu_call_handler    |                          |                     | $libdir/pltcl     |
 plperl     | t           | t             | plperl_call_handler    | plperl_inline_handler    | plperl_validator    | $libdir/plperl    |
 plperlu    | f           | f             | plperlu_call_handler   | plperlu_inline_handler   | plperlu_validator   | $libdir/plperl    |
 plpython3u | f           | f             | plpython3_call_handler | plpython3_inline_handler | plpython3_validator | $libdir/plpython3 |
(6 rows)

argosrm=# \c postgres 
psql (12.4 (Ubuntu 12.4-1), server 12.5) 
You are now connected to database "postgres" as user "pgsql". 
postgres=# select * from pg_pltemplate ; 
  tmplname  | tmpltrusted | tmpldbacreate |      tmplhandler       |        tmplinline        |    tmplvalidator    |    tmpllibrary    | tmplacl
------------+-------------+---------------+------------------------+--------------------------+---------------------+-------------------+---------
 plpgsql    | t           | t             | plpgsql_call_handler   | plpgsql_inline_handler   | plpgsql_validator   | $libdir/plpgsql   |
 pltcl      | t           | t             | pltcl_call_handler     |                          |                     | $libdir/pltcl     |
 pltclu     | f           | f             | pltclu_call_handler    |                          |                     | $libdir/pltcl     |
 plperl     | t           | t             | plperl_call_handler    | plperl_inline_handler    | plperl_validator    | $libdir/plperl    |
 plperlu    | f           | f             | plperlu_call_handler   | plperlu_inline_handler   | plperlu_validator   | $libdir/plperl    |
 plpython3u | f           | f             | plpython3_call_handler | plpython3_inline_handler | plpython3_validator | $libdir/plpython3 |
(6 rows)

postgres=# \c template1 
psql (12.4 (Ubuntu 

Re: vacuum vs vacuum full

2020-11-18 Thread Olivier Gautherot
On Wed, Nov 18, 2020 at 10:45 AM Ron  wrote:

> On 11/18/20 3:41 AM, Olivier Gautherot wrote:
>
> Hi Atul,
>
> On Wed, Nov 18, 2020 at 9:33 AM Atul Kumar  wrote:
>
>> Hi,
>>
>> We have a table of 3113GB, and we are planning to vacuum it in non
>> business hours i.e. 12AM to 4AM, So my queries are:
>>
> [snip]
>
>
>
> 3. Will the operation be completed in the given time frame? how to
>> check the same.
>>
>
> Given the size of the table, it will probably take several days.
>
>
> No matter how long it takes, this is an excellent argument for
> partitioning Very Large Tables: many maintenance tasks are made *much*
> easier.
>

I can only agree with this comment. The main issue I see is the available
disk space, as the partitioning process will include copying the whole
table.
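
For illustration, a minimal sketch of that copy-based migration, with
made-up table and column names (created_at standing in for whatever the
partition key would be):

    CREATE TABLE big_table_new (LIKE big_table INCLUDING DEFAULTS)
        PARTITION BY RANGE (created_at);
    CREATE TABLE big_table_2020 PARTITION OF big_table_new
        FOR VALUES FROM ('2020-01-01') TO ('2021-01-01');
    INSERT INTO big_table_new SELECT * FROM big_table;  -- the full copy
    BEGIN;
    ALTER TABLE big_table RENAME TO big_table_old;
    ALTER TABLE big_table_new RENAME TO big_table;
    COMMIT;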




Re: vacuum vs vacuum full

2020-11-18 Thread Thomas Kellerer
Ron schrieb am 18.11.2020 um 10:44:
> No matter how long it takes, this is an excellent argument for
> partitioning Very Large Tables: many maintenance tasks are made
> *much* easier.

The problem is, you can't partition every table as long as Postgres
does not support a primary key that is independent of the partitioning key
(i.e. until it has "global indexes" as they are called in Oracle)

Thomas




Re: Postgresql13-devel fails to install on centos 7

2020-11-18 Thread Hemil Ruparel
Thanks a lot. I really hope it helps newbies like myself save a bunch of
time

On Wed 18 Nov, 2020, 2:50 PM Devrim Gündüz,  wrote:

>
> Hi,
>
> On Wed, 2020-11-18 at 13:45 +0530, Hemil Ruparel wrote:
> > Had to run this command:
> >
> >  yum install
> > https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
> >  yum install llvm5.0-devel
> >
> > And then it worked :)
>
> Updated website to clarify that EPEL is needed for -devel RPMs as well:
>
> https://yum.postgresql.org/news/devel-rpms-require-a-new-repository/
>
> Regards,
> --
> Devrim Gündüz
> Open Source Solution Architect, Red Hat Certified Engineer
> Twitter: @DevrimGunduz , @DevrimGunduzTR
>


Re: vacuum vs vacuum full

2020-11-18 Thread Olivier Gautherot
Hi Atul,

On Wed, Nov 18, 2020 at 9:33 AM Atul Kumar  wrote:

> Hi,
>
> We have a table of 3113GB, and we are planning to vacuum it in non
> business hours i.e. 12AM to 4AM, So my queries are:
>
> 1. What should be perform on the table Vacuum or Vacuum full ?
>

Vacuum full will do a complete rewrite of the table so you need to make
sure that you have the necessary space. I would recommend a simple
VACUUM, although it won't return the extra space to the OS.
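
As a hedged sketch (the table name is a placeholder):

    -- plain vacuum: marks dead space reusable, runs alongside normal
    -- reads and writes
    VACUUM (VERBOSE, ANALYZE) big_table;

    -- full rewrite: returns space to the OS, but holds an exclusive
    -- lock on the table for the whole run
    VACUUM FULL big_table;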


> 2. Do we need to perform Analyze also?
>

It would be a good thing.


> 3. Will the operation be completed in the given time frame? how to
> check the same.
>

Given the size of the table, it will probably take several days.


> 4. Who acquire lock on table vacuum or vacuum full.
>

VACUUM FULL acquires an exclusive lock on the table for its whole
duration. Plain VACUUM only takes a lock that does not block reads or
writes.


> 5. If the activity goes beyond the time frame, do we have any option to
> continue doing it without acquiring a lock on the table?
>

VACUUM runs concurrently with normal activity. It does not block reads or writes.


>
> If you also need the structure of the table, Please let me know.
>

It would be interesting to know the number of rows updated per hour or per
day to have an estimation of the needs.


> Please help me by responding my query wise.
>
> Regards,
> Atul
>

Cheers
Olivier





Re: vacuum vs vacuum full

2020-11-18 Thread Ron

On 11/18/20 2:33 AM, Atul Kumar wrote:

Hi,

We have a table of 3113GB, and we are planning to vacuum it in non
business hours i.e. 12AM to 4AM, So my queries are:

1. What should be perform on the table Vacuum or Vacuum full ?


The documentation *clearly states* the difference between VACUUM and VACUUM 
FULL.


https://www.postgresql.org/docs/9.6/sql-vacuum.html


2. Do we need to perform Analyze also?


I always do.


3. Will the operation be completed in the given time frame? how to
check the same.


How in the heck do we know your system's hardware configuration?


4. Who acquire lock on table vacuum or vacuum full.


Read the docs.


5. If the activity goes beyond the time frame, do we have any option to
continue doing it without acquiring a lock on the table?

If you also need the structure of the table, Please let me know.

Please help me by responding my query wise.



--
Angular momentum makes the world go 'round.




Re: pg_upgrade from 12 to 13 failes with plpython2

2020-11-18 Thread Magnus Hagander
On Wed, Nov 18, 2020 at 8:11 AM Marcin Giedz  wrote:
>
> but my question still remains the same - what causes pg_upgrade failure - are 
> functions the reason? what I did was to delete these 2 rows from 
> pg_pltemplate as I thought this may help:
>
> postgres=# delete from pg_pltemplate where tmplname = 'plpython2u';
> DELETE 1
> postgres=# delete from pg_pltemplate where tmplname = 'plpythonu';
> DELETE 1
>
>
> but pg_upgrade still complains about plpython2:
>
> cat loadable_libraries.txt
> could not load library "$libdir/plpython2": ERROR:  could not access file 
> "$libdir/plpython2": No such file or directory
> In database: alaxx
> In database: template1


It's not the template that's interesting, it's the language itself you
need to drop. Log into each database and try to do that, and you will
get something like this if you still have functions using it:
postgres=# DROP LANGUAGE plpython2u;
ERROR:  cannot drop language plpython2u because other objects depend on it
DETAIL:  function testfunc() depends on language plpython2u
HINT:  Use DROP ... CASCADE to drop the dependent objects too.

If you have no functions using it, it will just go away, and once you
have dropped it in both databases you should be good to go.

And of course, if there are functions depending on it, you should
rebuild those on plpython3u before you drop plpython2u (or drop the
functions if they're not in use).
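
A hedged sketch of that rebuild step, reusing the hypothetical testfunc()
from the error message above:

    -- recreate the function under Python 3 first ...
    DROP FUNCTION testfunc();
    CREATE FUNCTION testfunc() RETURNS text AS $$
        return "hello from python 3"   -- illustrative body
    $$ LANGUAGE plpython3u;

    -- ... then the old language drops cleanly
    DROP LANGUAGE plpython2u;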

-- 
 Magnus Hagander
 Me: https://www.hagander.net/
 Work: https://www.redpill-linpro.com/




Re: Postgresql13-devel fails to install on centos 7

2020-11-18 Thread Devrim Gündüz

Hi,

On Wed, 2020-11-18 at 13:45 +0530, Hemil Ruparel wrote:
> Had to run this command:
> 
>  yum install
> https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
>  yum install llvm5.0-devel
> 
> And then it worked :)

Updated website to clarify that EPEL is needed for -devel RPMs as well:

https://yum.postgresql.org/news/devel-rpms-require-a-new-repository/

Regards,
-- 
Devrim Gündüz
Open Source Solution Architect, Red Hat Certified Engineer
Twitter: @DevrimGunduz , @DevrimGunduzTR




Postgres on Kubernetes/VMware

2020-11-18 Thread George Sexton

Everyone,

 I’ve run into an issue that’s got me stumped and I would be really 
grateful for any ideas. We’re deploying Postgres 11.8 on Kubernetes 
1.17.9. The nodes are running RedHat EL 7.9 with the latest kernel for 
that distribution.


 We create a Kubernetes pod that runs Postgres. It’s set up with /pgdata 
going to a PersistentVolumeClaim that’s thin provisioned with a size set 
at 500GB. Initially, the allocated size of the PersistentVolumeClaim’s 
corresponding disk file (VMware .VMDK) is around 12GB. However, the size 
of the .VMDK keeps increasing at roughly 150-300MB/Hour even though the 
size of the /pgdata folder as shown by df is stable around 230MB. To try 
to troubleshoot this, I set up Postgres to put the pg_wal folder on a 
different partition. That didn’t make any difference.


 Replication is turned on for this Postgres instance, and it was 
observed that the replica’s .VMDK file grew at a rate of 25MB/Hour.


 I’ve checked to make sure that the partition is mounted with the 
DISCARD option. I created a test program that would allocate disk space, 
and then free it. I confirmed that when DISCARD is used for mount, 
VMware will reclaim the space as expected. For example, if the .VMDK 
grows to 20GB during a test, it shrinks back down to some size like 1GB 
when the test completes.


 To try to sort this out, I’ve done the following:

1. Written the test program and confirmed that on the ESXI Host/VCenter
   space reclamation works.
2. Moved the pg_wal directory to another partition.
3. Changed Auto-Vacuum frequency to run less often (12 hours).
4. Executed “vacuum full”, followed by fstrim  from the
   Kubernetes node. This freed perhaps 12 MB from the /pgdata folder,
   and around 19MB from the .VMDK size.
5. Checked the /proc/fd descriptors for the running connections and
   confirmed no unusual temp files are opened.
6. Confirmed the replication is working as expected.
7. Set archive_mode off to eliminate that as a source of noise.

 Does anyone have any ideas about what’s causing this? Is there 
anything unusual about how Postgres allocates temporary data files or 
frees them? I’m really just grasping. Thanks for looking!


George


vacuum vs vacuum full

2020-11-18 Thread Atul Kumar
Hi,

We have a table of 3113GB, and we are planning to vacuum it in non
business hours i.e. 12AM to 4AM, So my queries are:

1. What should be perform on the table Vacuum or Vacuum full ?
2. Do we need to perform Analyze also?
3. Will the operation be completed in the given time frame? how to
check the same.
4. Who acquire lock on table vacuum or vacuum full.
5. If the activity goes beyond the time frame, do we have any option to
continue doing it without acquiring a lock on the table?

If you also need the structure of the table, Please let me know.

Please help me by responding my query wise.



Regards,
Atul




Re: Postgresql13-devel fails to install on centos 7

2020-11-18 Thread Hemil Ruparel
Had to run this command:

 yum install 
https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
 yum install llvm5.0-devel

And then it worked :)


On Wed, Nov 18, 2020 at 1:26 PM Hemil Ruparel 
wrote:

> I am trying to install postgres13-devel on a centos 7 server. But I am
> getting this error message:
> Error: Package: postgresql13-devel-13.1-1PGDG.rhel7.x86_64 (pgdg13)
>Requires: llvm5.0-devel >= 5.0
>
> After a bit of googling, and following these instructions:
>
> $ sudo yum install centos-release-scl
> $ sudo yum install llvm-toolset-7
> $ scl enable llvm-toolset-7 bash
>
> But I am still getting the same error message.
>
>