Re: WAL File Recovery on Standby Server Stops Before End of WAL Files

2021-10-27 Thread Dilip Kumar
On Thu, Oct 28, 2021 at 7:28 AM Kyotaro Horiguchi
 wrote:
>
> At Wed, 27 Oct 2021 16:42:52 +, "Ryan, Les"  wrote in
> > 2021-10-27 10:26:31.467 MDT [2012] LOG:  redo starts at 419/5229A858
> ...
> > 2021-10-27 10:26:36.188 MDT [2012] LOG:  restored log file 
> > "00010419005A" from archive
> > 2021-10-27 10:26:36.750 MDT [2012] LOG:  consistent recovery state reached 
> > at 419/5AB8
> > 2021-10-27 10:26:36.752 MDT [6204] LOG:  database system is ready to accept 
> > read only connections
> > 2021-10-27 10:26:36.823 MDT [6040] LOG:  started streaming WAL from primary 
> > at 419/5A00 on timeline 1
> >
> >   *   There are many more WAL files available starting with 
> > 00010419005B but the restore process always stops at 
> > 00010419005A.
> >
> > I have two questions:
> >
> >   *   Why does the WAL file recovery process now stop after it reads 
> > 00010419005A?
> >   *   What do I need to do to get PostgreSQL to recover the rest of the 
> > available WAL files?
>
> The info above alone doesn't clearly suggest anything about the
> reason. Could you show the result of "pg_waldump
> 00010419005A 2>&1 | tail -5"?  What I'm expecting to see
> is an error message from pg_waldump before the end of the file. It
> would be the immediate cause of the problem.

+1, that will be the best place to start.  Additionally, you can
enable DEBUG2 messages so that the logs show why it could
not continue recovery from the archive.
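
For example, one way to raise the log level on the standby (a sketch,
assuming superuser access; setting log_min_messages = 'DEBUG2' in
postgresql.conf and reloading works just as well):

  ALTER SYSTEM SET log_min_messages = 'DEBUG2';
  SELECT pg_reload_conf();
  -- once the failing restore has been captured in the log:
  ALTER SYSTEM RESET log_min_messages;
  SELECT pg_reload_conf();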

-- 
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com




Re: WAL File Recovery on Standby Server Stops Before End of WAL Files

2021-10-27 Thread Kyotaro Horiguchi
At Wed, 27 Oct 2021 16:42:52 +, "Ryan, Les"  wrote in 
> 2021-10-27 10:26:31.467 MDT [2012] LOG:  redo starts at 419/5229A858
...
> 2021-10-27 10:26:36.188 MDT [2012] LOG:  restored log file 
> "00010419005A" from archive
> 2021-10-27 10:26:36.750 MDT [2012] LOG:  consistent recovery state reached at 
> 419/5AB8
> 2021-10-27 10:26:36.752 MDT [6204] LOG:  database system is ready to accept 
> read only connections
> 2021-10-27 10:26:36.823 MDT [6040] LOG:  started streaming WAL from primary 
> at 419/5A00 on timeline 1
> 
>   *   There are many more WAL files available starting with 
> 00010419005B but the restore process always stops at 
> 00010419005A.
> 
> I have two questions:
> 
>   *   Why does the WAL file recovery process now stop after it reads 
> 00010419005A?
>   *   What do I need to do to get PostgreSQL to recover the rest of the 
> available WAL files?

The info above alone doesn't clearly suggest anything about the
reason. Could you show the result of "pg_waldump
00010419005A 2>&1 | tail -5"?  What I'm expecting to see
is an error message from pg_waldump before the end of the file. It
would be the immediate cause of the problem.
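
Before reaching for pg_waldump, these queries on the standby can also help
confirm exactly where replay stopped (a sketch; both functions exist in
PostgreSQL 13):

  SELECT pg_is_in_recovery();
  SELECT pg_last_wal_replay_lsn(), pg_last_wal_receive_lsn();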

regards.

-- 
Kyotaro Horiguchi
NTT Open Source Software Center




Re: jsonb: unwrapping text

2021-10-27 Thread David G. Johnston
On Wed, Oct 27, 2021 at 11:58 AM  wrote:

>
> I've found out that one can treat a string singleton as if it
> were an array:
>
>   foo=# select '"foo"'::jsonb ->> 0;
>?column?
>   --
>foo
>   (1 row)
>
> which conveniently returns the right type. My question: can I rely
> on that, or am I missing a much more obvious option?
>
>
Not sure if this exact behavior is trustworthy - but you are on the right
path. Place the value into either a json array or json object and then use
the text versions of the accessor methods to get the json value to pass
through the decoding routine.
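
For instance, a sketch of that wrap-then-extract approach (both forms return
text rather than quoted json; jsonb_build_array/jsonb_build_object are
available in 9.5 and later):

  foo=# select jsonb_build_array('"foo"'::jsonb) ->> 0;
   ?column?
  ----------
   foo
  (1 row)

  foo=# select jsonb_build_object('v', '"foo"'::jsonb) ->> 'v';
   ?column?
  ----------
   foo
  (1 row)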

David J.


Re: Segmentation fault in volatile c function

2021-10-27 Thread Louise Grandjonc
Thank you! I used that. The segmentation fault came from later code in my 
hook. But that helped.

> On Oct 27, 2021, at 12:47 PM, Peter Geoghegan  wrote:
> 
> On Wed, Oct 27, 2021 at 12:39 PM Louise Grandjonc
>  wrote:
>> I'm creating my first ever extension. The function that I'm trying to write 
>> takes the schema and name of a table, and adds it in a table with all the 
>> tables the extension follows.
>> In that table I want the schema, name and oid of the table.
>> 
>> I created a C function for that. Here is the code
> 
> I don't think that using cstring as an argument is appropriate here.
> That's only used for "internal" functions that are called through C,
> rather than being called through SQL.
> 
> It would probably be better to use a regclass argument instead. Take a
> look at the example of (say) amcheck's bt_index_check() or
> verify_heapam() functions to see how that's done. Not all C functions
> use this built-in way of specifying a particular relation from SQL (a
> few older ones really do just pass a string some other way), but it's
> definitely considered the way to go.
> 
> Note that you can do stuff like this from SQL:
> 
> pg@regression:5432 [2999218]=# select 'pg_class'::regclass::oid;
>  oid
> ───
> 1,259
> (1 row)
> 
> -- 
> Peter Geoghegan





Re: psql could not connect to server after complete reinstallation [Ubuntu 21.10 / PostgreSQL 13]

2021-10-27 Thread Adrian Klaver

On 10/27/21 12:51, Jérémy Garniaux wrote:

Adrian,

On 27/10/2021 at 19:30, Adrian Klaver wrote:



While you are at it:

sudo pg_dropcluster 12 main

to get rid of the orphaned cluster.


Great, thanks for your help. Everything is back in order now.

I found out about Pg clusters in the process!


Just to be clear the commands I showed are specific to the Debian/Ubuntu 
Postgres packaging.




Best regards,

Jeremy




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: psql could not connect to server after complete reinstallation [Ubuntu 21.10 / PostgreSQL 13]

2021-10-27 Thread Jérémy Garniaux

Adrian,

On 27/10/2021 at 19:30, Adrian Klaver wrote:


Your Postgres 13 cluster is up and listening on 5433.

Your choices:

1) Connect using psql -d  -U  -p 5433


2) Open:

sudo vi /etc/postgresql/13/main/postgresql.conf

and change

port = 5433

to

port = 5432

and then restart server.


While you are at it:

sudo pg_dropcluster 12 main

to get rid of the orphaned cluster.


Great, thanks for your help. Everything is back in order now.

I found out about Pg clusters in the process!

Best regards,

Jeremy





Re: Segmentation fault in volatile c function

2021-10-27 Thread Peter Geoghegan
On Wed, Oct 27, 2021 at 12:39 PM Louise Grandjonc
 wrote:
> I'm creating my first ever extension. The function that I'm trying to write 
> takes the schema and name of a table, and adds it in a table with all the 
> tables the extension follows.
> In that table I want the schema, name and oid of the table.
>
> I created a C function for that. Here is the code

I don't think that using cstring as an argument is appropriate here.
That's only used for "internal" functions that are called through C,
rather than being called through SQL.

It would probably be better to use a regclass argument instead. Take a
look at the example of (say) amcheck's bt_index_check() or
verify_heapam() functions to see how that's done. Not all C functions
use this built-in way of specifying a particular relation from SQL (a
few older ones really do just pass a string some other way), but it's
definitely considered the way to go.

Note that you can do stuff like this from SQL:

pg@regression:5432 [2999218]=# select 'pg_class'::regclass::oid;
  oid
───
 1,259
(1 row)
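
As a sketch, the SQL-level declaration could then look something like this
(the mygreat names are taken from the original post in this thread; on the
C side the argument would be fetched with PG_GETARG_OID()):

CREATE OR REPLACE FUNCTION mygreat.add_mygreat_table(target regclass)
  RETURNS bool
  LANGUAGE C STRICT VOLATILE
AS '$libdir/mygreat', $$add_mygreat_table$$;

-- callers pass a table name and the regclass cast resolves it to an OID:
SELECT mygreat.add_mygreat_table('public.sampledata2');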

-- 
Peter Geoghegan




Segmentation fault in volatile c function

2021-10-27 Thread Louise Grandjonc
Hi,

I'm creating my first ever extension. The function that I'm trying to write 
takes the schema and name of a table, and adds it to a table with all the 
tables the extension follows.
In that table I want the schema, name and oid of the table.

I created a C function for that. Here is the code

```
Datum add_mygreat_table(PG_FUNCTION_ARGS);

PG_FUNCTION_INFO_V1(add_mygreat_table);

Datum add_mygreat_table(PG_FUNCTION_ARGS) {
  char *tableSchema,
*tableName;

  Oid tableNamespace,
relationId;

  tableSchema = PG_GETARG_CSTRING(0);
  tableName = PG_GETARG_CSTRING(1);

  tableNamespace = get_namespace_oid(tableSchema, false);
  relationId = get_relname_relid(tableName, tableNamespace);

  if (relationId == NULL)
  {
ereport(ERROR, (errmsg("could not create remote table: "
   "relation does not exist")));
  }

  Datum values[3];
  bool isNulls[3];
  memset(isNulls, false, sizeof(isNulls));

  values[0] = CStringGetTextDatum(tableSchema);
  values[1] = CStringGetTextDatum(tableName);
  values[2] = ObjectIdGetDatum(relationId);

  Relation greatTable = table_open(MyGreatTablesRelationId(), RowExclusiveLock);

  HeapTuple tuple = heap_form_tuple(RelationGetDescr(greatTable), values, 
isNulls);

  /*
   * CatalogTupleInsert() is originally for PostgreSQL's catalog. However,
   * it is used at here for convenience.
   */
  CatalogTupleInsert(greatTable, tuple);

  table_close(greatTable, RowExclusiveLock);
  heap_freetuple(tuple);

  CommandCounterIncrement();

  ereport(NOTICE,
  (errmsg("Table %s is now a great table", tableName)));

  PG_RETURN_BOOL(true);
}
```


And in the sql I have

```
CREATE OR REPLACE FUNCTION mygreat.add_mygreat_table(cstring, cstring)
  RETURNS bool
  LANGUAGE C STRICT IMMUTABLE
AS '$libdir/mygreat',
$$add_mygreat_table$$;
```


Here is my problem, it works as expected when the table doesn't exist.

```
mygreat_test=# create extension mygreat;

CREATE EXTENSION
mygreat_test=# select mygreat.add_mygreat_table('public', 'sampledata3');
ERROR:  could not create remote table: relation does not exist
```

However if the table exists it just completely crashes

```
mygreat_test=# select mygreat.add_mygreat_table('public', 'sampledata2');
NOTICE:  Table sampledata2 is now a great table
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.
```

I figured that it was because of the `IMMUTABLE` as the function does modify 
the database. And according to documentation

```
IMMUTABLE indicates that the function cannot modify the database and always 
returns the same result when given the same argument values; that is, it does 
not do database lookups or otherwise use information not directly present in 
its argument list.
```

So I tried creating it as a `VOLATILE` function.


```
CREATE OR REPLACE FUNCTION mygreat.add_mygreat_table(cstring, cstring)
  RETURNS bool
  LANGUAGE C STRICT VOLATILE
AS '$libdir/mygreat',
$$add_mygreat_table$$;
CREATE FUNCTION
```

But now it just always crashes, and what used to return the proper error 
message when run against a table that doesn't exist now returns

```
mygreat_test=# select mygreat.add_mygreat_table('public', 'sampledata3');
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.
```

In the logs, for the latest, I get

```
2021-10-27 12:28:02.712 PDT [44115] LOG:  statement: select 
mygreat.add_mygreat_table('public', 'sampledata3');
2021-10-27 12:28:02.718 PDT [43936] LOG:  server process (PID 44115) was 
terminated by signal 11: Segmentation fault: 11
2021-10-27 12:28:02.718 PDT [43936] DETAIL:  Failed process was running: select 
mygreat.add_mygreat_table('public', 'sampledata3');
2021-10-27 12:28:02.719 PDT [43936] LOG:  terminating any other active server 
processes
```

What am I doing wrong? Can anyone help me with this?

Thank you!





jsonb: unwrapping text

2021-10-27 Thread tomas
Hi,

I'm trying to extract text from a jsonb 'string'. Simply casting
gives me the string wrapped in quotes:

  foo=# select '"foo"'::jsonb::text;
   text  
  ---
   "foo"
  (1 row)

This, of course, makes kind of sense, since it /is/ the JSON's
textual representation.

What is the canonical way to get at the string's content itself?

I've found out that one can treat a string singleton as if it
were an array:

  foo=# select '"foo"'::jsonb ->> 0;
   ?column? 
  --
   foo
  (1 row)

which conveniently returns the right type. My question: can I rely
on that, or am I missing a much more obvious option?

(I try to stay compatible down to PostgreSQL 11, or even 9 when possible.
Conservative customers and all that.)

Thanks for any insights, cheers
-- tomás


signature.asc
Description: Digital signature


Re: Strange sequences - how to construct?

2021-10-27 Thread SQL Padawan



> --- example: lets get 3 sequences
>
> select next_sequence(), next_sequence(), next_sequence();
> --- inspect the table to see what happened
> select * from my_sequence;


Thanks for your input on this issue.

SQLP


Sent with ProtonMail Secure Email.




Re: psql could not connect to server after complete reinstallation [Ubuntu 21.10 / PostgreSQL 13]

2021-10-27 Thread Adrian Klaver

On 10/27/21 9:01 AM, Jérémy Garniaux wrote:

Hi Adrian,

Thanks for answering on both sides!





Same response as to your SO question:

What is the output of pg_lsclusters?
Ver Cluster Port Status                Owner    Data directory              Log file
12  main    5432 down,binaries_missing postgres /var/lib/postgresql/12/main /var/log/postgresql/postgresql-12-main.log
13  main    5433 online                postgres /var/lib/postgresql/13/main /var/log/postgresql/postgresql-13-main.log





Your Postgres 13 cluster is up and listening on 5433.

Your choices:

1) Connect using psql -d  -U  -p 5433


2) Open:

sudo vi /etc/postgresql/13/main/postgresql.conf

and change

port = 5433

to

port = 5432

and then restart server.


While you are at it:

sudo pg_dropcluster 12 main

to get rid of the orphaned cluster.
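
Either way, once you can connect, a quick sanity check of which cluster you
are on (a sketch; run from psql):

SHOW port;
SHOW data_directory;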

--
Adrian Klaver
adrian.kla...@aklaver.com




WAL File Recovery on Standby Server Stops Before End of WAL Files

2021-10-27 Thread Ryan, Les
Hello,

I'm hoping to get some suggestions on what to do here.  I am running PostgreSQL 
version 13.2 and am shipping the WAL files to a standby server.  Once a day I 
restart the standby server and it recovers the new WAL files that have been 
shipped to it.  Everything was working great until yesterday.  My problem is 
that the WAL recovery is now stopping before it recovers all of the available 
WAL files.  This happened once before and the only way I could get the WAL 
recovery to go past that file was to create a fresh backup and restore that.  I'm 
hoping to avoid that as it takes about a week to create the backup.

Here are the specifics:

  *   PostgreSQL version 13.2
  *   The primary server creates the WAL files and a scheduled process copies 
them to a folder on the standby server.
  *   Once a day, the standby server is restarted using the following command:
 *   "C:\Program Files\PostgreSQL\13\bin\pg_ctl" restart -D .\
  *   The log contains the following:
2021-10-27 10:26:30.508 MDT [6204] LOG:  starting PostgreSQL 13.2, compiled by 
Visual C++ build 1914, 64-bit
2021-10-27 10:26:30.509 MDT [6204] LOG:  listening on IPv6 address "::", port 
5432
2021-10-27 10:26:30.510 MDT [6204] LOG:  listening on IPv4 address "0.0.0.0", 
port 5432
2021-10-27 10:26:30.585 MDT [2012] LOG:  database system was shut down in 
recovery at 2021-10-27 10:26:29 MDT
2021-10-27 10:26:30.701 MDT [2012] LOG:  entering standby mode
2021-10-27 10:26:30.821 MDT [2012] LOG:  restored log file 
"000104190056" from archive
2021-10-27 10:26:31.158 MDT [2012] LOG:  restored log file 
"000104190052" from archive
2021-10-27 10:26:31.467 MDT [2012] LOG:  redo starts at 419/5229A858
2021-10-27 10:26:31.561 MDT [2012] LOG:  restored log file 
"000104190053" from archive
2021-10-27 10:26:32.108 MDT [2012] LOG:  restored log file 
"000104190054" from archive
2021-10-27 10:26:32.849 MDT [2012] LOG:  restored log file 
"000104190055" from archive
2021-10-27 10:26:33.612 MDT [2012] LOG:  restored log file 
"000104190056" from archive
2021-10-27 10:26:34.342 MDT [2012] LOG:  restored log file 
"000104190057" from archive
2021-10-27 10:26:35.146 MDT [2012] LOG:  restored log file 
"000104190058" from archive
2021-10-27 10:26:35.718 MDT [2012] LOG:  restored log file 
"000104190059" from archive
2021-10-27 10:26:36.188 MDT [2012] LOG:  restored log file 
"00010419005A" from archive
2021-10-27 10:26:36.750 MDT [2012] LOG:  consistent recovery state reached at 
419/5AB8
2021-10-27 10:26:36.752 MDT [6204] LOG:  database system is ready to accept 
read only connections
2021-10-27 10:26:36.823 MDT [6040] LOG:  started streaming WAL from primary at 
419/5A00 on timeline 1

  *   There are many more WAL files available starting with 
00010419005B but the restore process always stops at 
00010419005A.

I have two questions:

  *   Why does the WAL file recovery process now stop after it reads 
00010419005A?
  *   What do I need to do to get PostgreSQL to recover the rest of the 
available WAL files?

Thanks in advance for any suggestions.

Sincerely,
-Les


Les Ryan, P.Eng | WSP

SCADA Engineer
Energy, Resources & Industry

T +1 403-813-6327
E les.r...@wsp.com
O 405 18 St SE. Calgary, Alberta T2E 6J5






Re: psql could not connect to server after complete reinstallation [Ubuntu 21.10 / PostgreSQL 13]

2021-10-27 Thread Jérémy Garniaux

Hi Adrian,

Thanks for answering on both sides!

On 27/10/2021 at 17:58, Adrian Klaver wrote:

On 10/27/21 08:51, Jérémy Garniaux wrote:

Hi,

After experiencing issues related to the existence of two different 
PostgreSQL versions installed on the same machine, I decided to do a 
"remove --purge postgresql*" and reinstall postgresql-13. Now the 
output of "psql --version" is "psql (PostgreSQL) 13.4 (Ubuntu 
13.4-1)". However, trying to enter psql, I have this error:


psql: error: could not connect to server: Aucun fichier ou dossier de 
ce type

 Is the server running locally and accepting
 connections on Unix domain socket 
"/var/run/postgresql/.s.PGSQL.5432"?


I assume this has been asked many times, but I tried different 
solutions proposed on similar posts and none solved my issue.


Same response as to your SO question:

What is the output of pg_lsclusters?
Ver Cluster Port Status                Owner    Data directory              Log file
12  main    5432 down,binaries_missing postgres /var/lib/postgresql/12/main /var/log/postgresql/postgresql-12-main.log
13  main    5433 online                postgres /var/lib/postgresql/13/main /var/log/postgresql/postgresql-13-main.log





Re: psql could not connect to server after complete reinstallation [Ubuntu 21.10 / PostgreSQL 13]

2021-10-27 Thread Adrian Klaver

On 10/27/21 08:51, Jérémy Garniaux wrote:

Hi,

After experiencing issues related to the existence of two different 
PostgreSQL versions installed on the same machine, I decided to do a 
"remove --purge postgresql*" and reinstall postgresql-13. Now the output 
of "psql --version" is "psql (PostgreSQL) 13.4 (Ubuntu 13.4-1)". 
However, trying to enter psql, I have this error:


psql: error: could not connect to server: Aucun fichier ou dossier de ce 
type

     Is the server running locally and accepting
     connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?

I assume this has been asked many times, but I tried different solutions 
proposed on similar posts and none solved my issue.


Same response as to your SO question:

What is the output of pg_lsclusters?



Thanks,

Best regards

Jeremy






--
Adrian Klaver
adrian.kla...@aklaver.com




psql could not connect to server after complete reinstallation [Ubuntu 21.10 / PostgreSQL 13]

2021-10-27 Thread Jérémy Garniaux

Hi,

After experiencing issues related to the existence of two different 
PostgreSQL versions installed on the same machine, I decided to do a 
"remove --purge postgresql*" and reinstall postgresql-13. Now the output 
of "psql --version" is "psql (PostgreSQL) 13.4 (Ubuntu 13.4-1)". 
However, trying to enter psql, I have this error:


psql: error: could not connect to server: Aucun fichier ou dossier de ce 
type ("No such file or directory")

    Is the server running locally and accepting
    connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?

I assume this has been asked many times, but I tried different solutions 
proposed on similar posts and none solved my issue.


Thanks,

Best regards

Jeremy





Re: ZFS filesystem - supported ?

2021-10-27 Thread Laurenz Albe
On Wed, 2021-10-27 at 11:21 +1300, Lucas wrote:
> The snapshots are done this way:
> 1. Grab the latest applied WAL File for further references, stores that in a 
> variable in Bash
> 2. Stop the Postgres process
> 3. Check it is stopped
> 4. Start the Block level EBS Snapshot process
> 5. Applied tags to the snapshot, such as the WAL file from Step #1
> 6. Wait for snapshot to complete, querying the AWS API for that
> 7. Start PG
> 8. Check it is started
> 9. Check it is replicating from master

If you shut down the server cleanly, that backup is good and does not
need recovery to restore.

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





Re: pg_dump and pg_restore and foreign keys

2021-10-27 Thread Tore Halvorsen
Then I'll try that, thank you :)

On Wed, Oct 27, 2021 at 4:04 PM Tom Lane  wrote:

> Tore Halvorsen  writes:
> > That would be appending it for "pg_catalog.pg_get_constraintdef(oid) AS
> > condef"  in getConstraints in pg_dump.c?
>
> No, you want to mess with the text printed by dumpConstraint().
>
> regards, tom lane
>


-- 
Eld på åren og sol på eng gjer mannen fegen og fjåg. [Jøtul]
 Tore Halvorsen || +052 0553034554


Re: Determining if a table really changed in a trigger

2021-10-27 Thread Mitar
Hi!

On Wed, Oct 27, 2021 at 12:56 PM Marcos Pegoraro  wrote:
>> Oh, very interesting. I thought that this is not possible because WHEN
>> condition on triggers does not have NEW and OLD. But this is a very
>> cool way to combine rules with triggers, where a rule can still
>> operate by row.
>
> That is not true

Sorry to be imprecise. In this thread I am interested in statement
triggers, so I didn't mention this explicitly here. So statement
triggers do not have NEW and OLD. But you can combine it with a
row-level rule and then this works well together.


Mitar

-- 
http://mitar.tnode.com/
https://twitter.com/mitar_m




Re: pg_dump and pg_restore and foreign keys

2021-10-27 Thread Tom Lane
Tore Halvorsen  writes:
> That would be appending it for "pg_catalog.pg_get_constraintdef(oid) AS
> condef"  in getConstraints in pg_dump.c?

No, you want to mess with the text printed by dumpConstraint().

regards, tom lane




Re: pg_dump and pg_restore and foreign keys

2021-10-27 Thread Tore Halvorsen
That would be appending it for "pg_catalog.pg_get_constraintdef(oid) AS
condef"  in getConstraints in pg_dump.c?

On Wed, Oct 27, 2021 at 3:27 PM Tom Lane  wrote:

> Tore Halvorsen  writes:
> > I'm trying to restore a database where adding foreign key constraints
> takes
> > most of the time. Does there exist a simple way to make either pg_dump or
> > pg_restore handle them as "not valid", and defer the validation.
>
> No.  It's kind of a neat idea perhaps, but it's not there.  You could
>
> * use parallel pg_restore to alleviate the pain, or
>
> * use pg_restore's -l and -L switches to strip out the FKs altogether,
> and then re-add them manually afterwards.
>
> Or there's always
>
> * hack the pg_dump source code to include NOT VALID.  While a real
> feature patch that made this optional would be a bit complicated,
> doing it unconditionally should be a one-line change.
>
> regards, tom lane
>


-- 
Eld på åren og sol på eng gjer mannen fegen og fjåg. [Jøtul]
 Tore Halvorsen || +052 0553034554


Re: pg_dump and pg_restore and foreign keys

2021-10-27 Thread Tom Lane
Tore Halvorsen  writes:
> I'm trying to restore a database where adding foreign key constraints takes
> most of the time. Does there exist a simple way to make either pg_dump or
> pg_restore handle them as "not valid", and defer the validation.

No.  It's kind of a neat idea perhaps, but it's not there.  You could

* use parallel pg_restore to alleviate the pain, or

* use pg_restore's -l and -L switches to strip out the FKs altogether,
and then re-add them manually afterwards.

Or there's always

* hack the pg_dump source code to include NOT VALID.  While a real
feature patch that made this optional would be a bit complicated,
doing it unconditionally should be a one-line change.
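
For reference, a sketch of what the manual re-add in the second option could
look like (table and constraint names here are illustrative):

ALTER TABLE child
  ADD CONSTRAINT child_parent_id_fkey
  FOREIGN KEY (parent_id) REFERENCES parent (id) NOT VALID;  -- no scan of existing rows

-- later, at a convenient time; this scans the table, but the constraint is
-- already enforced for newly inserted or updated rows in the meantime:
ALTER TABLE child VALIDATE CONSTRAINT child_parent_id_fkey;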

regards, tom lane




Re: Python3 for PostgreSQL 14

2021-10-27 Thread Magnus Hagander
On Sat, Oct 23, 2021 at 10:40 PM Adrian Klaver 
wrote:

> On 10/23/21 13:00, Дмитрий Иванов wrote:
> > Yes.
> > Python extension issues occurred on Windows Server 2012 and Windows 10
> > Pro. Experimentally, when installing version 3.7.4, everything worked.
>
> If you used the EDB installer for the Windows install of Postgres then
> you need to use their Language Pack to match up with what the plpython
> expects.
>
> > This is my first time deploying Debian, and since I didn't want to
> > deploy multiple versions, I asked a question. I have installed Python
> > 3.10 x64. when faced with a lack of information on installing an
> > extension pack, it would be a good idea to highlight this in the help.
> >
> > https://debian.pkgs.org/11/postgresql-amd64/postgresql-plpython3-14_14.0-1.pgdg110+1_amd64.deb.html :
>
> These are the Debian project packages, not something the Postgres project
> controls.
>

It is correct that the Postgres project itself does not, but the Postgres
Debian packaging team produces both the packages for Debian and those for
apt.postgresql.org, so it's still the same people doing the work.

So there's actually a lot more crossover there than with, for example, the
EDB-provided packages, where the team is completely independent.

-- 
 Magnus Hagander
 Me: https://www.hagander.net/ 
 Work: https://www.redpill-linpro.com/ 


pg_dump and pg_restore and foreign keys

2021-10-27 Thread Tore Halvorsen
Hi,

I hope this is the correct place for this question.

I'm trying to restore a database where adding foreign key constraints takes
most of the time. Does there exist a simple way to make either pg_dump or
pg_restore handle them as "not valid", and defer the validation.


-- 
Eld på åren og sol på eng gjer mannen fegen og fjåg. [Jøtul]
 Tore Halvorsen || +052 0553034554


Re: Determining if a table really changed in a trigger

2021-10-27 Thread Marcos Pegoraro
>
> Oh, very interesting. I thought that this is not possible because WHEN
> condition on triggers does not have NEW and OLD. But this is a very
> cool way to combine rules with triggers, where a rule can still
> operate by row.

That is not true:

create table test(i integer);
create function test_old_new() returns trigger language plpgsql as $$
begin
   raise notice '% - %', old.i, new.i;
   return new;
end;$$;
CREATE TRIGGER testvalue BEFORE UPDATE OF i ON test FOR EACH ROW WHEN
(((new.i)::integer = 5::integer)) EXECUTE PROCEDURE test_old_new();
> insert into test values(4)
1 row affected in 52 ms
> update test set i = 6
1 row affected in 93 ms
> update test set i = 5
6 - 5   <- the NOTICE from test_old_new fired only because new.i = 5
1 row affected in 48 ms

>
>


Re: String comparison fails for some cases after migration

2021-10-27 Thread Jayadevan M
On Wed, Oct 27, 2021 at 11:49 AM Julien Rouhaud  wrote:

>
>
> Most likely you had a different version of the glibc or ICU libraries
> on the new system, which led to your indexes on collatable datatypes
> becoming partially corrupted.  See https://wiki.postgresql.org/wiki/Collations
> for more details.
>
> You can issue a REINDEX for each impacted index, or a database-wide
> REINDEX.
>

Tried creating a new environment with the same approach; just reindexing solved
the problem. Thanks a lot.

Regards,
Jayadevan


Re: Determining if a table really changed in a trigger

2021-10-27 Thread Thomas Kellerer
Mitar wrote on 26.10.2021 at 09:05:
> I would like to test inside trigger_function if the table really
> changed. I have tried to do:
>
> PERFORM * FROM ((TABLE old_table EXCEPT TABLE new_table) UNION ALL
> (TABLE new_table EXCEPT TABLE old_table)) AS differences LIMIT 1;
> IF FOUND THEN
>   ... changed ...
> END IF;
>
> But this fails if the table contains a JSON field with the error:
>
> could not identify an equality operator for type json
>

If you use jsonb (which is recommended over json anyway), then this
would work.

I would probably use a row level trigger instead of a statement level trigger

Then compare the two records using IS DISTINCT FROM


   if new is distinct from old then
  
   end if;
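
A fuller sketch of that row-level approach (assuming jsonb columns, so the
record comparison has the equality operators it needs; the names are
illustrative):

create or replace function skip_unchanged() returns trigger
language plpgsql as $$
begin
   if new is distinct from old then
  -- the row really changed; the actual work goes here
  raise notice 'row changed: %', new;
   end if;
   return new;
end;
$$;

create trigger my_table_changed
   before update on my_table
   for each row execute function skip_unchanged();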

> I want to skip trivial updates (those which have not changed anything).

It seems you might want to use the pre-defined function 
suppress_redundant_updates_trigger()

https://www.postgresql.org/docs/current/functions-trigger.html

Might be faster than a self-written trigger.
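
A sketch of how it is attached (the table name is illustrative; this is the
same built-in trigger that shows up in the test elsewhere in this thread):

create trigger z_min_update
   before update on my_table
   for each row execute function suppress_redundant_updates_trigger();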

Regards
Thomas




Re: Determining if a table really changed in a trigger

2021-10-27 Thread Mitar
Hi!

On Wed, Oct 27, 2021 at 3:56 AM Michael Lewis  wrote:
> If you end up with no rows changing from an insert or delete, something seems 
> awry. Unless you mean 0 rows affected.

Isn't this the same? Isn't the number of rows affected the same as the
number of rows changing? For example:

DELETE FROM my_table where i=100;

would not change anything in your example. But probably this is just
terminology I have used badly.

> Do after statement triggers still execute? I suppose they very well might.

I have run the following and it seems statement triggers still execute
even if nothing changes:

postgres=# create table my_table (i integer, j json);
CREATE TABLE
postgres=# insert into my_table
  select gs::integer, '{"key":1}'::json
from generate_series(1,3) gs;
INSERT 0 3
postgres=# create function my_table_func () returns trigger as $$
declare
  have_rows boolean;
begin
  raise warning 'trigger called';
  if (tg_op = 'INSERT') then
select true into have_rows from new_values limit 1;
if have_rows then
  raise warning 'rows have changed';
end if;
  elsif (tg_op = 'UPDATE' or tg_op = 'DELETE') then
select true into have_rows from old_values limit 1;
if have_rows then
  raise warning 'rows have changed';
end if;
  end if;
  return null;
end
$$ language plpgsql;
CREATE FUNCTION
postgres=# create trigger my_table_trig_insert after insert on my_table
  referencing new table as new_values
  for each statement
  execute function my_table_func();
CREATE TRIGGER
postgres=# create trigger my_table_trig_update after update on my_table
  referencing old table as old_values
  for each statement
  execute function my_table_func();
CREATE TRIGGER
postgres=# create trigger my_table_trig_delete after delete on my_table
  referencing old table as old_values
  for each statement
  execute function my_table_func();
CREATE TRIGGER
postgres=# update my_table set j = '{"key":2}'::jsonb;
WARNING:  trigger called
WARNING:  rows have changed
UPDATE 3
postgres=# update my_table set j = '{"key":2}'::jsonb;
WARNING:  trigger called
WARNING:  rows have changed
UPDATE 3
postgres=# create trigger z_min_update
  before update on my_table
  for each row execute function suppress_redundant_updates_trigger();
CREATE TRIGGER
postgres=# update my_table set j = '{"key":2}'::jsonb;
WARNING:  trigger called
UPDATE 0
postgres=# update my_table set j = '{"key":3}'::jsonb;
WARNING:  trigger called
WARNING:  rows have changed
UPDATE 3
postgres=# delete from my_table where i = 100;
WARNING:  trigger called
DELETE 0
postgres=# insert into my_table select * from my_table where i = 100;
WARNING:  trigger called
INSERT 0 0

> Would the statement even execute if no rows get updated and that is prevented 
> with before update? I would assume null is being returned rather than old if 
> the trigger finds the row to be identical.

It looks like a statement trigger is always called, but the rows visible
through REFERENCING match the affected-row count reported by the psql shell.
Also notice how the number of affected rows is non-zero for a trivial update
before suppress_redundant_updates_trigger is installed, both through
REFERENCING and in the psql shell.

That matches also documentation:

> ..., a trigger that is marked FOR EACH STATEMENT only executes once for any 
> given operation, regardless of how many rows it modifies (in particular, an 
> operation that modifies zero rows will still result in the execution of any 
> applicable FOR EACH STATEMENT triggers).

So it would be really cool to be able to access the number of affected
rows inside a trigger without the use of REFERENCING. Given that the WHEN
condition of a statement trigger is currently mostly useless (because
the condition cannot refer to any values in the table), maybe providing
something like an AFFECTED variable there would be the way to go? So
one could write:

CREATE TRIGGER my_trigger AFTER UPDATE ON my_table FOR EACH STATEMENT
WHEN AFFECTED <> 0 EXECUTE FUNCTION my_table_func();


Mitar

-- 
http://mitar.tnode.com/
https://twitter.com/mitar_m




Re: String comparison fails for some cases after migration

2021-10-27 Thread Tom Lane
Jayadevan M  writes:
> We moved our PostgreSQL database from one hosting provider to another using
> pgbackrest.  In the new environment, some comparison operations  were
> failing.  The issue was fixed by running an update. But I am trying to find
> out what would have happened.

Did the underlying OS version change?  If so, see

https://wiki.postgresql.org/wiki/Locale_data_changes

regards, tom lane




Re: String comparison fails for some cases after migration

2021-10-27 Thread Julien Rouhaud
Hi,

On Wed, Oct 27, 2021 at 2:12 PM Jayadevan M  wrote:
>
> We moved our PostgreSQL database from one hosting provider to another using 
> pgbackrest.  In the new environment, some comparison operations  were 
> failing.  The issue was fixed by running an update. But I am trying to find 
> out what would have happened.
>
> select * from accounts where email = 'someem...@gmail.com'; -- failed for 
> some email ids even though there were records.
>
> select * from accounts where lower(trim(email)) <> email; -- fetched no 
> records.
>
> select * from accounts where email::bytea = 'someem...@gmail.com'::bytea; - 
> worked for those records where comparison was failing.
>
> update accounts set email = trim(lower(email)); -- fixed the issue.
>
> Source database was PG 13.2, target 13.3.
>
> Any suggestions will be appreciated - not sure if other tables/columns are 
> affected.

Most likely you had a different version of the glibc or ICU libraries
on the new system, which led to your indexes on collatable datatypes
becoming partially corrupted.  See https://wiki.postgresql.org/wiki/Collations
for more details.

You can issue a REINDEX for each impacted index, or a database-wide REINDEX.
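
For example (a sketch; the index and database names are illustrative, and
REINDEX has to be run by the owner or a superuser):

REINDEX INDEX accounts_email_idx;   -- hypothetical index on accounts(email)
-- or, more broadly:
REINDEX DATABASE your_database;     -- must be the database you are connected to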




String comparison fails for some cases after migration

2021-10-27 Thread Jayadevan M
Hello all,

We moved our PostgreSQL database from one hosting provider to another using
pgbackrest.  In the new environment, some comparison operations  were
failing.  The issue was fixed by running an update. But I am trying to find
out what would have happened.

select * from accounts where email = 'someem...@gmail.com'; -- failed for
some email ids even though there were records.

select * from accounts where lower(trim(email)) <> email; -- fetched no
records.

select * from accounts where email::bytea = 'someem...@gmail.com'::bytea; -
worked for those records where comparison was failing.

update accounts set email = trim(lower(email)); -- fixed the issue.

Source database was PG 13.2, target 13.3.

Any suggestions will be appreciated - not sure if other tables/columns are
affected.

Best Regards,
Jayadevan