Re: pg_dump and pg_restore and foreign keys

2021-10-28 Thread Tore Halvorsen
In case someone else needs to do this - I ended up with this change in
pg_dump.c around lines 17080-17090

I guess a proper solution would check the already invalid foreign keys in a
different way.

appendPQExpBuffer(q, "ADD CONSTRAINT %s %s%s;\n",
                  fmtId(coninfo->dobj.name),
                  coninfo->condef,
                  dopt->foreign_keys_not_valid &&
                          !strstr(coninfo->condef, "NOT VALID")
                      ? " NOT VALID" : "");



On Wed, Oct 27, 2021 at 4:08 PM Tore Halvorsen 
wrote:

> Then I'll try that, thank you :)
>
> On Wed, Oct 27, 2021 at 4:04 PM Tom Lane  wrote:
>
>> Tore Halvorsen  writes:
>> > That would be appending it for "pg_catalog.pg_get_constraintdef(oid) AS
>> > condef"  in getConstraints in pg_dump.c?
>>
>> No, you want to mess with the text printed by dumpConstraint().
>>
>> regards, tom lane
>>
>
>
> --
> Eld på åren og sol på eng gjer mannen fegen og fjåg. [Jøtul]
>  Tore Halvorsen || +052 0553034554
>


-- 
Eld på åren og sol på eng gjer mannen fegen og fjåg. [Jøtul]
 Tore Halvorsen || +052 0553034554


Re: WAL File Recovery on Standby Server Stops Before End of WAL Files

2021-10-28 Thread Kyotaro Horiguchi
At Thu, 28 Oct 2021 13:52:36 +, "Ryan, Les"  wrote in 
> Hi Kyotaro and Dilip,
> 
> Thank you for getting back to me.
> 
> Kyotaro: I ran pg_waldump and the output was "pg_waldump: fatal: could not read 
> file "00010419005A": read 50 of 8192".  I'm guessing that it 
> means that WAL file 00010419005A is corrupted and that is why the 
> recovery process stops there.  Is there any way to fix the file?

No way, unless the segment is still living in the primary's
pg_wal. Your archive storage is unstable or archive_command is not
performing its task reliably enough.
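A quick way to spot such truncated segments in an archive is to compare file sizes against the WAL segment size (16 MiB by default). A minimal sketch, assuming a local archive directory; the default segment size is an assumption here, so check pg_controldata for your cluster's actual value:

```python
import os

WAL_SEGMENT_SIZE = 16 * 1024 * 1024  # default; verify with pg_controldata

def find_short_segments(archive_dir):
    """Return (name, size) for archived WAL files that are not a full
    segment.  A truncated file, like the one pg_waldump failed to read
    above, usually means archive_command copied it incompletely."""
    short = []
    for name in sorted(os.listdir(archive_dir)):
        path = os.path.join(archive_dir, name)
        # WAL segment file names are exactly 24 hex characters
        if len(name) == 24 and os.path.isfile(path):
            size = os.path.getsize(path)
            if size != WAL_SEGMENT_SIZE:
                short.append((name, size))
    return short
```

Any file this reports should be re-copied from the primary's pg_wal while it still exists there.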

> Dilip:   setting the log level to debug2 did not provide any additional 
> information.  Here are the log entries:
> 
> 2021-10-28 06:51:06.166 MDT [7556] LOG:  restored log file 
> "000104190059" from archive
> 2021-10-28 06:51:06.464 MDT [7556] DEBUG:  got WAL segment from archive
> 2021-10-28 06:51:06.579 MDT [7556] LOG:  restored log file 
> "00010419005A" from archive
> 2021-10-28 06:51:06.854 MDT [7556] DEBUG:  got WAL segment from archive
> 2021-10-28 06:51:07.107 MDT [7556] LOG:  consistent recovery state reached at 
> 419/5AB8
> 2021-10-28 06:51:07.107 MDT [7556] DEBUG:  switched WAL source from archive 
> to stream after failure
> 2021-10-28 06:51:07.109 MDT [7844] LOG:  database system is ready to accept 
> read only connections
> 2021-10-28 06:51:07.152 MDT [7844] DEBUG:  forked new backend, pid=6900 
> socket=6068
> 
> I set the log level to debug5 and here is what I got:
> 
> 2021-10-28 06:25:41.262 MDT [6288] CONTEXT:  WAL redo at 419/5ABFFF60 for 
> Btree/INSERT_LEAF: off 130
> 2021-10-28 06:25:41.262 MDT [6288] DEBUG:  record known xact 33776257 
> latestObservedXid 33776257
> 2021-10-28 06:25:41.262 MDT [6288] CONTEXT:  WAL redo at 419/5ABFFFA0 for 
> Heap/INSERT: off 95 flags 0x00
> 2021-10-28 06:25:41.262 MDT [6288] LOG:  consistent recovery state reached at 
> 419/5AB8
> 2021-10-28 06:25:41.263 MDT [6288] DEBUG:  switched WAL source from archive 
> to stream after failure
> 2021-10-28 06:25:41.264 MDT [5512] LOG:  database system is ready to accept 
> read only connections
> 
> Does the "switched WAL source from archive to stream after failure" indicate 
> a problem with the WAL file?

Actually, the message says that some trouble happened while reading the
file, but that always happens at the end of WAL.  We might need a more
detailed message about the trouble at the higher debug levels.

> Anyway, it looks like I need to restore the standby server from a new backup. 
>  Thank you both for your help.

Unfortunately I think so, too.

> Sincerely,
> -Les

-- 
Kyotaro Horiguchi
NTT Open Source Software Center




Re: database designs ERDs

2021-10-28 Thread Zahid Rahman
 > Something like this maybe?


 > http://databaseanswers.org/data_models/index_all_models.htm


Definitely, exactly this.

Much respect.


Best Regards

Zahid

On Thu, 28 Oct 2021, 15:45 Thomas Kellerer,  wrote:

> Zahid Rahman wrote on 28.10.2021 at 12:58:
> > I am looking for some database designs (ERD) on current and up to date
> > business scenarios for a project.
> >
> > By up to date I am referring to the DVD rental business ERD,
> >
> https://www.postgresqltutorial.com/wp-content/uploads/2018/03/printable-postgresql-sample-database-diagram.pdf
> >
> > Although comprehensive, it is a bit outdated.  I mean BlockBusters was
> > useful in its time.
>
> Something like this maybe?
>
> http://databaseanswers.org/data_models/index_all_models.htm
>
>
>
>


Re: Incremental backup

2021-10-28 Thread Mladen Gogala


On 10/28/21 18:07, Andreas Joseph Krogh wrote:
I think everybody agrees that incremental backup /per database/, and 
not cluster-wide, is nice, and it would be nice if PG supported it. 
But given the way PG is architected, with cluster-wide WALs, that's 
not an easy task to implement.

Repeating "other databases have it" doesn't change that.
--
Andreas Joseph Krogh


I am not advocating for database-level incremental backups, but all 
databases that have them also have cluster-wide/instance-wide WAL logs. 
Cluster-wide WAL logs do not make database-level incremental backups 
hard. Both Oracle and DB2 have database-level incremental backups, and 
both have cluster-wide WAL (redo logs). So does SQL Server, which is a 
bit specific because its log is used both for recovery, as WAL, and for 
undo, so it's very different from Postgres. Also, SQL Server is, to my 
knowledge, the only such database which doesn't use MVCC.


The technology that all of those databases employ is a bitmap with one 
bit per database block. A full backup sets all bits to 0, and whenever 
a block is modified, the corresponding bit is set to 1. The backup tool 
in incremental mode then copies only the blocks whose bit is 1. I am 
not too thrilled by that implementation. In particular, with Oracle 
there were bugs with database restore, the "duplicate database" 
operation, global cache locks and instance latches. I've had quite a 
few headaches with RMAN cumulative incremental backups. My preferred 
backup method is a storage snapshot. Snapshots can then be backed up to 
other arrays (NetApp SnapVault, Hitachi HUR, EMC SRDF) or to 
deduplicated offline storage like AWS Glacier or EMC Data Domain using 
simple file-level utilities. Once a snapshot is taken, it is a 
read-only file system and its files are no longer held open by the 
database processes.
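The bitmap scheme described above can be sketched in a few lines. This is a toy illustration of the idea, not any vendor's actual implementation:

```python
class ChangeTrackedStore:
    """Toy block store with a change-tracking bitmap: a full backup
    clears all bits, each write sets the bit for its block, and an
    incremental backup copies only blocks whose bit is set."""

    def __init__(self, nblocks, block_size=8192):
        self.blocks = [bytes(block_size)] * nblocks
        self.dirty = [False] * nblocks   # one "bit" per block

    def write_block(self, i, data):
        self.blocks[i] = data
        self.dirty[i] = True             # mark block as changed

    def full_backup(self):
        self.dirty = [False] * len(self.blocks)
        return list(self.blocks)         # every block is copied

    def incremental_backup(self):
        changed = {i: b for i, b in enumerate(self.blocks) if self.dirty[i]}
        self.dirty = [False] * len(self.blocks)
        return changed                   # only changed blocks are copied
```

Note that however many times a block is rewritten between backups, it is copied once; that is the space advantage over shipping every WAL record.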


The classic file-level backup tools like pg_basebackup or RMAN are 
pretty much unusable once your database hits 50 TB or so. With a 50 TB 
database, your RTO (Recovery Time Objective) will be measured in days: 
you can only push around 3 TB/hour down a 10 Gbit Ethernet pipe. On the 
other hand, companies like Facebook, Amazon, Walmart or Target lose 
hundreds of thousands of USD per hour of downtime, so downtime of 15 
hours or longer is completely unacceptable. Backup is only the last 
line of defense; it should only be used if everything else fails. And 
if you have to use a file-level tool like pg_basebackup and tar to 
restore your VLDB, you should also get your resume ready.
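The throughput arithmetic above is easy to reproduce. A rough sketch, where the efficiency factor is an assumption (the quoted 3 TB/hour is roughly 10 Gbit/s wire rate at ~70% effective throughput):

```python
def restore_hours(db_terabytes, link_gbit_per_s, efficiency=0.7):
    """Back-of-envelope restore time over a network link.

    10 Gbit/s is 4.5 TB/hour at wire rate; at ~70% effective
    throughput that is roughly the 3 TB/hour quoted above.
    """
    tb_per_hour = link_gbit_per_s / 8 * 3600 / 1000 * efficiency
    return db_terabytes / tb_per_hour
```

For a 50 TB database on a 10 Gbit link this lands around 16 hours just to move the bytes, before any WAL replay.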


Incremental backups are awkward and ungainly. You have to restore the 
full backup and all incrementals taken since it, and after that you 
must apply the remaining WAL files. So, by definition, restoring an 
incremental backup takes longer than restoring a full backup. If you 
have to take backups, get yourself a deduplicating backup utility. 
Deduplication stores only the backup blocks that differ from the 
previous backup; in other words, you can do a daily full backup with 
the space expenditure of an incremental backup. Also, if you need 
backups for regulatory reasons (HIPAA, SOX), incremental backups will 
not do.
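The content-based deduplication idea can be sketched like this. The hash-set protocol below is a deliberate simplification of what appliances like Data Domain actually do:

```python
import hashlib

def dedup_backup(blocks, seen_hashes):
    """Store only blocks whose content the backup target has not seen.

    'seen_hashes' is the set of content hashes already held by the
    target, so a daily full backup costs only the space of the blocks
    that actually changed, as described above.
    """
    stored = {}
    for i, block in enumerate(blocks):
        h = hashlib.sha256(block).hexdigest()
        if h not in seen_hashes:
            seen_hashes.add(h)
            stored[i] = block            # new content: store it
    return stored
```

The restore side still reads like a full backup, which is the operational advantage over chained incrementals.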


Long story short, there are much more important things to do in Postgres 
than database-level incremental backups. BTW, Chuck Norris can take an 
incremental database-level backup of a Postgres database and recover the 
database without restoring the preceding full backup. I am not sure 
whether Chuck Norris is a member of the Postgres steering committee.


--
Mladen Gogala
Database Consultant
Tel: (347) 321-1217
https://dbwhisperer.wordpress.com


Re: Incremental backup

2021-10-28 Thread Ron

On 10/28/21 5:07 PM, Andreas Joseph Krogh wrote:
On Thursday, 28 October 2021 at 23:09:19, Ron <ronljohnso...@gmail.com> wrote:


[...]
It's still a bunch of transaction logs, whereas differential and
incremental
backups only backup the changed pages, no matter how many times
they've been
changed.

That's a serious reduction in disk space, and time to apply them.

I think everybody agrees that incremental backup /per database/, and not 
cluster-wide, is nice, and it would be nice if PG supported it. But given 
the way PG is architected, with cluster-wide WALs, that's not an easy 
task to implement.


Which is what I said a couple of hours ago.

--
Angular momentum makes the world go 'round.


Re: Incremental backup

2021-10-28 Thread Andreas Joseph Krogh

On Thursday, 28 October 2021 at 23:09:19, Ron <ronljohnso...@gmail.com> wrote: 
[...]
 It's still a bunch of transaction logs, whereas differential and incremental
 backups only backup the changed pages, no matter how many times they've been
 changed.

 That's a serious reduction in disk space, and time to apply them. 


I think everybody agrees that incremental backup per database, and not 
cluster-wide, is nice, and it would be nice if PG supported it. But given the 
way PG is architected, with cluster-wide WALs, that's not an easy task to 
implement. 
Repeating "other databases have it" doesn't change that. 



--
 Andreas Joseph Krogh 

Re: How to Resolve Data Being Truncated or Rounded Up During PostgreSQL Migration from v9.623 to v12.8?

2021-10-28 Thread Hilbert, Karin
Thanks for the explanation, Tom.  

From: Tom Lane 
Sent: Thursday, October 28, 2021 11:04 AM
To: Hilbert, Karin 
Cc: pgsql-general@lists.postgresql.org 
Subject: Re: How to Resolve Data Being Truncated or Rounded Up During 
PostgreSQL Migration from v9.623 to v12.8?

"Hilbert, Karin"  writes:
> [ PG12 displays float values a tad differently from 9.6 ]

This is not a bug; we just changed the behavior of the
"extra_float_digits" display option, so that it's less likely
to print garbage digits.  A float4 value only has about six
decimal digits of precision to begin with, and those extra
digits you are seeing in the 9.6 dump are basically fictional.

pg_dump does "set extra_float_digits to 3", which used to be
necessary to get reproducible results when dumping from old
servers.  That has this effect on 9.6:

regression=# select '53809.6'::float4;
 float4
---------
 53809.6
(1 row)

regression=# set extra_float_digits to 3;
SET
regression=# select '53809.6'::float4;
   float4
------------
 53809.6016
(1 row)

But it has no effect on new servers, because the six digits are
already enough to recreate the float4 value exactly.  The "016"
added by the old server is basically roundoff noise.

The reason for extra_float_digits is that with the old output
algorithm, there were corner cases where we had to print more than
six digits to ensure the value reloads exactly.  The new algorithm
automatically prints the minimum number of digits needed to ensure
exact reload.

All the same comments apply to float8, of course, with a
different number of digits.

regards, tom lane
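The effect Tom describes can be reproduced outside Postgres by round-tripping values through IEEE-754 single precision. A sketch, using Python's struct module to mimic a float4 column (the value matches Tom's example):

```python
import struct

def float4_roundtrip(x):
    """Round-trip a Python float through IEEE-754 single precision,
    mimicking storage in a float4 column."""
    return struct.unpack("f", struct.pack("f", x))[0]

stored = float4_roundtrip(53809.6)

# Six significant digits are enough to reload the stored value exactly,
# which is why the new output algorithm stops there...
assert f"{stored:.6g}" == "53809.6"
assert float4_roundtrip(float(f"{stored:.6g}")) == stored

# ...while printing extra digits only exposes roundoff noise, like the
# old extra_float_digits=3 output on 9.6.
assert f"{stored:.9g}" == "53809.6016"
```

The nearest single-precision value to 53809.6 is exactly 53809.6015625, so the trailing "016" is representation noise, not data.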


Re: Incremental backup

2021-10-28 Thread Ron

On 10/28/21 3:08 PM, Bruce Momjian wrote:

On Thu, Oct 28, 2021 at 02:23:39PM -0500, Ron wrote:

On 10/28/21 2:06 PM, Adrian Klaver wrote:

On 10/28/21 11:48, Ron wrote:

On 10/28/21 1:00 PM, Adrian Klaver wrote:

On 10/28/21 10:51, Ron wrote:


Not doable in Postgresql because WAL files are global to cluster.
I've read multiple times that will not be changed.

Yet somehow logical replication does it:

https://www.postgresql.org/docs/14/logical-replication-architecture.html

"
Logical replication is built with an architecture similar to physical
streaming replication (see Section 27.2.5). It is implemented by
“walsender” and “apply” processes. The walsender process starts logical
decoding (described in Chapter 49) of the WAL and loads the

Scans the (global) WAL data for only the portion from the relevant
database?

If so, definitely not the same as having per-database WAL files.

Just as importantly, replication is not, and never will be, a substitute for
backups.

Uh, for replication slots, we don't send the entire WAL stream to the
subscriber:


I meant scanning at the source.


   https://www.postgresql.org/docs/14/logical-replication.html

   Logical replication of a table typically starts with taking a snapshot
   of the data on the publisher database and copying that to the subscriber.
   Once that is done, the changes on the publisher are sent to the subscriber
   as they occur in real-time.  The subscriber applies the data in the same
   order as the publisher so that transactional consistency is guaranteed for
   publications within a single subscription.  This method of data replication
   is sometimes referred to as transactional replication.


It's still a bunch of transaction logs, whereas differential and incremental 
backups only backup the changed pages, no matter how many times they've been 
changed.


That's a serious reduction in disk space, and time to apply them.

--
Angular momentum makes the world go 'round.




Re: Incremental backup

2021-10-28 Thread Ron

On 10/28/21 3:43 PM, Adrian Klaver wrote:

On 10/28/21 12:23, Ron wrote:

On 10/28/21 2:06 PM, Adrian Klaver wrote:

On 10/28/21 11:48, Ron wrote:

"
Logical replication is built with an architecture similar to physical 
streaming replication (see Section 27.2.5). It is implemented by 
“walsender” and “apply” processes. The walsender process starts logical 
decoding (described in Chapter 49) of the WAL and loads the 


Scans the (global) WAL data for only the portion from the relevant 
database?


If so, definitely not the same as having per-database WAL files.

Just as importantly, replication is not, and never will be, a substitute 
for backups.


Who says you have to use the mechanism to replicate to another database, 
why not to a file?


But WAL files store every transaction, right?  Differential and incremental 
backups only send the modified pages, even if the page has been updated 1000 
times.




Not sure of the plausibility, still it might be interesting to find out?

Also isn't a backup just an interrupted form of replication?


Highly interrupted, and usually never written back to disk in "active" form.

--
Angular momentum makes the world go 'round.




Re: Model clause and

2021-10-28 Thread Michael Lewis
Mea culpa.

"PostgreSQL currently implements only stored generated columns."

I should re-read the references I share rather than trust my memory,
particularly while sick. Thanks for kindly correcting.


Re: Model clause and

2021-10-28 Thread Thomas Kellerer

Michael Lewis wrote on 28.10.2021 at 22:44:

On Thu, Oct 28, 2021 at 1:57 PM SQL Padawan <sql_pada...@protonmail.com> wrote:

I presume that VIRTUAL GENERATED columns are on the to-do list?


https://www.postgresql.org/docs/current/ddl-generated-columns.html

Is this not what you want?


Those are persisted/stored generated columns. They are only evaluated
when a row is inserted or updated.

Virtual generated columns are evaluated each time they are accessed, so
they could use volatile functions.





Re: Model clause and

2021-10-28 Thread Michael Lewis
On Thu, Oct 28, 2021 at 1:57 PM SQL Padawan 
wrote:

> I presume that VIRTUAL GENERATED columns are on the to-do list?
>

https://www.postgresql.org/docs/current/ddl-generated-columns.html

Is this not what you want?


Re: Incremental backup

2021-10-28 Thread Adrian Klaver

On 10/28/21 12:23, Ron wrote:

On 10/28/21 2:06 PM, Adrian Klaver wrote:

On 10/28/21 11:48, Ron wrote:

On 10/28/21 1:00 PM, Adrian Klaver wrote:

On 10/28/21 10:51, Ron wrote:






"
Logical replication is built with an architecture similar to physical 
streaming replication (see Section 27.2.5). It is implemented by 
“walsender” and “apply” processes. The walsender process starts 
logical decoding (described in Chapter 49) of the WAL and loads the 


Scans the (global) WAL data for only the portion from the relevant 
database?


If so, definitely not the same as having per-database WAL files.

Just as importantly, replication is not, and never will be, a substitute 
for backups.


Who says you have to use the mechanism to replicate to another database, 
why not to a file?


Not sure of the plausibility, still it might be interesting to find out?

Also isn't a backup just an interrupted form of replication?


--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Incremental backup

2021-10-28 Thread Bruce Momjian
On Thu, Oct 28, 2021 at 02:23:39PM -0500, Ron wrote:
> On 10/28/21 2:06 PM, Adrian Klaver wrote:
> > On 10/28/21 11:48, Ron wrote:
> > > On 10/28/21 1:00 PM, Adrian Klaver wrote:
> > > > On 10/28/21 10:51, Ron wrote:
> > > > 
> > 
> > > Not doable in Postgresql because WAL files are global to cluster.
> > > I've read multiple times that will not be changed.
> > 
> > Yet somehow logical replication does it:
> > 
> > https://www.postgresql.org/docs/14/logical-replication-architecture.html
> > 
> > "
> > Logical replication is built with an architecture similar to physical
> > streaming replication (see Section 27.2.5). It is implemented by
> > “walsender” and “apply” processes. The walsender process starts logical
> > decoding (described in Chapter 49) of the WAL and loads the
> 
> Scans the (global) WAL data for only the portion from the relevant
> database?
> 
> If so, definitely not the same as having per-database WAL files.
> 
> Just as importantly, replication is not, and never will be, a substitute for
> backups.

Uh, for replication slots, we don't send the entire WAL stream to the
subscriber:

  https://www.postgresql.org/docs/14/logical-replication.html

  Logical replication of a table typically starts with taking a snapshot
  of the data on the publisher database and copying that to the subscriber.
  Once that is done, the changes on the publisher are sent to the subscriber
  as they occur in real-time.  The subscriber applies the data in the same
  order as the publisher so that transactional consistency is guaranteed for
  publications within a single subscription.  This method of data replication
  is sometimes referred to as transactional replication.

-- 
  Bruce Momjian  https://momjian.us
  EDB  https://enterprisedb.com

  If only the physical world exists, free will is an illusion.





Re: Model clause and

2021-10-28 Thread SQL Padawan
That's Dimitri Fontaine - I don't know him personally and didn't mean to be 
impolite by referring to him by his first name - typo really!

SQLP

Sent with [ProtonMail](https://protonmail.com/) Secure Email.

‐‐‐ Original Message ‐‐‐
On Thursday, October 28th, 2021 at 20:57, SQL Padawan 
 wrote:

> Good evening everybody.
>
> Are there any plans to emulate Oracle MODEL clause in PostgreSQL?
>
> If not, why not? It doesn't appear to have gained much traction even in the 
> Oracle world?
>
> Also, are there any plans to introduce the MATCH_RECOGNIZE functionality?
>
> I read an interview with Dimitri and he said he'd really like to see this in 
> Postgres?
>
> I presume that VIRTUAL GENERATED columns are on the to-do list?
>
> Just curious.
>
> SQLP

Model clause and

2021-10-28 Thread SQL Padawan
Good evening everybody.

Are there any plans to emulate Oracle MODEL clause in PostgreSQL?

If not, why not? It doesn't appear to have gained much traction even in the 
Oracle world?

Also, are there any plans to introduce the MATCH_RECOGNIZE functionality?

I read an interview with Dimitri and he said he'd really like to see this in 
Postgres?

I presume that VIRTUAL GENERATED columns are on the to-do list?

Just curious.

SQLP

Re: Incremental backup

2021-10-28 Thread Ron

On 10/28/21 2:06 PM, Adrian Klaver wrote:

On 10/28/21 11:48, Ron wrote:

On 10/28/21 1:00 PM, Adrian Klaver wrote:

On 10/28/21 10:51, Ron wrote:



Not doable in Postgresql because WAL files are global to cluster. I've 
read multiple times that will not be changed.


Yet somehow logical replication does it:

https://www.postgresql.org/docs/14/logical-replication-architecture.html

"
Logical replication is built with an architecture similar to physical 
streaming replication (see Section 27.2.5). It is implemented by 
“walsender” and “apply” processes. The walsender process starts logical 
decoding (described in Chapter 49) of the WAL and loads the 


Scans the (global) WAL data for only the portion from the relevant 
database?


If so, definitely not the same as having per-database WAL files.

Just as importantly, replication is not, and never will be, a substitute for 
backups.


standard logical decoding plugin (pgoutput). The plugin transforms the 
changes read from WAL to the logical replication protocol (see Section 
53.5) and filters the data according to the publication specification. The 
data is then continuously transferred using the streaming replication 
protocol to the apply worker, which maps the data to local tables and 
applies the individual changes as they are received, in correct 
transactional order.

"

https://www.postgresql.org/docs/14/logical-replication.html

"The typical use-cases for logical replication are:

    Sending incremental changes in a single database or a subset of a 
database to subscribers as they occur.

 ...
"


--
Angular momentum makes the world go 'round.




Re: Incremental backup

2021-10-28 Thread Adrian Klaver

On 10/28/21 11:48, Ron wrote:

On 10/28/21 1:00 PM, Adrian Klaver wrote:

On 10/28/21 10:51, Ron wrote:



Not doable in Postgresql because WAL files are global to cluster. I've 
read multiple times that will not be changed.


Yet somehow logical replication does it:

https://www.postgresql.org/docs/14/logical-replication-architecture.html

"
Logical replication is built with an architecture similar to physical 
streaming replication (see Section 27.2.5). It is implemented by 
“walsender” and “apply” processes. The walsender process starts logical 
decoding (described in Chapter 49) of the WAL and loads the standard 
logical decoding plugin (pgoutput). The plugin transforms the changes 
read from WAL to the logical replication protocol (see Section 53.5) and 
filters the data according to the publication specification. The data is 
then continuously transferred using the streaming replication protocol 
to the apply worker, which maps the data to local tables and applies the 
individual changes as they are received, in correct transactional order.

"

https://www.postgresql.org/docs/14/logical-replication.html

"The typical use-cases for logical replication are:

Sending incremental changes in a single database or a subset of a 
database to subscribers as they occur.

 ...
"



PgBackRest does full and incremental backups, plus captures WAL files, 
but globally to a cluster.


--
Angular momentum makes the world go 'round.



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Incremental backup

2021-10-28 Thread Ron

On 10/28/21 1:00 PM, Adrian Klaver wrote:

On 10/28/21 10:51, Ron wrote:



Except we have no way of knowing what the situation is. I prefer not to 
assume a context.


You make it sound like incremental (and differential) backups are some 
complicated thing that needs context.  That's utter and complete rubbish 
in every Enterprise RDBMS except Postgresql.




This is Postgres so we do need context.

Also, you have not defined what you consider an incremental backup. I for one 
would put this:


https://www.postgresql.org/docs/14/continuous-archiving.html


I define Incremental and Differential backups the way everyone else does:

Incremental Backup: capture all *changed data* since the last incremental 
backup.  (Not the same as WAL archiving.)

Differential Backup: capture all *changed data* since the last full backup.
Transaction Backup: capture *all transactions* in a log file.



forward as a candidate.

Something like "BACKUP DATABASE (DIFFERENTIAL) foo TO foo.bak;" should be 
a fundamental feature of every RDBMS that claims to be enterprise class.


Knock yourself out:

https://wiki.postgresql.org/wiki/Developer_FAQ


Not doable in Postgresql because WAL files are global to cluster. I've read 
multiple times that will not be changed.


PgBackRest does full and incremental backups, plus captures WAL files, but 
globally to a cluster.


--
Angular momentum makes the world go 'round.


Re: Incremental backup

2021-10-28 Thread Adrian Klaver

On 10/28/21 10:51, Ron wrote:



Except we have no way of knowing what the situation is. I prefer not 
to assume a context.


You make it sound like incremental (and differential) backups are some 
complicated thing that needs context.  That's utter and complete rubbish 
in every Enterprise RDBMS except Postgresql.




This is Postgres so we do need context.

Also, you have not defined what you consider an incremental backup. I for 
one would put this:


https://www.postgresql.org/docs/14/continuous-archiving.html

forward as a candidate.

Something like "BACKUP DATABASE (DIFFERENTIAL) foo TO foo.bak;" should 
be a fundamental feature of every RDBMS that claims to be enterprise class.


Knock yourself out:

https://wiki.postgresql.org/wiki/Developer_FAQ


--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Incremental backup

2021-10-28 Thread Ron

On 10/28/21 11:40 AM, Adrian Klaver wrote:

On 10/28/21 09:25, Ron wrote:

On 10/28/21 10:36 AM, Adrian Klaver wrote:

On 10/28/21 05:23, Edu Gargiulo wrote:

Hi all,

Is there any way to make incremental backups in postgres-12?


It would be helpful to be more explicit about what you are trying to achieve.

Do you want:

1) A continuous process or scheduled one?

2) Local or remote backups?

3) The backup as a standby?


Standby is not backup!


But we don't know what the OP is doing so it is best to ask.

Honestly, these are bizarre questions which no one in the Enterprise 
world would ask in response to "Is there any way to make incremental 
backups in Oracle, DB2, SQL Server?" since the answer is a simple "yes, 
of course" (or maybe "yes, differential backups").


Except we have no way of knowing what the situation is. I prefer not to 
assume a context.


You make it sound like incremental (and differential) backups are some 
complicated thing that needs context.  That's utter and complete rubbish in 
every Enterprise RDBMS except Postgresql.


Something like "BACKUP DATABASE (DIFFERENTIAL) foo TO foo.bak;" should be a 
fundamental feature of every RDBMS that claims to be enterprise class.


--
Angular momentum makes the world go 'round.




Re: Incremental backup

2021-10-28 Thread Paul Jungwirth

On 10/28/21 5:23 AM, Edu Gargiulo wrote:

Is there any way to make incremental backups in postgres-12?


wal-e and wal-g are useful tools for this and easy to set up. They take 
periodic full backups and then also continuously archive WAL files. To 
restore they apply the last full backup and then replay the WAL since 
then. They have nice support for putting the files on S3 or similar 
places, but really they can drop the files anywhere.
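The restore procedure described above (last full backup plus WAL replay from that point) can be sketched abstractly, using plain numbers as stand-ins for backup and segment timestamps:

```python
def plan_restore(full_backups, wal_segments, target):
    """Sketch of point-in-time restore planning: take the newest full
    backup at or before the target time, then replay every WAL segment
    archived after that backup up to the target."""
    base = max(t for t in full_backups if t <= target)
    replay = [t for t in wal_segments if base < t <= target]
    return base, replay
```

Tools like wal-g automate exactly this selection; the sketch just makes the ordering explicit.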


Yours,

--
Paul  ~{:-)
p...@illuminatedcomputing.com




Re: Incremental backup

2021-10-28 Thread Adrian Klaver

On 10/28/21 09:25, Ron wrote:

On 10/28/21 10:36 AM, Adrian Klaver wrote:

On 10/28/21 05:23, Edu Gargiulo wrote:

Hi all,

Is there any way to make incremental backups in postgres-12?


It would be helpful to be more explicit about what you are trying to 
achieve.


Do you want:

1) A continuous process or scheduled one?

2) Local or remote backups?

3) The backup as a standby?


Standby is not backup!


But we don't know what the OP is doing so it is best to ask.

Honestly, these are bizarre questions which no one in the Enterprise 
world would ask in response to "Is there any way to make incremental 
backups in Oracle, DB2, SQL Server?" since the answer is a simple "yes, 
of course" (or maybe "yes, differential backups").


Except we have no way of knowing what the situation is. I prefer not to 
assume a context.







4) Any other features requests you might have.






--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Incremental backup

2021-10-28 Thread Ron

On 10/28/21 10:36 AM, Adrian Klaver wrote:

On 10/28/21 05:23, Edu Gargiulo wrote:

Hi all,

Is there any way to make incremental backups in postgres-12?


It would be helpful to be more explicit about what you are trying to achieve.

Do you want:

1) A continuous process or scheduled one?

2) Local or remote backups?

3) The backup as a standby?


Standby is not backup!


Honestly, these are bizarre questions which no one in the Enterprise world 
would ask in response to "Is there any way to make incremental backups in 
Oracle, DB2, SQL Server?" since the answer is a simple "yes, of course" (or 
maybe "yes, differential backups").





4) Any other features requests you might have.



--
Angular momentum makes the world go 'round.




Re: Incremental backup

2021-10-28 Thread Edu Gargiulo
On Thu, Oct 28, 2021 at 12:36 PM Adrian Klaver 
wrote:

> On 10/28/21 05:23, Edu Gargiulo wrote:
> > Hi all,
> >
> > Is there any way to make incremental backups in postgres-12?
>
> It would be helpful to be more explicit about what you are trying to achieve.
>
> Do you want:
>
> 1) A continuous process or scheduled one?
>

Scheduled

2) Local or remote backups?
>

Local. Right now I'm backing up locally and copying the backups to other servers.

 3) The backup as a standby?

Right now I have a standby cluster via physical replication. I could do the
backup from the standby server too.

4) Any other features requests you might have.
>

90% of the data in the database is "static data" (compressed TimescaleDB
chunks). I want to append only the 10% of recently inserted or updated
data daily to yesterday's backup, instead of doing a full backup daily.

Thanks in advance, and sorry for my English

--
Eduardo


Re: Incremental backup

2021-10-28 Thread Adrian Klaver

On 10/28/21 05:23, Edu Gargiulo wrote:

Hi all,

Is there any way to make incremental backups in postgres-12?


It would be helpful to be more explicit about what you are trying to achieve.

Do you want:

1) A continuous process or scheduled one?

2) Local or remote backups?

3) The backup as a standby?

4) Any other features requests you might have.



Kind regards,

--
Eduardo



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Incremental backup

2021-10-28 Thread Ivan E. Panchenko



On 28.10.2021 15:58, Ron wrote:

On 10/28/21 7:23 AM, Edu Gargiulo wrote:

Hi all,

Is there any way to make incremental backups in postgres-12?


Using pg_dump?  No.
Using pgBackRest?  Yes.


https://github.com/postgrespro/pg_probackup

--





Re: How to Resolve Data Being Truncated or Rounded Up During PostgreSQL Migration from v9.623 to v12.8?

2021-10-28 Thread Adrian Klaver

On 10/28/21 07:44, Hilbert, Karin wrote:
I manage PostgreSQL databases; we currently have clusters on 
PostgreSQL v9.6.23 and PostgreSQL v12.8.

Our database clusters are on Linux VMs, with OS:
   Flavor: redhat_7
   Release: 3.10.0-1160.45.1.el7.x86_64

We have repmgr clusters of 1 Primary & 2 Standby servers & use another 
server with PgBouncer to direct the connections to the current Primary.


How can I ensure that the data was migrated correctly - that the data 
hasn't been truncated or rounded up in the v12.8 tables?

Any help would be greatly appreciated.


In postgresql.conf what are the settings for?:

https://www.postgresql.org/docs/12/runtime-config-client.html

extra_float_digits

"
Note

The meaning of this parameter, and its default value, changed in 
PostgreSQL 12; see Section 8.1.3 for further discussion.

"

Section 8.1.3

https://www.postgresql.org/docs/12/datatype-numeric.html#DATATYPE-FLOAT




Thanks,

Karin Hilbert




--
Adrian Klaver
adrian.kla...@aklaver.com






Re: How to Resolve Data Being Truncated or Rounded Up During PostgreSQL Migration from v9.6.23 to v12.8?

2021-10-28 Thread Tom Lane
"Hilbert, Karin"  writes:
> [ PG12 displays float values a tad differently from 9.6 ]

This is not a bug; we just changed the behavior of the
"extra_float_digits" display option, so that it's less likely
to print garbage digits.  A float4 value only has about six
decimal digits of precision to begin with, and those extra
digits you are seeing in the 9.6 dump are basically fictional.

pg_dump does "set extra_float_digits to 3", which used to be
necessary to get reproducible results when dumping from old
servers.  That has this effect on 9.6:

regression=# select '53809.6'::float4;
 float4  
-
 53809.6
(1 row)

regression=# set extra_float_digits to 3;
SET
regression=# select '53809.6'::float4;
   float4   

 53809.6016
(1 row)

But it has no effect on new servers, because the six digits are
already enough to recreate the float4 value exactly.  The "016"
added by the old server is basically roundoff noise.
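A quick way to confirm that the shorter v12 rendering is lossless is to
compare the two text forms after casting them back to float4 (a sketch;
this behaves the same on 9.6 and 12):

```sql
-- Both renderings parse to the identical stored float4 value,
-- so the extra digits carry no additional information.
SELECT '53809.6'::float4 = '53809.6016'::float4;   -- returns t
```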

The reason for extra_float_digits is that with the old output
algorithm, there were corner cases where we had to print more than
six digits to ensure the value reloads exactly.  The new algorithm
automatically prints the minimum number of digits needed to ensure
exact reload.

All the same comments apply to float8, of course, with a
different number of digits.

regards, tom lane




Re: database designs ERDs

2021-10-28 Thread Thomas Kellerer
Zahid Rahman schrieb am 28.10.2021 um 12:58:
> I am looking for some database designs (ERD) on current and up to date
> business scenarios for a project.
>
> By up to date I am referring to the DVD rental business ERD,
> https://www.postgresqltutorial.com/wp-content/uploads/2018/03/printable-postgresql-sample-database-diagram.pdf
>
> Although comprehensive, it is a bit outdated.  I mean BlockBusters was
> useful in its time.

Something like this maybe?

http://databaseanswers.org/data_models/index_all_models.htm





How to Resolve Data Being Truncated or Rounded Up During PostgreSQL Migration from v9.6.23 to v12.8?

2021-10-28 Thread Hilbert, Karin
I manage PostgreSQL databases - we currently have clusters on PostgreSQL 
v9.6.23 & PostgreSQL v12.8.
Our database clusters are on Linux VMs, with OS:
  Flavor: redhat_7
  Release: 3.10.0-1160.45.1.el7.x86_64

We have repmgr clusters of 1 Primary & 2 Standby servers & use another server 
with PgBouncer to direct the connections to the current Primary.

I am in the process of migrating the v9.6.23 databases to the v12.8 cluster, 
which already has live databases on it, so I'm doing a pg_dump on the v9.6 
cluster for the individual databases to be migrated & restoring the backups to 
the v12.8 cluster.  I'm currently testing in a sandbox cluster.  The restore 
completes successfully.

After the restore, I compare the rowcounts of the dbs from both versions to 
verify that the data loaded correctly.
I also do a pg_dump of just the data from both clusters & compare them with the 
diff utility.  For one of the databases, I'm discovering some differences in 
the data.  It looks like some data is being truncated:

5,6c5,6
< -- Dumped from database version 9.6.23
< -- Dumped by pg_dump version 9.6.23
---
> -- Dumped from database version 12.8
> -- Dumped by pg_dump version 12.8
34085c34085
<   xxx P   108 xxx UP  FI  
-xx-xx  53809.6016  53809.6016  52W  0   xxx 0  
 x   \N
---
>   xxx P   108 xxx UP  FI  
> -xx-xx  53809.6 53809.6 52W 0   xxx 0
>x   \N
34088c34088
<   xxx P   108 xxx UP  FI  
-xx-xx  53809.6016  53809.6016  52W  0   xxx 0  
 x   \N
---
>   xxx P   108 xxx UP  FI  
> -xx-xx  53809.6 53809.6 52W 0   xxx 0
>x   \N
<-- data is truncated in new database
147825,147826c147825,147826
<   \N  \N  \N  46716.8008  \N  \N  \N  
\N  \N  \N  \N
<   \N  \N  \N  38729.6016  \N  \N  \N  
\N  \N  \N  \N
---
>   \N  \N  \N  46716.8  \N  \N  \N 
>  \N  \N  \N  \N
>   \N  \N  \N  38729.6  \N  \N  \N 
>  \N  \N  \N  \N


When I looked at the table specification, it is the same in both versions & the 
affected columns are specified as datatype real:

   Table "tablex"
  Column  | Type | Modifiers
--+--+---
 id   | integer  | not null
 column2  | character(8) | not null
 column3  | character(3) | not null
 column4  | character(1) |
 column5  | character(4) |
 column6  | character(10)|
 column7  | character(2) |
 column8  | date |
 column9  | real |
 column10 | real |


When I do a select on each database version, the results both display the 
truncated data:

 id | column9  | column10
+--+--
    |  53809.6 |  53809.6
(1 row)

And when I try to export the data from both versions, the data also exports 
with a 1-digit decimal for those columns.
It's only when I do the pg_dump that I can see the extra digits from the 
v9.6.23 tables.

In other tables, I'm seeing differences with only 2 digits showing for columns 
where the datatype is real - they are being rounded up.  For example:
19.817  \N  \N  3435\N  1   \N  
\N  \N  34350   0
   3435\N  \N  \N  0
…
25.878  \N  \N  4484.12988  80  \N  
\N  \N  \N  2069.6001
   0   0   2069.6001   \N  \N  \N  0
vs.
19.82   \N  \N  3435\N  1   \N  \N  
\N  34350   0   3435
\N  \N  \N  0
…
25.87   \N  \N  4484.13 80  \N  \N  \N  
\N  2069.6  0   0   2069.6  \N  \N  \N  0


How can I ensure that the data was migrated correctly - that the data hasn't 
been truncated or rounded up in the v12.8 tables?
Any help would be greatly appreciated.

Thanks,

Karin Hilbert


RE: WAL File Recovery on Standby Server Stops Before End of WAL Files

2021-10-28 Thread Ryan, Les
Hi Kyotaro and Dilip,

Thank you for getting back to me.

Kyotaro: I ran pg_waldump and the output was "pg_waldump: fatal: could not read 
file "00010419005A": read 50 of 8192".  I'm guessing that it means 
that WAL file 00010419005A is corrupted and that is why the 
recovery process stops there.  Is there any way to fix the file?

Dilip:   setting the log level to debug2 did not provide any additional 
information.  Here are the log entries:

2021-10-28 06:51:06.166 MDT [7556] LOG:  restored log file 
"000104190059" from archive
2021-10-28 06:51:06.464 MDT [7556] DEBUG:  got WAL segment from archive
2021-10-28 06:51:06.579 MDT [7556] LOG:  restored log file 
"00010419005A" from archive
2021-10-28 06:51:06.854 MDT [7556] DEBUG:  got WAL segment from archive
2021-10-28 06:51:07.107 MDT [7556] LOG:  consistent recovery state reached at 
419/5AB8
2021-10-28 06:51:07.107 MDT [7556] DEBUG:  switched WAL source from archive to 
stream after failure
2021-10-28 06:51:07.109 MDT [7844] LOG:  database system is ready to accept 
read only connections
2021-10-28 06:51:07.152 MDT [7844] DEBUG:  forked new backend, pid=6900 
socket=6068

I set the log level to debug5 and here is what I got:

2021-10-28 06:25:41.262 MDT [6288] CONTEXT:  WAL redo at 419/5ABFFF60 for 
Btree/INSERT_LEAF: off 130
2021-10-28 06:25:41.262 MDT [6288] DEBUG:  record known xact 33776257 
latestObservedXid 33776257
2021-10-28 06:25:41.262 MDT [6288] CONTEXT:  WAL redo at 419/5ABFFFA0 for 
Heap/INSERT: off 95 flags 0x00
2021-10-28 06:25:41.262 MDT [6288] LOG:  consistent recovery state reached at 
419/5AB8
2021-10-28 06:25:41.263 MDT [6288] DEBUG:  switched WAL source from archive to 
stream after failure
2021-10-28 06:25:41.264 MDT [5512] LOG:  database system is ready to accept 
read only connections

Does the "switched WAL source from archive to stream after failure" indicate a 
problem with the WAL file?

Anyway, it looks like I need to restore the standby server from a new backup.  
Thank you both for your help.
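Before rebuilding, it may be worth confirming exactly how far the standby got;
a sketch of monitoring queries (assuming PostgreSQL 10-12 function and column
names; pg_stat_wal_receiver.received_lsn was renamed in later versions):

```sql
-- Is the server still in recovery, and what has it received vs. replayed?
SELECT pg_is_in_recovery();
SELECT pg_last_wal_receive_lsn(), pg_last_wal_replay_lsn();

-- State of the streaming connection to the primary, if any
SELECT status, received_lsn FROM pg_stat_wal_receiver;
```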

Sincerely,
-Les

-Original Message-
From: Dilip Kumar 
Sent: October 27, 2021 10:29 PM
To: Kyotaro Horiguchi 
Cc: Ryan, Les ; pgsql-generallists.postgresql.org 

Subject: Re: WAL File Recovery on Standby Server Stops Before End of WAL Files

On Thu, Oct 28, 2021 at 7:28 AM Kyotaro Horiguchi  
wrote:
>
> At Wed, 27 Oct 2021 16:42:52 +, "Ryan, Les" 
> wrote in
> > 2021-10-27 10:26:31.467 MDT [2012] LOG:  redo starts at 419/5229A858
> ...
> > 2021-10-27 10:26:36.188 MDT [2012] LOG:  restored log file
> > "00010419005A" from archive
> > 2021-10-27 10:26:36.750 MDT [2012] LOG:  consistent recovery state
> > reached at 419/5AB8
> > 2021-10-27 10:26:36.752 MDT [6204] LOG:  database system is ready to
> > accept read only connections
> > 2021-10-27 10:26:36.823 MDT [6040] LOG:  started streaming WAL from
> > primary at 419/5A00 on timeline 1
> >
> >   *   There are many more WAL files available starting with 
> > 00010419005B but the restore process always stops at 
> > 00010419005A.
> >
> > I have two questions:
> >
> >   *   Why does the WAL file recovery process now stop after it reads 
> > 00010419005A?
> >   *   What do I need to do to get PostgreSQL to recover the rest of the 
> > available WAL files.
>
> The info above alone donesn't clearly suggest anything about the
> reason. Could you show the result of "pg_waldump
> 00010419005A 2>&1 | tail -5"?  What I'm expecting to see
> is an error message from pg_waldump before the end of the file. It
> would be the immediate cause of the problem.

+1, that will be the best place to start. Additionally, you can
enable DEBUG2 messages so that from the logs we can identify why it could not 
continue recovery from the archive.

--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com





Re: Incremental backup

2021-10-28 Thread Ron

On 10/28/21 7:23 AM, Edu Gargiulo wrote:

Hi all,

Is there any way to make incremental backups in postgres-12?


Using pg_dump?  No.
Using pgBackRest?  Yes.

--
Angular momentum makes the world go 'round.




Incremental backup

2021-10-28 Thread Edu Gargiulo
Hi all,

Is there any way to make incremental backups in postgres-12?

Kind regards,

--
Eduardo


database designs ERDs

2021-10-28 Thread Zahid Rahman
Hi,

I am looking for some database designs (ERD) on current and up to date
business scenarios for a project.

By up to date I am referring to the DVD rental business ERD,
https://www.postgresqltutorial.com/wp-content/uploads/2018/03/printable-postgresql-sample-database-diagram.pdf

Although comprehensive, it is a bit outdated.  I mean BlockBusters was
useful in its time.


Thanks
Zahid

https://www.backbutton.org
¯\_(ツ)_/¯
♡۶♡ ۶♡۶




Re: jsonb: unwrapping text

2021-10-28 Thread tomas
On Wed, Oct 27, 2021 at 04:18:20PM -0700, David G. Johnston wrote:
> On Wed, Oct 27, 2021 at 11:58 AM  wrote:
> 
> >
> > I've found out that one can treat a string singleton as if it
> > were an array:
> >
> >   foo=# select '"foo"'::jsonb ->> 0;
> >?column?
> >   --
> >foo
> >   (1 row)
> >
> > which conveniently returns the right type. My question: can I rely
> > on that, or am I missing a much more obvious option?
> >
> >
> Not sure if this exact behavior is trustworthy - but you are on the right
> path. Place the value into either a json array or json object and then use
> the text versions of the accessor methods to get the json value to pass
> through the decoding routine.

Thanks a bunch :)

I know that, behind the scenes, jsonb scalars (didn't check that
for json) are actually represented as one-element arrays, but was unsure
how much this can be relied on as "official interface" :-)

This leaves us with

  foo=# select jsonb_build_array('"foo"'::jsonb)->>0;
   ?column? 
  --
   foo
  (1 row)

...which feels somewhat roundabout, but hey, it actually works.

What also seems to work is #>> with an empty path specifier, i.e.

  select '"foo"'::jsonb #>> '{}';

...but all of them feel somewhat hacky. I'll post a request with the
form linked in [1], let's see :-)
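For reference, the three approaches discussed in this thread side by side
(a sketch; all return `foo` as text on the versions discussed here, but
only `#>>` with an empty path is the explicitly documented route):

```sql
SELECT '"foo"'::jsonb ->> 0;                        -- scalar treated as one-element array
SELECT '"foo"'::jsonb #>> '{}';                     -- empty path yields the value itself
SELECT jsonb_build_array('"foo"'::jsonb) ->> 0;     -- wrap explicitly, then extract
```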

Thanks again for your assessment, cheers
 - t


signature.asc
Description: Digital signature