Re: pgBackRest on old installation

2023-11-20 Thread KK CHN
Thank you. It worked out well. But a basic doubt: is storing the DB
superuser password in .pgpass advisable? What other options do we have?
#su postgres
bash-4.2$ cd

bash-4.2$ cat .pgpass
*:*:*:postgres:your_password
bash-4.2$
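For what it's worth, two standard mitigations: keep .pgpass readable only by its owner (libpq silently ignores the file if group or other can read it), and for local socket connections consider peer authentication in pg_hba.conf, or a less-privileged role via pgBackRest's pg1-user option, so no password is stored at all. A minimal sketch of the permissions part, using a throwaway directory as HOME so it is safe to run anywhere; the host/port/password line is the placeholder from this thread, not a real credential:

```shell
# Sketch: create a .pgpass with the permissions libpq requires.
demo_home=$(mktemp -d)
cat > "$demo_home/.pgpass" <<'EOF'
/tmp:5444:*:postgres:your_whatever_pgsql_password
EOF
# libpq ignores .pgpass unless it is readable only by the owner
chmod 0600 "$demo_home/.pgpass"
ls -l "$demo_home/.pgpass"
```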


On Mon, Nov 20, 2023 at 4:16 PM Achilleas Mantzios - cloud <
a.mantz...@cloud.gatewaynet.com> wrote:

>
> On 11/20/23 12:31, KK CHN wrote:
>
> list,
>
> I am trying pgBackRest on an RHEL 7.6 and old EDB 10 database cluster (a
> legacy application).
>
> I have installed pgbackrest through a package install on RHEL 7.6,
> but I am unable to get the basic stanza creation working. It throws an error.
>
>
> * /etc/pgbackrest.conf  as follows..*
> 
> [demo]
> pg1-path=/app/edb/as10/data
> pg1-port = 5444
> pg1-socket-path=/tmp
>
> [global]
>
> repo1-cipher-pass=sUAeceWoDffSz9Q/d8sWREHe+wte3uOO9lggn5/5mTkQEempvBxQk5UbxsrDzHbw
>
> repo1-cipher-type=aes-256-cbc
> repo1-path=/var/lib/pgbackrest
> repo1-retention-full=2
> backup-user=postgres
>
>
> [global:archive-push]
> compress-level=3
> #
>
>
>
> [root@dbs ~]# pgbackrest version
> pgBackRest 2.48
> [root@dbs ~]#
> #
>
> *Postgres conf as follows... *
>
> listen_addresses = '*'
> port = 5444
> unix_socket_directories = '/tmp'
>
> archive_command = 'pgbackrest --stanza=demo archive-push %p'
> archive_mode = on
> log_filename = 'postgresql.log'
> max_wal_senders = 3
> wal_level = replica
>
> #
>
>
> *The error I am getting is as follows. What went wrong here?*
>
>
>  [root@dbs ~]# sudo -u postgres pgbackrest --stanza=demo
> --log-level-console=info stanza-create
> 2023-11-20 21:04:05.223 P00   INFO: stanza-create command begin 2.48:
> --exec-id=29527-bf5e2f80 --log-level-console=info
> --pg1-path=/app/edb/as10/data --pg1-port=5444 --pg1-socket-path=/tmp
> --repo1-cipher-pass= --repo1-cipher-type=aes-256-cbc
> --repo1-path=/var/lib/pgbackrest --stanza=demo
> WARN: unable to check pg1: [DbConnectError] unable to connect to
> 'dbname='postgres' port=5444 host='/tmp'': connection to server on socket
> "/tmp/.s.PGSQL.5444" failed: fe_sendauth: no password supplied
> ERROR: [056]: unable to find primary cluster - cannot proceed
>HINT: are all available clusters in recovery?
> 2023-11-20 21:04:05.224 P00   INFO: stanza-create command end: aborted
> with exception [056]
> [root@dbs ~]#
>
> It complains about the password.  I followed the tutorial link below, but
> it makes no mention of a password (where to supply the password, and in
> what parameter?): https://pgbackrest.org/user-guide-rhel.html
>
> This is about the user connecting to the db; in general, pgbackrest has to
> connect like any other app/user. So change your .pgpass to contain
> something like the line below at the top of the file:
>
> /tmp:5444:*:postgres:your_whatever_pgsql_password
>
> and retry
>
>
>
> Any hints welcome..  What am I missing here ??
>
> Best,
> Krishane


Re: Can user specification of a column value be required when querying a view ?

2023-11-20 Thread Ron Johnson
Or row level security.
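A minimal sketch of that idea (table, role, and column names here are hypothetical): row level security restricts which rows a role can read at all, though note it cannot force the caller to supply a predicate.

```sql
-- Restrict app_user to rows it owns; applies no matter how the
-- table (or a view over it) is queried.
ALTER TABLE people ENABLE ROW LEVEL SECURITY;

CREATE POLICY own_rows ON people
    FOR SELECT
    TO app_user
    USING (owner = current_user);
```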

On Mon, Nov 20, 2023 at 9:25 PM Alan Hodgson 
wrote:

> On Mon, 2023-11-20 at 13:44 -0800, Christophe Pettus wrote:
>
>
>
> On Nov 20, 2023, at 13:41, David Gauthier  wrote:
> I want the users to be required to provide a value for ssn in the
> following query...
> "select * from huge_view where ssn = '106-91-9930' "
> I never want them to query the view without specifying ssn.
> It has to do with resources and practicality.
>
> Is there a way to do that ?
>
>
> Not in a way that PostgreSQL itself will enforce.  If you are concerned
> about a query running wild and taking up resources, setting
> statement_timeout for the user that will be running these queries is the
> best way forward.  A user that has general access to PostgreSQL and can run
> arbitrary queries will be able to craft a query that takes up a lot of
> system time and memory without too much trouble.
>
>
> If it's really about SSN's it might be more about bulk access to PII than
> performance.
>
> A function is probably the right choice in either case.
>


Re: Can user specification of a column value be required when querying a view ?

2023-11-20 Thread Alan Hodgson
On Mon, 2023-11-20 at 13:44 -0800, Christophe Pettus wrote:
> 
> 
> > On Nov 20, 2023, at 13:41, David Gauthier 
> > wrote:
> > I want the users to be required to provide a value for ssn in the
> > following query... 
> > "select * from huge_view where ssn = '106-91-9930' "
> > I never want them to query the view without specifying ssn.
> > It has to do with resources and practicality.
> > 
> > Is there a way to do that ?
> 
> Not in a way that PostgreSQL itself will enforce.  If you are
> concerned about a query running wild and taking up resources,
> setting statement_timeout for the user that will be running these
> queries is the best way forward.  A user that has general access to
> PostgreSQL and can run arbitrary queries will be able to craft a
> query that takes up a lot of system time and memory without too
> much trouble.

If it's really about SSN's it might be more about bulk access to PII
than performance.

A function is probably the right choice in either case.


Re: Prepared statements versus stored procedures

2023-11-20 Thread Merlin Moncure
On Mon, Nov 20, 2023 at 4:07 AM Laurenz Albe 
wrote:

> On Sun, 2023-11-19 at 17:30 +, Simon Connah wrote:
> > I was reading about prepared statements and how they allow the server to
> > plan the query in advance so that if you execute that query multiple
> times
> > it gets sped up as the database has already done the planning work.
> >
> > My question is this. If I make a stored procedure doesn't the database
> > already pre-plan and optimise the query because it has access to the
> whole
> > query? Or could I create a stored procedure and then turn it into a
> prepared
> > statement for more speed? I was also thinking a stored procedure would
> help
> > as it requires less network round trips as the query is already on the
> server.
>
> Statements in functions and procedures don't get planned until the function
> or procedure is called for the first time.  These plans don't get cached
> unless
> the procedural language you are using has special support for that.
>
> Currently, only functions and procedures written in PL/pgSQL cache
> execution
> plans of static SQL statements.  And you are right, that is usually a good
> thing.
>

Adding to this,
Stored procedures and functions can provide really dramatic speedups by
eliminating round trips between statements where with traditional
programming approaches you have to bring all the data back to the client
side just to transform, run logic, etc, only to send it back to the
database.  How much benefit this provides is really situation specific, but
can be impactful in many common situations.

Also, they provide the benefit of hiding schema details and providing a
"database API" in situations where you want the application contract to the
database to be written against the query output (perhaps in json) vs the
schema.  This pattern is controversial in some circles but I employ it
often and it runs well.  It can also be comforting not to rely on client
side code to properly frame up the transaction.
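A sketch of that "database API" pattern (all names here are hypothetical, not from the thread): the client calls a function and consumes json, so the underlying tables can be refactored without breaking the contract.

```sql
-- The application contract is the json shape, not the schema.
CREATE FUNCTION api_get_order(p_order_id bigint)
RETURNS json
LANGUAGE sql STABLE
AS $$
    SELECT json_build_object(
        'order_id',  o.id,
        'placed_at', o.placed_at,
        'lines', (SELECT json_agg(json_build_object('sku', l.sku,
                                                    'qty', l.qty))
                  FROM order_lines l
                  WHERE l.order_id = o.id)
    )
    FROM orders o
    WHERE o.id = p_order_id;
$$;
```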

This is only touching the surface -- there are many, many advantages to
server side programming and it is a tremendously valuable skill to learn
and master.  The main downside, of course, is that postgres server
programming can only be used in postgres without modification.

merlin


Re: Can user specification of a column value be required when querying a view ?

2023-11-20 Thread David Gauthier
OK, didn't think so, just checking.  Thanks for verifying!

On Mon, Nov 20, 2023 at 4:45 PM Christophe Pettus  wrote:

>
>
> > On Nov 20, 2023, at 13:41, David Gauthier  wrote:
> > I want the users to be required to provide a value for ssn in the
> following query...
> > "select * from huge_view where ssn = '106-91-9930' "
> > I never want them to query the view without specifying ssn.
> > It has to do with resources and practicality.
> >
> > Is there a way to do that ?
>
> Not in a way that PostgreSQL itself will enforce.  If you are concerned
> about a query running wild and taking up resources, setting
> statement_timeout for the user that will be running these queries is the
> best way forward.  A user that has general access to PostgreSQL and can run
> arbitrary queries will be able to craft a query that takes up a lot of
> system time and memory without too much trouble.


Re: Can user specification of a column value be required when querying a view ?

2023-11-20 Thread Christophe Pettus



> On Nov 20, 2023, at 13:41, David Gauthier  wrote:
> I want the users to be required to provide a value for ssn in the following 
> query... 
> "select * from huge_view where ssn = '106-91-9930' "
> I never want them to query the view without specifying ssn.
> It has to do with resources and practicality.
> 
> Is there a way to do that ?

Not in a way that PostgreSQL itself will enforce.  If you are concerned about a 
query running wild and taking up resources, setting statement_timeout for the 
user that will be running these queries is the best way forward.  A user that 
has general access to PostgreSQL and can run arbitrary queries will be able to 
craft a query that takes up a lot of system time and memory without too much 
trouble.
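A sketch of the statement_timeout suggestion (the role name is hypothetical); the limit applies to every statement that role runs, not just queries against the view:

```sql
-- Cap query runtime for the reporting role; takes effect on new sessions.
ALTER ROLE reporting_user SET statement_timeout = '30s';

-- To inspect the setting:
--   SELECT rolname, rolconfig FROM pg_roles WHERE rolname = 'reporting_user';
```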



Re: Can user specification of a column value be required when querying a view ?

2023-11-20 Thread Steve Baldwin
Maybe you could create a function that has a required parameter, so rather
than 'select * from huge_view where .." they do 'select * from
function(some_ssn) [where...]' ?

That function would then query the view using the supplied ssn.

Just a thought.
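A sketch of that function (the function name is an assumption; view and column names are from the thread): the required parameter makes it impossible to scan the view without supplying an ssn, and SELECT on the view itself can then be revoked from end users.

```sql
CREATE FUNCTION huge_view_by_ssn(p_ssn text)
RETURNS SETOF huge_view
LANGUAGE sql STABLE
AS $$
    SELECT * FROM huge_view WHERE ssn = p_ssn;
$$;

-- Callers then write:
--   SELECT * FROM huge_view_by_ssn('106-91-9930');
```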

Steve

On Tue, Nov 21, 2023 at 8:41 AM David Gauthier 
wrote:

> Hi:
>
> I have a view that I want  to require user specification for a specific
> column before the query starts (if that makes sense).
>
> Example
> I want the users to be required to provide a value for ssn in the
> following query...
> "select * from huge_view *where ssn = '106-91-9930'* "
> I never want them to query the view without specifying ssn.
> It has to do with resources and practicality.
>
> Is there a way to do that ?
> Thanks
>
>


Can user specification of a column value be required when querying a view ?

2023-11-20 Thread David Gauthier
Hi:

I have a view that I want to require user specification for a specific
column before the query starts (if that makes sense).

Example
I want the users to be required to provide a value for ssn in the following
query...
"select * from huge_view *where ssn = '106-91-9930'* "
I never want them to query the view without specifying ssn.
It has to do with resources and practicality.

Is there a way to do that ?
Thanks


Re: New addition to the merge sql standard

2023-11-20 Thread Nick DeCoursin
> "ignore" may not be what you want, though.  Perhaps the fact that insert
> (coming from the NOT MATCHED clause) fails (== conflicts with a tuple
> concurrently inserted in an unique or exclusion constraint) should
> transform the row operation into a MATCHED case, so it'd fire the other
> clauses in the overall MERGE operation.  Then you could add a WHEN
> MATCHED DO NOTHING case which does the ignoring that you want; or just
> let them be handled by WHEN MATCHED UPDATE or whatever.  But you may
> need some way to distinguish rows that appeared concurrently from rows
> that were there all along.

Your suggestion (or idea) would also work for me, but it seems to me
considerably more complex both to implement and to use.

In a similar sense, I think perhaps it could be good to add the complete
`insert ... on conflict` syntax and functionality onto the insert part of
merge (including handling unique violations with an update), but I haven't
completely thought it through. This is similar to what you seemed to be
brainstorming, but rather than going back into the match statements, it
would look like this perhaps (with the new additions marked with asterisks):

merge into some_table st
using different_table as dt
on (st.id = dt.st_id)
when matched do nothing
when not matched then
  insert (cola, colb, colc)
  values ("uniquekey", "gal", "asfa")
  *on conflict (cola) do update set*
  *colb = "gal",*
  *colc = "asfa"*
;

As a result, `merge` would essentially have all the functionality that
`insert ... on conflict` has, so that perhaps (perhaps!) `merge` >=
'insert' in every case except for simply inserting raw data without a
select.

I personally think adding the `on conflict ...` syntax onto merge insert
like ^ this would be better than trying to go back into the merge's match
statements.

However, even if it were desired to add the complete `on conflict do update
...` functionality, I would think it would be better to take one step at a
time by just first adding the ignore functionality (of ignoring rows that
cause unique violation exceptions) which could be called `on conflict do
nothing`, leaving open the possibility for future expansion with the `on
conflict do update set ...`.
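For comparison, the "do nothing on concurrent duplicates" part is already expressible today with plain INSERT, outside of MERGE (table and column names taken from the example above, otherwise hypothetical):

```sql
-- What exists today: ON CONFLICT handles the concurrently inserted
-- duplicate that MERGE's NOT MATCHED insert currently errors on.
-- Requires a unique constraint or index on cola.
INSERT INTO some_table (cola, colb, colc)
SELECT dt.cola, dt.colb, dt.colc
FROM different_table dt
ON CONFLICT (cola) DO NOTHING;
-- or: ON CONFLICT (cola) DO UPDATE SET colb = EXCLUDED.colb
```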

> In regards to the SQL standard, I hope what you're saying is merely not
> documented by them.  If it indeed isn't, it may be possible to get them
> to accept some new behavior, and then I'm sure we'd consider
> implementing it.  If your suggestion goes against what they already
> have, I'm afraid you'd be doomed.  So the next question is, how do other
> implementations handle this case you're talking about?  SQL Server, DB2
> and Oracle being the relevant ones.

I would suppose it's merely not documented but yet that there's nothing
contradictory to it either. Unfortunately, I don't know how other databases
do it, but I would think probably none of them do it yet.

> Assuming the idea is good and there are no conflicts, then maybe it's
> just lack of round tuits.

I think it's just an advanced feature that's hard to grasp, understand, and
develop. But as our database systems become more and more advanced, features
like this become more and more feasible.

You know, Postgres is sort of reaching that point, I believe, where it's
looking around, like, OK, what next? We've done everything. While other
databases are still struggling with structural problems, Postgres is
starting to "push the envelope".

> I vaguely recall thinking about this, and noticing that implementing
> something of this sort would require messing around with the ExecInsert
> interface.  It'd probably require splitting it in pieces, similar to how
> ExecUpdate was split.
>
> There are some comments in the code about possible "live-locks" where
> merge would be eternally confused between inserting a new row which it
> then wants to delete; or something like that.  For sure we would need to
> understand the concurrent behavior of this new feature very clearly.
>
> An interesting point is that our inserts *wait* to see whether the
> concurrent insertion commits or aborts, when a unique constraint is
> involved.  I'm not sure you want to have MERGE blocking on concurrent
> inserts.  This is all assuming READ COMMITTED semantics; on REPEATABLE
> READ or higher, I think you're just screwed, because of course MERGE is
> not going to get a snapshot that sees the rows inserted by transactions
> that started after.
>
> You'd need to explore all this very carefully.

Yeah. I would have liked to volunteer to do it myself, but based on your
description it seems way over my head. C is definitely not my forte, and I
don't have any Postgres development experience (though we did go into the
Postgres source for a database project in college, haha).

However, it might be pertinent for me to reference this under the READ
COMMITTED semantics:

MERGE allows the user to specify various combinations of INSERT, UPDATE and
> DELETE 

Re: pg_basebackup

2023-11-20 Thread Christoph Moench-Tegeder
## Matthias Apitz (g...@unixarea.de):

> 2023-11-16 20:34:13.538 CET [6250] LOG:  terminating walsender process due to 
> replication timeout

Besides "what Laurenz said" (especially about the horribly outdated
PostgreSQL version): check IO speed and saturation during the backup
and make sure you're not stalling. I've seen this behaviour a few
times, mostly in conjunction with btrfs; using a suitably proven
filesystem usually solved the problem (overloaded hardware can
be a problem, too, but modern systems can take quite a bit more
than in the olden days of spinning rust).
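One way to see whether the walsender is stalling while the backup runs, a sketch using the standard statistics view: repeated IO-class wait events here would point at the storage problem described above.

```sql
-- Run periodically during pg_basebackup (PostgreSQL 10+).
SELECT pid, state, wait_event_type, wait_event
FROM pg_stat_activity
WHERE backend_type = 'walsender';
```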

Regards,
Christoph

-- 
Spare Space.




Re: pgBackRest on old installation

2023-11-20 Thread Achilleas Mantzios - cloud


On 11/20/23 12:31, KK CHN wrote:

list,

I am trying pgBackRest on an RHEL 7.6 and old EDB 10 database cluster (a
legacy application).

I have installed pgbackrest through a package install on RHEL 7.6,
but I am unable to get the basic stanza creation working. It throws an error.


* /etc/pgbackrest.conf  as follows..*

[demo]
pg1-path=/app/edb/as10/data
pg1-port = 5444
pg1-socket-path=/tmp

[global]
repo1-cipher-pass=sUAeceWoDffSz9Q/d8sWREHe+wte3uOO9lggn5/5mTkQEempvBxQk5UbxsrDzHbw

repo1-cipher-type=aes-256-cbc
repo1-path=/var/lib/pgbackrest
repo1-retention-full=2
backup-user=postgres


[global:archive-push]
compress-level=3
#



[root@dbs ~]# pgbackrest version
pgBackRest 2.48
[root@dbs ~]#
#

*Postgres conf as follows... *

listen_addresses = '*'
port = 5444
unix_socket_directories = '/tmp'

archive_command = 'pgbackrest --stanza=demo archive-push %p'
archive_mode = on
log_filename = 'postgresql.log'
max_wal_senders = 3
wal_level = replica

#


*The error I am getting is as follows. What went wrong here?*


 [root@dbs ~]# sudo -u postgres pgbackrest --stanza=demo 
--log-level-console=info stanza-create
2023-11-20 21:04:05.223 P00   INFO: stanza-create command begin 2.48: 
--exec-id=29527-bf5e2f80 --log-level-console=info 
--pg1-path=/app/edb/as10/data --pg1-port=5444 --pg1-socket-path=/tmp 
--repo1-cipher-pass= --repo1-cipher-type=aes-256-cbc 
--repo1-path=/var/lib/pgbackrest --stanza=demo
WARN: unable to check pg1: [DbConnectError] unable to connect to 
'dbname='postgres' port=5444 host='/tmp'': connection to server on 
socket "/tmp/.s.PGSQL.5444" failed: fe_sendauth: no password supplied

ERROR: [056]: unable to find primary cluster - cannot proceed
       HINT: are all available clusters in recovery?
2023-11-20 21:04:05.224 P00   INFO: stanza-create command end: aborted 
with exception [056]

[root@dbs ~]#

It complains about the password.  I followed the tutorial link below, but
it makes no mention of a password (where to supply the password, and in
what parameter?): https://pgbackrest.org/user-guide-rhel.html


This is about the user connecting to the db; in general, pgbackrest has
to connect like any other app/user. So change your .pgpass to contain
something like the line below at the top of the file:


/tmp:5444:*:postgres:your_whatever_pgsql_password

and retry




Any hints welcome..  What am I missing here ??

Best,
Krishane








pgBackRest on old installation

2023-11-20 Thread KK CHN
list,

I am trying pgBackRest on an RHEL 7.6 and old EDB 10 database cluster (a
legacy application).

I have installed pgbackrest through a package install on RHEL 7.6,
but I am unable to get the basic stanza creation working. It throws an error.


* /etc/pgbackrest.conf  as follows..*

[demo]
pg1-path=/app/edb/as10/data
pg1-port = 5444
pg1-socket-path=/tmp

[global]
repo1-cipher-pass=sUAeceWoDffSz9Q/d8sWREHe+wte3uOO9lggn5/5mTkQEempvBxQk5UbxsrDzHbw

repo1-cipher-type=aes-256-cbc
repo1-path=/var/lib/pgbackrest
repo1-retention-full=2
backup-user=postgres


[global:archive-push]
compress-level=3
#



[root@dbs ~]# pgbackrest version
pgBackRest 2.48
[root@dbs ~]#
#

*Postgres conf as follows... *

listen_addresses = '*'
port = 5444
unix_socket_directories = '/tmp'

archive_command = 'pgbackrest --stanza=demo archive-push %p'
archive_mode = on
log_filename = 'postgresql.log'
max_wal_senders = 3
wal_level = replica

#


*The error I am getting is as follows. What went wrong here?*


 [root@dbs ~]# sudo -u postgres pgbackrest --stanza=demo
--log-level-console=info stanza-create
2023-11-20 21:04:05.223 P00   INFO: stanza-create command begin 2.48:
--exec-id=29527-bf5e2f80 --log-level-console=info
--pg1-path=/app/edb/as10/data --pg1-port=5444 --pg1-socket-path=/tmp
--repo1-cipher-pass= --repo1-cipher-type=aes-256-cbc
--repo1-path=/var/lib/pgbackrest --stanza=demo
WARN: unable to check pg1: [DbConnectError] unable to connect to
'dbname='postgres' port=5444 host='/tmp'': connection to server on socket
"/tmp/.s.PGSQL.5444" failed: fe_sendauth: no password supplied
ERROR: [056]: unable to find primary cluster - cannot proceed
   HINT: are all available clusters in recovery?
2023-11-20 21:04:05.224 P00   INFO: stanza-create command end: aborted with
exception [056]
[root@dbs ~]#

It complains about the password.  I followed the tutorial link below, but
it makes no mention of a password (where to supply the password, and in
what parameter?): https://pgbackrest.org/user-guide-rhel.html


Any hints welcome..  What am I missing here ??

Best,
Krishane


Re: Prepared statements versus stored procedures

2023-11-20 Thread Laurenz Albe
On Sun, 2023-11-19 at 17:30 +, Simon Connah wrote:
> I was reading about prepared statements and how they allow the server to
> plan the query in advance so that if you execute that query multiple times
> it gets sped up as the database has already done the planning work.
> 
> My question is this. If I make a stored procedure doesn't the database
> already pre-plan and optimise the query because it has access to the whole
> query? Or could I create a stored procedure and then turn it into a prepared
> statement for more speed? I was also thinking a stored procedure would help
> as it requires less network round trips as the query is already on the server.

Statements in functions and procedures don't get planned until the function
or procedure is called for the first time.  These plans don't get cached unless
the procedural language you are using has special support for that.

Currently, only functions and procedures written in PL/pgSQL cache execution
plans of static SQL statements.  And you are right, that is usually a good 
thing.
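The distinction can be seen side by side in a sketch (table and function names are hypothetical): a protocol-level prepared statement is planned per session and reused explicitly, while the static SQL inside a PL/pgSQL function gets its plan cached automatically after the first call, as described above.

```sql
-- A prepared statement: planned for this session, reused via EXECUTE.
PREPARE get_user(int) AS
    SELECT * FROM users WHERE id = $1;
EXECUTE get_user(42);

-- A PL/pgSQL function: the static SELECT is planned on first call and
-- the plan is cached for the rest of the session.
CREATE FUNCTION get_user_fn(p_id int)
RETURNS SETOF users
LANGUAGE plpgsql STABLE
AS $$
BEGIN
    RETURN QUERY SELECT * FROM users WHERE id = p_id;
END;
$$;
```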

Yours,
Laurenz Albe




Re: pg_basebackup

2023-11-20 Thread Laurenz Albe
On Mon, 2023-11-20 at 07:30 +0100, Matthias Apitz wrote:
> We're facing in a customer installation (PostgreSQL 13.1 on Linux) the
> following problem, for the first time and not reproducibly:

13.1?  Your immediate reaction should be "update to the latest minor release".

> ${BINDIR}/pg_basebackup -U ${DBSUSER} -Ft -z -D ${BACKUPDIR}-${DATE}-${NUM}
> 
> The resulting stdout/stderr of the script:
> 
> 16.11.2023-20:20:02: pg_basebackup the cluster to 
> /Backup/postgres/sisis-20231116-1 ... 
> pg_basebackup: could not receive data from WAL stream: server closed the 
> connection unexpectedly
> This probably means the server terminated abnormally
> before or while processing the request.
> pg_basebackup: child process exited with error 1
> 
> pg-error.log:
> 
> 2023-11-16 20:34:13.538 CET [6250] LOG:  terminating walsender process due to 
> replication timeout
> 
> Why does the PostgreSQL server say something about "replication" when we
> are doing pg_basebackup?

Because "pg_basebackup" uses a replication connection.

> Some more information:
> 
> - wal_sender_timeout has default value (60s)

Increase "wal_sender_timeout", perhaps to 0 (which means "infinite").
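A sketch of applying that change cluster-wide; wal_sender_timeout does not require a restart, a reload is enough:

```sql
-- 0 means the walsender never times out.
ALTER SYSTEM SET wal_sender_timeout = 0;
SELECT pg_reload_conf();

-- To confirm:
--   SHOW wal_sender_timeout;
```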

Yours,
Laurenz Albe