Does pg_basebackup on a remote machine follow the standard libpq protocol? I
am not able to force it to use ssl, despite having an entry in pg_hba.conf:
hostnossl all all all reject
From the same remote machine, psql is forced to use ssl.
Makes me wonder whether pg_basebackup has a different protocol.
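One thing worth checking (my assumption, not something confirmed here):
pg_basebackup opens a replication connection, and in pg_hba.conf those match
the special "replication" keyword, not "all", so the reject rule above may
never fire for it. A sketch of rules that cover replication connections too:

    # pg_hba.conf: apply the same SSL policy to replication connections
    hostnossl  replication  all  all  reject
    hostssl    replication  all  all  md5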
If pg_basebackup is run from a remote machine with the compress option
--gzip and compress level 9, will the compression occur before the data is
sent over the network, or after it has been received at the remote machine?
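For reference, a typical invocation of the kind being asked about (host,
user, and path are placeholders):

    pg_basebackup -h primary.example.com -U replicator -D /backups/pg10 \
        -Ft -Z 9    # -Z 9 (--compress=9) gzips the tar output at level 9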
--
Is there a way to audit a group, like the following:
alter role db_rw set pgaudit.log = 'read,write,function,ddl'
so that any user who is part of the db_rw role is audited automatically? It
does not seem to work if I connect to the db as rakesh, who is part of the
db_rw role.
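If it helps: settings made with ALTER ROLE ... SET apply only when you
connect as that role itself; they are not inherited by the role's members.
A workaround sketch is to set the parameter on each member directly:

    ALTER ROLE rakesh SET pgaudit.log = 'read,write,function,ddl';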
--
By mask I mean that pgaudit should log where ssn = '123-456-7891' as
where ssn = '?'.
--
Yes, all who interact with HIPAA data are trained on the HIPAA SOP.
--
No, they do selects.
It is fine under HIPAA to view protected data if it is part of your job.
What is not fine is being careless with that protected data and letting an
unauthorized person view it.
--
Is there a way in pgaudit to mask literal SQL like the below:
insert into table (col1,col2) values(1,2)
select * from table where col1 = 1
These SQLs are typed by our QA folks using pgAdmin. pgaudit records them
verbatim, which runs afoul of our HIPAA requirements. Prepared statements
are not an option here.
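For background on why prepared statements come up at all: values bound
separately from the statement (the extended query protocol most drivers use)
show up in the audit record only when pgaudit.log_parameter is on, whereas
literals typed into pgAdmin are part of the statement text and are always
logged verbatim. A sketch of the relevant setting:

    -- off by default; keeps bound parameter values out of audit records
    ALTER SYSTEM SET pgaudit.log_parameter = off;
    SELECT pg_reload_conf();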
I am new to the Docker environment and I see that PG, as a container, is
started with parameters like this:
docker run -it \
--detach \
--name name \
--restart=unless-stopped \
-p 5432:5432 \
-e PGDATA=/var/lib/postgresql/data/pg10 \
-N 500 \
-B 3GB \
-S 6291kB \
-c listen_addresses='*' \
-c effective_cache_size=
Hi
I want to find an easy way to control who not to audit. Let us say we have
50 users, out of which we want to skip monitoring only 5. Is there a way to
set logging rules to include everyone except those 5?
alter system set pgaudit.log will include everyone. From there, how do I
exclude the 5 users?
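One possible sketch (usernames are hypothetical): enable auditing globally,
then override it per role, since a role-level setting takes effect at login
and wins over ALTER SYSTEM:

    ALTER SYSTEM SET pgaudit.log = 'read,write,function,ddl';
    SELECT pg_reload_conf();
    -- repeat for each of the 5 users to be skipped:
    ALTER ROLE batch_user1 SET pgaudit.log = 'none';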
thanks
I installed the latest pgaudit (1.2) with PG 10. I am testing it and I see
that it does not log the login user name and host name.
For example, if user mary is running select * from sensitive_table, I want
mary and the machine from where she ran it in the log.
It seems to log the IDs, which need to be
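For what it's worth, pgaudit writes through the regular server log, so the
standard log_line_prefix escapes can put the user and client host on every
line. A sketch for postgresql.conf:

    # %u = user name, %h = remote host, %m = timestamp, %p = process ID
    log_line_prefix = '%m [%p] user=%u host=%h '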
I am documenting how to automate installation of the pgaudit extension for
containers. On my laptop I see that the directory where the files
pgaudit.control and pgaudit--1.2.sql need to be present is
/usr/share/postgresql/10/extension.
How do I know beforehand where the dir path is?
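One way to look it up beforehand (assuming pg_config is available inside
the image): the extension files go in the "extension" subdirectory of
whatever share directory pg_config reports:

    pg_config --sharedir
    # e.g. /usr/share/postgresql/10 -> files go in .../10/extension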
--
Hey, I am not the container guy. I agree with you 100%.
--
In the container world, sometimes the only persistent storage path (that is,
storage outside the container) is PGDATA. Is it fine to create a
subdirectory inside PGDATA and store our stuff there, or will PG freak out
on seeing a foreign object?
thanks
--
In PG 9.6 or PG 10, is there a way to force only SSL-based connections
coming from pgAdmin or DBeaver?
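A sketch of the usual pg_hba.conf approach (the addresses are examples;
which client tool connects makes no difference): allow SSL connections and
reject everything else:

    hostssl    all  all  10.0.0.0/8  md5
    hostnossl  all  all  0.0.0.0/0   reject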
--
https://blog.revolut.com/no-excuses-we-let-you-down-32f81e64f974
The career section of the company's web page lists PG as part of its tech
stack.
Would be interesting to know the details.
"At around 07:00 BST on Friday morning, our transaction database began to
malfunction. Naturally, we followed
Thanks John and JD.
John: Are you saying that the backup of a database has no protection?
--
We have a requirement to encrypt the entire database. What is the best tool
to accomplish this? Our primary goal is that it should be transparent to the
application, with no change in the application as compared to an unencrypted
database. Reading about the pgcrypto module, it seems it is good for a few
columns
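For context on why pgcrypto falls short of the "transparent, whole database"
goal: it encrypts individual values, so the application must call its
functions explicitly. A sketch (table and key handling are made up):

    CREATE EXTENSION IF NOT EXISTS pgcrypto;
    CREATE TABLE patients (id serial PRIMARY KEY, ssn bytea);
    INSERT INTO patients (ssn)
        VALUES (pgp_sym_encrypt('123-45-6789', 'secret-key'));
    SELECT pgp_sym_decrypt(ssn, 'secret-key') FROM patients;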
You can try DBeaver. It is a generic GUI tool which works with practically
all RDBMSs. It is Java based, and I find it a bit slow. However, judging by
the frequent updates I get, it seems to be very actively developed.
--
>note postgres' WAL archive is by block, not by transaction.
My understanding is that only the first time a block is updated after a
checkpoint is the entire block written to the WAL logs, and for that
full_page_writes has to be set to on.
The only other time PG writes an entire block to the
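One way to see those full-page images in practice (a sketch; the data path
and segment name are examples, and this assumes PG 10's pg_waldump):

    pg_waldump --stats=record -p /var/lib/postgresql/data/pg10/pg_wal \
        000000010000000000000001
    # the FPI column shows bytes spent on full-page images per record type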
Greetings,
>The short answer is 'no'. There are complications around this,
>particularly at the edges and because files can be written and rewritten
>as you're reading them.
>Basically, no file with a timestamp after the
>checkpoint before the backup can be omitted from an incremental backup.
basebackup + WAL archive lets you do exactly this. You can restore to any
transaction between when that basebackup was taken and the latest entry in
the WAL archive; it's referred to in the documentation as PITR,
Point-in-Time Recovery.
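For reference, a minimal PG 10 recovery.conf sketch for that kind of restore
(the archive path and target time are placeholders):

    restore_command = 'cp /mnt/wal_archive/%f %p'
    recovery_target_time = '2018-04-01 12:00:00'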
Yes, John, I do know about using the WAL archive. IMO tha
I found the following SQL on stackoverflow:
SELECT
    pg_last_xlog_receive_location() AS receive,
    pg_last_xlog_replay_location() AS replay,
    (
        extract(epoch FROM now()) -
        extract(epoch FROM pg_last_xact_replay_timestamp())
    )::int AS lag;
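Worth noting if you run this on PG 10, where the xlog functions were
renamed:

    -- PG 10 names for the same functions:
    SELECT pg_last_wal_receive_lsn() AS receive,
           pg_last_wal_replay_lsn() AS replay;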
I get different results on the primary and the slave. On the primary, the
"Are the TPS numbers per pgbench? If so, then you're getting
10x490=4900 TPS system wide, or 20*280=5600 TPS system wide. "
Per pgbench.
Your explanation makes sense. Thanks.