On Fri, Sep 19, 2025 at 4:56 AM 張宸瑋 wrote:
> *log* the creation time of user accounts and database objects.
Yes, indirectly, via event triggers:
https://www.postgresql.org/docs/current/event-trigger-matrix.html
But that's specific to a single database, and it does [NOT fire for CREATE USER/ROLE][1],
only for some object DDL commands.
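For illustration, a minimal sketch of such an event trigger (the log table and function names here are illustrative, not from the thread):

-- Record creation times of supported DDL in the current database.
-- Note: this does NOT fire for CREATE ROLE / CREATE USER.
CREATE TABLE ddl_creation_log (
    created_at      timestamptz NOT NULL DEFAULT now(),
    command_tag     text,
    object_type     text,
    object_identity text
);

CREATE FUNCTION log_ddl_creation() RETURNS event_trigger
LANGUAGE plpgsql AS $$
DECLARE
    r record;
BEGIN
    FOR r IN SELECT * FROM pg_event_trigger_ddl_commands() LOOP
        INSERT INTO ddl_creation_log (command_tag, object_type, object_identity)
        VALUES (r.command_tag, r.object_type, r.object_identity);
    END LOOP;
END;
$$;

CREATE EVENT TRIGGER log_creation ON ddl_command_end
    EXECUTE FUNCTION log_ddl_creation();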
On Thursday, September 18, 2025, 張宸瑋 wrote:
>
>
> I would like to ask if there are any built-in features or extensions in
> open-source PostgreSQL—aside from the AUDIT mechanisms—that allow us to
> query or log the creation time of user accounts and database objects.
>
>
Th
Hello everyone,
I would like to ask if there are any built-in features or extensions in
open-source PostgreSQL—aside from the AUDIT mechanisms—that allow us to
query or log the creation time of user accounts and database objects.
The reason for this inquiry is that while audit logs do capture
Dear all,
I am glad to share an open-source project I've been developing in my free
time. It's a bitemporal solution built entirely within PostgreSQL, and
while it's still in its early beta stages, it has the ambition to provide a
comprehensive framework for temporal data management.
On Tue, Sep 16, 2025 at 7:25 AM Vladlen Popolitov <
v.popoli...@postgrespro.ru> wrote:
> Checksum calculation takes ~0.5% of query time, it is not a bottleneck
> in PostgreSQL.
I consider checksums=on to be a mandatory setting. Often, these types of
things are not bugs in postgres
message immediately in case of failure, and restore
database
from backup (and probably consider the change of the provider)
2) continue with disabled checksums, get program crashes, and finally
restore from backup.
Checksum calculation takes ~0.5% of query time, it is not a bottleneck
in PostgreSQL.
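For reference, a quick check along these lines (a sketch, assuming PostgreSQL 12 or later; enabling checksums on an existing cluster requires the offline pg_checksums utility or initdb --data-checksums):

-- Is the cluster running with data checksums?
SHOW data_checksums;

-- Have any checksum failures been detected so far?
SELECT datname, checksum_failures, checksum_last_failure
FROM pg_stat_database
WHERE checksum_failures > 0;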
I'm wondering if there's any proactive way of detecting that type of
an issue rather than discovering pile-up of SELECT queries leading to
CPU starvation or finding hanged backup jobs.
Thanks in advance for your suggestions!
I was originally running PostgreSQL 17.2 installed from project's deb
packages, under Debian 12
it to rebuild the main server.
>
> I'm wondering if there's any proactive way of detecting that type of
> an issue rather than discovering pile-up of SELECT queries leading to
> CPU starvation or finding hanged backup jobs.
>
> Thanks in advance for your suggesti
> > CPU starvation or finding hanged backup jobs.
> >
> > Thanks in advance for your suggestions!
> >
> > I was originally running PostgreSQL 17.2 installed from project's deb
> > packages, under Debian 12 on amd64. Environment is - without any
> > recent cr
On Sun, 2025-09-14 at 14:47 +0200, Pawel Kudzia wrote:
> On Sun, Sep 14, 2025 at 12:35 PM Laurenz Albe
> wrote:
> > On Sun, 2025-09-14 at 10:30 +0200, Pawel Kudzia wrote:
> > > I've hit a silent data corruption for TOAST data - leading to some
> > > infinite loop when accessing bytea column for v
On Mon, Sep 8, 2025 at 3:37 AM PALAYRET Jacques
wrote:
> Hello,
> Thanks for your response.
>
> # Currently, I have a text function and an array function with *the same
> body* but a *distinct return type*.
> -> Example with the array of text function :
> SELECT public.calfxi3s_all_elements_text_a
a bitemporal solution built entirely within PostgreSQL, and
> while it's still in its early beta stages, it has the ambition to provide a
> comprehensive framework for temporal data management. The project is
> inspired by the need for a simple, yet powerful way to handle historical
>
Cc: pgsql-general@lists.postgresql.org
Sent: Friday, September 5, 2025 18:21:08
Subject: Re: PostgreSQL include directive in plpgsql language PL/pgSQL
On Fri, Sep 5, 2025 at 7:14 AM PALAYRET Jacques <jacques.palay...@meteo.fr> wrote:
Hello,
In
On Mon, Sep 8, 2025 at 5:37 AM PALAYRET Jacques
wrote:
> I can manage with this method, but it would be better with an include
> directive.
>
Even if we had includes, I would strongly advise "this method" which seems
pretty clean and maintainable. What is the issue with SQL functions that
call a
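As a sketch of that pattern (function names here are illustrative, not from the thread), the shared logic can live in one function, with thin wrappers adapting the return type:

-- Shared logic lives in one place.
CREATE FUNCTION common_core(p_value numeric) RETURNS numeric
LANGUAGE sql AS $$ SELECT p_value * 2 $$;

-- Thin wrapper returning text.
CREATE FUNCTION core_as_text(p_value numeric) RETURNS text
LANGUAGE sql AS $$ SELECT common_core(p_value)::text $$;

-- Thin wrapper returning an array.
CREATE FUNCTION core_as_array(p_value numeric) RETURNS numeric[]
LANGUAGE sql AS $$ SELECT ARRAY[common_core(p_value)] $$;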
On Fri, Sep 5, 2025 at 9:14 AM PALAYRET Jacques
wrote:
> Hello,
>
> In a PL/pgSQL function, there is no command for sharing a common part of
> the body of several functions, is there?
>
> In my case, I would like a function that returns a numeric value; this
> value is associated with several oth
On Fri, Sep 5, 2025 at 7:14 AM PALAYRET Jacques
wrote:
> Hello,
>
> In a PL/pgSQL function, there is no command for sharing a common part of
> the body of several functions, is there?
>
> In my case, I would like a function that returns a numeric value; this
> value is associated with several oth
"David G. Johnston" writes:
> On Friday, September 5, 2025, PALAYRET Jacques
> wrote:
>> Is there an equivalent of #include (include directive) in functions
>> (written in plpgsql language)?
> No, there is not.
And there won't be one in the future either, as it would
give server filesystem access
On Friday, September 5, 2025, PALAYRET Jacques
wrote:
>
>
> Is there an equivalent of #include (include directive) in functions
> (written in plpgsql language)?
>
No, there is not.
David J.
Hello,
In a PL/pgSQL function, there is no command for sharing a common part of the
body of several functions, is there?
In my case, I would like a function that returns a numeric value; this value is
associated with several other parameters, some numeric and others textual.
It would not be
Hello Ruben,
I think you have understood my needs. I wanted to know which rpm packages I
have to install for PostgreSQL 17.6.
The problem is that the command provided at
https://stormatics.tech/blogs/offline-postgresql-installation-on-rhel
ken out.)
On Mon, Aug 18, 2025 at 1:07 PM COURTAULT Francois <
francois.courta...@thalesgroup.com> wrote:
>
>
>
> Hello again,
>
>
>
> I haven’t found any documentation describing the way to install Postg
COURTAULT Francois writes:
> I haven't found any documentation describing the way to install PostgreSQL
> using a list of RPM packages to apply!
The core project wouldn't document that, because we just ship source
code not RPMs. There might be a README in the core postgr
Hello again,
I haven't found any documentation describing the way to install PostgreSQL using
a list of RPM packages to apply!
Best Regards.
From: COURTAULT Francois
Sent: lundi 18 août 2025 18:55
To: pgsql-general@lists.postgresq
Hello everyone,
I want to install PostgreSQL 17.6 on a machine (Red Hat 9) that is not connected to
the Internet.
What are the minimum RPM packages to install?
Best Regards.
On 8/12/25 08:13, Ron Johnson wrote:
This link says that there's an ADBC driver for Postgresql: https://arrow.apache.org/adbc/current/driver/status.html
I believe that is a different beast.
See:
https://help.sap
This link says that there's an ADBC driver for Postgresql:
https://arrow.apache.org/adbc/current/driver/status.html
You should probably research _IT_ and SAP as to why your SAP installation
does not have the PG driver.
On Tue, Aug 12, 2025 at 10:36 AM Ian Huang wrote:
> Hi Greg,
>
&
Hi Greg,
Thanks for your feedback.
I’ve tried connecting to PostgreSQL using the ADBC method, but
unfortunately the SAP Kernel doesn’t have the corresponding .os (DBMS)
file, so the connection could not be established.
May I ask if connecting via ADBC is actually feasible in this case?
Best
That's a little too vague of a question, but yes, Postgres should be able
to handle things just fine. The amount of SAP tools and SAP-supporting ETL
tools that also support Postgres should be a strong encouragement. I'd
recommend looking at the specific tools you would use and confirming they
have
On 8/6/25 14:39, Rumpi Gravenstein wrote:
That having a table and function with similar names causes this problem
is a bug.
Actually it is that a type and a function have the same name. The
connection to the table is that that's where the type gets its name. Further
complicated by you using _type_name
On Wed, Aug 6, 2025 at 2:39 PM Rumpi Gravenstein wrote:
> That having a table and function with similar names causes this problem is
> a bug.
>
>
Arguably. But it seems likely it's one we are not going to fix.
David J.
"David G. Johnston" writes:
> On Wednesday, August 6, 2025, Rumpi Gravenstein wrote:
>> Here's a reproducible test case that causes the problem in different
>> schemas. The issue appears to be related to creating a table and a
>> function that has the same name as the table with a prepended underscore.
That having a table and function with similar names causes this problem is
a bug.
Going forward I'll be sure to avoid the problem.
On Wed, Aug 6, 2025 at 5:35 PM David G. Johnston
wrote:
> On Wednesday, August 6, 2025, Rumpi Gravenstein
> wrote:
>
>> Here's a reproducible test case that causes
On Wednesday, August 6, 2025, Rumpi Gravenstein wrote:
> Here's a reproducible test case that causes the problem in different
> schemas. The issue appears to be related to creating a table and a
> function that has the same name as the table with a prepended underscore.
>
Don’t do that. Naming
version
-----
PostgreSQL 16.9 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.5.0
20210514 (Red Hat 8.5.0-26), 64-bit
(1 row)
xxx_pub_dev_2_db=>
xxx_pub_dev_2_db=> drop table if exists rumpi_test;
DROP TABLE
xxx_pub_dev_2_db=>
xxx_pub_dev_2_db=> creat
On Wednesday, August 6, 2025, Rumpi Gravenstein wrote:
>
> xxx_pub_dev_2_db=# drop FUNCTION if exists _sa_setup_role;
> DROP FUNCTION
> xxx_pub_dev_2_db=# select proname, pronamespace, oid from pg_proc where
> proname like '%sa_setup%';
> proname | pronamespace | oid
> -+--+--
On 8/6/25 1:17 PM, Rumpi Gravenstein wrote:
Just coming back to this. Don't know how to interpret this:
xxx_pub_dev_2_db=# select proname, pronamespace, oid from pg_proc where
proname like '%sa_setup%';
proname | pronamespace | oid
+--+-
_sa_s
Just coming back to this. Don't know how to interpret this:
xxx_pub_dev_2_db=# select proname, pronamespace, oid from pg_proc where
proname like '%sa_setup%';
    proname     | pronamespace |   oid
----------------+--------------+---------
 _sa_setup_role |      7038406 | 7869125
(1 row)
xxx_pub_
Puzzling.
I'd do:
pg_dump --schema-only xxx_pub_dev_2_db | grep -i -A5 _sa_setup_role
Note the -i. That _might_ be important.
On Wed, Aug 6, 2025 at 4:18 PM Rumpi Gravenstein wrote:
> Just coming back to this. Don't know how to interpret this:
>
> xxx_pub_dev_2_db=# select proname, pronamesp
Dear PostgreSQL Community Team,
I hope this message finds you well.
I’m currently exploring options for implementing a staging (intermediate)
database to handle outbound data exports from an SAP ERP system. I would
like to inquire whether PostgreSQL is considered a suitable and stable
solution
On Tue, 2025-07-29 at 15:00 +0800, yexiu-glory wrote:
> The approach of adding unique indexes is not suitable for our business.
Well, then I guess that logical replication is not suitable for your business.
Yours,
Laurenz Albe
>>>On Fri, 2025-07-18 at 19:08 +0800, yexiu-glory wrote:
>
>>>I'm facing a problem here: our business requires logical data
replication to other departments,
>>>but at the same time, sensitive fields need to be filtered out.
Therefore,
>>>we used the column filtering function when creating logica
ll column field
Or create Surrogate Primary Key
2. Use that index as the REPLICA IDENTITY.
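A sketch of those steps (table and column names here are hypothetical):

-- Unique index on a NOT NULL column (or surrogate key), used as the
-- replica identity so filtered logical replication can still identify
-- rows for UPDATE/DELETE.
CREATE UNIQUE INDEX table1_sk_idx ON table1 (surrogate_id);
ALTER TABLE table1 ALTER COLUMN surrogate_id SET NOT NULL;
ALTER TABLE table1 REPLICA IDENTITY USING INDEX table1_sk_idx;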
Regards
Dinesh Nair
From: yexiu-glory
Sent: Friday, July 18, 2025 4:38 PM
To: pgsql-general
Subject: Regarding logical replication issues with PostgreSQL version
, Neon)
> > also share the storage between instances,
>
> That model is cool, but I think it's more of a solution for outliers as I was
> suggesting, not something that most would or should want.
>
> Best,
> Pierre
>
> On Sat, Jul 26, 2025, at 09:42, Vladimir Chu
k it's more of a solution for outliers as I was
suggesting, not something that most would or should want.
Best,
Pierre
On Sat, Jul 26, 2025, at 09:42, Vladimir Churyukin wrote:
> Sorry, I was referring to this:
>
> > But when PostgreSQL instances share storage rather than r
Sorry, I was referring to this:
> But when PostgreSQL instances share storage rather than replicate:
> - Consistency seems maintained (same data)
> - Availability seems maintained (client can always promote an accessible
node)
> - Partitions between PostgreSQL nodes don't preven
l 18, 2025 at 5:31 AM Pierre Barre wrote:
>> Now, I'm trying to understand how CAP theorem applies here. Traditional
>> PostgreSQL replication has clear CAP trade-offs - you choose between
>> consistency and availability during partitions.
>>
>> But when Post
w CAP theorem applies here. Traditional
> PostgreSQL replication has clear CAP trade-offs - you choose between
> consistency and availability during partitions.
>
> But when PostgreSQL instances share storage rather than replicate:
> - Consistency seems maintained (same data)
>
> latency average = 337.762 ms
> initial connection time = 43.969 ms
> tps = 296.066616 (without initial connection time)
>
> Best,
> Pierre
>
>
> On Fri, Jul 25, 2025, at 11:25, Pierre Barre wrote:
>> Hi,
>>
>> I went ahead and did that test.
>>
>> Here
Barre wrote:
> Hi,
>
> I went ahead and did that test.
>
> Here is the postgresql config I used for reference (note the wal
> options (recycle, init_zero) as well as full_page_writes = off, because
> ZeroFS cannot have torn writes by design).
>
>
Laurenz Albe writes:
> On Fri, 2025-07-25 at 14:10 -0400, Tom Lane wrote:
>> So ... any chance you have a data type named _sa_setup_role?
> ... it could also be a type "sa_setup_role", and "_sa_setup_role"
> is interpreted as the corresponding array type:
Oh, of course --- that's a good deal mor
On Fri, 2025-07-25 at 14:10 -0400, Tom Lane wrote:
> After looking at the code I remembered that the parser might be taking
> this as a type coercion request. With that idea, I can duplicate the
> observed behavior like so:
>
> regression=# select _sa_setup_role('af_repo_app');
> INFO: af_repo_a
I wrote:
> Well, there's got to be *something* different about that database.
After looking at the code I remembered that the parser might be taking
this as a type coercion request. With that idea, I can duplicate the
observed behavior like so:
regression=# select _sa_setup_role('af_repo_app');
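As a sketch of the mechanism being described (using the same names for illustration): creating a table also creates a composite type of the same name, and PostgreSQL names that type's array type with a prepended underscore, so call syntax on the underscored name can resolve as a type coercion rather than a function call.

CREATE TABLE sa_setup_role (rolename text);

-- Both the table's row type and its auto-created array type show up here;
-- the array type's name is the table name with a leading underscore.
SELECT typname FROM pg_type
WHERE typname IN ('sa_setup_role', '_sa_setup_role');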
Rumpi Gravenstein writes:
> Our databases are deployed with automation tools. They should all be
> created the same. They all have the same 17 extensions. I've asked a DBA
> to confirm.
Well, there's got to be *something* different about that database.
> This issue only appears in the functio
Now I'm wondering about stray entries in pg_cast. Also,
do you have any extensions loaded in that DB that aren't
in your other ones?
Our databases are deployed with automation tools. They should all be
created the same. They all have the same 17 extensions. I've asked a DBA
to confirm.
This
Rumpi Gravenstein writes:
> No ... just one version:
D'oh, actually this would have complained if there was more
than one match, so that theory is wrong:
> _pub_dev_2_db=# DROP FUNCTION if exists _sa_setup_role;
> DROP FUNCTION
Now I'm wondering about stray entries in pg_cast. Also,
do you
-
PostgreSQL 16.9 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.5.0
20210514 (Red Hat 8.5.0-26), 64-bit
(1 row)
_pub_dev_2_db=# SHOW server_version;
server_version
16.9
(1 row)
_pub_dev_2_db=# DROP FUNCTION if exists _sa_setup_role;
DROP FUNCTION
On 7/25/25 09:59, Rumpi Gravenstein wrote:
No ... just one version:
_pub_dev_2_db=# \df _sa_setup_role
Do:
\df *._sa_setup_role
--
Rumpi Gravenstein
--
Adrian Klaver
adrian.kla...@aklaver.com
25 at 12:54 PM Tom Lane wrote:
> Rumpi Gravenstein writes:
> > I've been confounded by the following behavior that I see in one of our
> > PostgreSQL 16 instances. In this case I am running this script from
> psql.
>
> I'd bet there is another function named _
Rumpi Gravenstein writes:
> I've been confounded by the following behavior that I see in one of our
> PostgreSQL 16 instances. In this case I am running this script from psql.
I'd bet there is another function named _sa_setup_role() that takes
some kind of array, and the parser
On 7/25/25 09:36, Rumpi Gravenstein wrote:
PostgreSQL Experts,
I've been confounded by the following behavior that I see in one of our
PostgreSQL 16 instances. In this case I am running this script from
PostgreSQL Experts,
I've been confounded by the following behavior that I see in one of our
PostgreSQL 16 instances. In this case I am running this script from psql.
-
_pub_dev
Hi,
I went ahead and did that test.
Here is the postgresql config I used for reference (note the wal options
(recycle, init_zero) as well as full_page_writes = off, because ZeroFS cannot
have torn writes by design).
https://gist.github.com/Barre/8d68f0d00446389998a31f4e60f3276d
Test was
> This then begs the obvious question of how fast is this with
> synchronous_commit = on?
Probably not awful, especially with commit_delay.
I'll try that and report back.
Best,
Pierre
On Fri, Jul 25, 2025, at 00:03, Jeff Ross wrote:
> On 7/24/25 13:50, Pierre Barre wrote:
>
>> It’s not “safe”
! I applaud you.
>
> On Thu, Jul 17, 2025, 4:59 PM Pierre Barre wrote:
>> Hi everyone,
>>
>> I wanted to share a project I've been working on that enables PostgreSQL to
>> run on S3 storage while maintaining performance comparable to local NVMe.
>> The ap
>
> I wanted to share a project I've been working on that enables PostgreSQL
> to run on S3 storage while maintaining performance comparable to local
> NVMe. The approach uses block-level access rather than trying to map
> filesystem operations to S3 objects.
>
> ZeroFS: ht
On 7/24/25 13:50, Pierre Barre wrote:
It’s not “safe” or “unsafe”; there are mountains of valid workloads which don’t
require synchronous_commit. synchronous_commit doesn’t make your system
automatically safe either, and if that’s a requirement, there are many
workarounds; as you suggested, it cert
It’s not “safe” or “unsafe”; there are mountains of valid workloads which don’t
require synchronous_commit. synchronous_commit doesn’t make your system
automatically safe either, and if that’s a requirement, there are many
workarounds; as you suggested, it certainly doesn’t make the setup useless.
Be
On Fri, Jul 18, 2025 at 12:57:39PM +0200, Pierre Barre wrote:
> - Postgres configured accordingly memory-wise as well as with
> synchronous_commit = off, wal_init_zero = off and wal_recycle = off.
Bingo. That's why it's fast (synchronous_commit = off). It's also why
it's not safe _unless_ you
.
+1
> > P.S. The full project includes a custom NFS filesystem too.
>
> "NFS" is a key word that does not inspire confidence in
> PostgreSQL circles...
Certainly NFSv3 should. NFSv4 is much safer but I've no experience
running PG on it and I assume there will be cases where recovery from
network and/or server failures is slow.
> "NFS" is a key word that does not inspire confidence in
PostgreSQL circles...
Coming back to this, I just implemented 9P, which should translate to proper
semantics for FSYNC.
mount -t 9p -o trans=tcp,port=5564,version=9p2000.L,msize=65536,access=user
127.0.0.1 /mnt/9p
Best,
P
On Wed, 23 Jul 2025 12:13:26 +0100
Achilleas Mantzios wrote:
…
> Have you considered pgpool-ii? I know most support / service companies
> push for patroni, and manually separating read / write traffic from
> within the app,
Indeed
> but I still find pgpool's query routing very nice.
Using Pgp
On 6/18/25 18:12, vijay patil wrote:
Hi Team,
I am planning to implement a PostgreSQL High Availability (HA) and
Load Balancing setup and would greatly appreciate your feedback on the
architecture we've designed (see attached diagram).
image.png
*Overview of the Setup:*
*
>
> Is this architecture considered a best practice within the PostgreSQL
> community?
I would say Patroni is probably "best practice", but there are other people
out there happy with, and expert with, repmgr as well.
Are there any potential bottlenecks or failure points I
> Our setup: 5 Node Patroni Cluster with PostgreSQL 16.9.
> db1: current leader
> db2: sync-replica
> db3/4/5: replica
>
> The replicas connect to the leader using the host IP of the leader. So there
> are
> 4 walsender for patroni, 1 sync and 3 async.
>
> The p
(Note: I have also attached the whole email for better readability of the logs)
Hello!
Our setup: 5 Node Patroni Cluster with PostgreSQL 16.9.
db1: current leader
db2: sync-replica
db3/4/5: replica
The replicas connect to the leader using the host IP of the leader. So there
are 4 walsender for patroni, 1 sync and 3 async.
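For reference, a query along these lines (a sketch), run on the leader, shows the walsenders and their sync state:

SELECT application_name, client_addr, state, sync_state
FROM pg_stat_replication;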
On Fri, 2025-07-18 at 19:08 +0800, yexiu-glory wrote:
> I'm facing a problem here: our business requires logical data replication to
> other
> departments, but at the same time, sensitive fields need to be filtered out.
> Therefore, we used the column filtering function when creating logical
> re
u clarify how exactly you're running
>>> postgres in your tests? A specific AWS service? What's the test
>>> infrastructure that sits above the file system?
>>>
>>> On Thu, Jul 17, 2025 at 11:59 PM Pierre Barre wrote:
>>>> Hi everyone,
>>
u clarify how exactly you're running
> postgres in your tests? A specific AWS service? What's the test
> infrastructure that sits above the file system?
>
> On Thu, Jul 17, 2025 at 11:59 PM Pierre Barre wrote:
>
> Hi everyone,
>
> I wanted to share a project I
Now, I'm trying to understand how CAP theorem applies here. Traditional
PostgreSQL replication has clear CAP trade-offs - you choose between
consistency and availability during partitions.
But when PostgreSQL instances share storage rather than replicate:
- Consistency seems maintained
I'm facing a problem here: our business requires logical data replication to
other departments, but at the same time, sensitive fields need to be filtered
out. Therefore, we used the column filtering function when creating logical
replication. If we use `alter table table1 replica identity default`
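As a sketch of the column-filtering side (hypothetical names; PostgreSQL 15+ publication column lists), publish only the non-sensitive columns and make sure the replica identity columns are included:

-- Only the listed columns are replicated; the column list must cover the
-- table's replica identity columns, or UPDATEs/DELETEs on table1 will fail
-- to be published.
CREATE PUBLICATION dept_pub
    FOR TABLE table1 (id, public_col1, public_col2);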
fy how exactly you're running
> postgres in your tests? A specific AWS service? What's the test
> infrastructure that sits above the file system?
>
> On Thu, Jul 17, 2025 at 11:59 PM Pierre Barre wrote:
>> Hi everyone,
>>
>> I wanted to share a project I
ote:
> Hi everyone,
>
> I wanted to share a project I've been working on that enables PostgreSQL
> to run on S3 storage while maintaining performance comparable to local
> NVMe. The approach uses block-level access rather than trying to map
> filesystem operations to S3 object
hole system gradually by starting with throwable workloads
(e.g., persistent volumes in CI), then moving to data we can afford to lose,
then backups, and finally to production data.
>> P.S. The full project includes a custom NFS filesystem too.
>
> "NFS" is a key word th
it is not everything. If things
perform surprisingly well, people become suspicious.
> P.S. The full project includes a custom NFS filesystem too.
"NFS" is a key word that does not inspire confidence in
PostgreSQL circles...
Yours,
Laurenz Albe
Hi everyone,
I wanted to share a project I've been working on that enables PostgreSQL to run
on S3 storage while maintaining performance comparable to local NVMe. The
approach uses block-level access rather than trying to map filesystem
operations to S3 objects.
ZeroFS: https://githu
On Wed, Jul 16, 2025 at 8:42 PM Greg Sabino Mullane
wrote:
> On Wed, Jul 16, 2025 at 9:25 AM Amol Inamdar wrote:
>
>>
>>1. NFS mount point is for /nfs-mount/postgres (and permissions locked
>>down so that Postgres cannot create directories in here)
>>2. Postgres data directory is /nf
Amol Inamdar writes:
> @Laurenz Albe
>> If you pre-create the data directory with the appropriate permissions,
>> what keeps you from giving ownership to the correct user too?
> Our NFS server is not a regular linux based server,
> it's on zOS (Mainframes) with AT-TLS security enabled,
> hence i
ol will ensure only the Postgres user can
> read/write to
> > this directory)
> > Considering the above scenario/setup, what is the danger of removing the
> ownership check
> > in miscinit.c checkDataDir() function ?
>
> The danger is that somebody else than the Postgr
On Wed, Jul 16, 2025 at 9:25 AM Amol Inamdar wrote:
>
>1. NFS mount point is for /nfs-mount/postgres (and permissions locked
>down so that Postgres cannot create directories in here)
>2. Postgres data directory is /nfs-mount/postgres/db
>3.
>
>With secured NFS + AT-TLS setup P
cinit.c checkDataDir() function ?
The danger is that somebody else than the PostgreSQL user has permissions on
the data directory. You will argue that that somebody is root, and root has
these permissions anyway.
But there is another reason why PostgreSQL insists that the PostgreSQL user
owns th
Quoting Tom's earlier email:
"(But I too *would not use Postgres-over-NFS for any critical data*.
Too many moving parts. It's tough enough to ensure crash safety
with local storage.)"
You're going through a lot of security effort to implement a Worst Practice.
On Wed, Jul 16, 2025 at 9:25 AM Amo
Hi All,
I would like to rephrase the question a little bit, below is how our setup
going to be
1. NFS mount point is for /nfs-mount/postgres (and permissions locked
down so that Postgres cannot create directories in here)
2. Postgres data directory is /nfs-mount/postgres/db
3.
Wit
"gzh" wrote:
Dear,
I am encountering an issue with the COPY command in PostgreSQL regarding the
handling of line breaks in data fields. My PostgreSQL instance is installed on
a Linux system.
When I use the COPY command to export data to a CSV file and then import the
same CSV file back into
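For reference, a minimal sketch of the round trip being described (table and path are illustrative; psql's \copy works similarly from the client). In CSV format, fields containing line breaks are quoted and should survive the export/import:

COPY my_table TO '/tmp/out.csv' WITH (FORMAT csv, HEADER true);
COPY my_table FROM '/tmp/out.csv' WITH (FORMAT csv, HEADER true);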
Thanks Tom and Laurenz for the explanation.
Let me try out a few things and get back to you if needed.
Thanks,
Amol
On Mon, Jul 14, 2025 at 7:37 PM Tom Lane wrote:
> Laurenz Albe writes:
> > It is not a good idea to have a mount point be the data directory.
>
> ^^^ This. ^^^
>
> That is primar
Thanks Laurenz.
On Mon, Jul 14, 2025 at 8:11 PM Laurenz Albe
wrote:
> On Mon, 2025-07-14 at 18:32 +0530, Amol Inamdar wrote:
> > > The data directory can either be created by "initdb", in which case
> > > the mount point must allow the PostgreSQL user to create a
em, there is a lost+found
directory where fsck places links to orphaned inodes.
If the PostgreSQL user owns the mount point and wants to use
"initdb" to create a data directory in it, the program will fail
and complain that the directory is not empty. The danger is great
that the use
Benjamin Wang writes:
> I am not sure whether PostgreSQL depends on the system call `fdatasync` to
> sync data to disk. If yes, then I don't think it's safe to use NFS.
Well, that's a whole other discussion. The point about mount
directories applies to any sort of dismountable
On Mon, Jul 14, 2025 at 12:02 PM Benjamin Wang
wrote:
> I am not sure whether PostgreSQL depends on the system call `fdatasync` to
> sync data to disk.
>
https://www.postgresql.org/docs/current/runtime-config-wal.html#GUC-WAL-SYNC-METHOD
David J.
I am not sure whether PostgreSQL depends on the system call `fdatasync` to
sync data to disk. If yes, then I don't think it's safe to use NFS.
When `fdatasync` returns success, it doesn't mean the data has
really been synced to disk. But if PostgreSQL crashes right after
it returns suc
"Peter J. Holzer" writes:
> On 2025-07-14 10:07:20 -0400, Tom Lane wrote:
>> That is primarily for safety reasons: if for some reason the
>> filesystem gets dismounted, or hasn't come on-line yet during
>> a reboot, you do not want Postgres to be able to write on the
>> underlying mount-point dire