Re: Non-Stored Generated Columns

2024-02-28 Thread Dominique Devienne
On Wed, Feb 28, 2024 at 8:11 PM Tom Lane  wrote:

> Dominique Devienne  writes:
> > Views can have foreign-keys?
>
> Surely you'd put the FK on the underlying table.
>

Again, the FKs are on the *generated* columns.


> > Generated view columns be indexed?
>
> [...] it's hard to see much use-case here
>

The use-case is Polymorphic Foreign Keys (aka PFKs).
I've already outlined it on this list in the past.

For NULL'able PFKs, you have two read-write concrete columns, the ID and
the CODE (aka TYPE) of the relation.
Then you have 1 generated column (that I call XArc) per possible CODE/TYPE
value (for that specific relation), which is either ID or NULL, depending
on the CODE.
And the "regular" FK is on that generated column (ideally Non-Stored, the
whole point of this thread), since it points to a single table now.
And since these FKs are CASCADE'ing, you want them INDEX'ed of course.
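A minimal sketch of the NULL'able variant (all table, column, and type names below are hypothetical, not from our actual schemas; the FKs use ON DELETE CASCADE, since PostgreSQL restricts some other actions on generated referencing columns):

```sql
-- Hypothetical example: each "attachment" row points polymorphically
-- to either a "document" or a "folder" row.
CREATE TABLE attachment (
    attachment_id bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    target_code   text   CHECK (target_code IN ('DOC', 'DIR')),
    target_id     bigint,
    -- One generated "XArc" column per possible CODE value;
    -- at most one of them is non-NULL for a given row:
    doc_arc bigint GENERATED ALWAYS AS
              (CASE WHEN target_code = 'DOC' THEN target_id END) STORED
            REFERENCES document ON DELETE CASCADE,
    dir_arc bigint GENERATED ALWAYS AS
              (CASE WHEN target_code = 'DIR' THEN target_id END) STORED
            REFERENCES folder ON DELETE CASCADE
);
-- The CASCADE'ing FKs want indexes on the referencing side:
CREATE INDEX ON attachment (doc_arc);
CREATE INDEX ON attachment (dir_arc);
```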

For NOT NULL PFKs, that's more of a PITA, because you cannot SET NULL a
generated column (there's no INSTEAD OF trigger on generated columns).
So instead we need a BEFORE INSERT/UPDATE trigger on the ID column, to
dispatch to the proper per-TYPE-value column, which must now be concrete.
And make sure those now-non-generated XArc columns are always in sync with
ID/CODE.
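A sketch of that trigger approach, using hypothetical names (an attachment table whose target_code dispatches target_id into concrete doc_arc/dir_arc columns):

```sql
-- The arc columns are now plain (concrete) columns, kept in sync by a
-- BEFORE trigger instead of GENERATED ALWAYS AS:
CREATE FUNCTION attachment_dispatch_arcs() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
    -- Dispatch target_id to the arc column matching target_code:
    NEW.doc_arc := CASE WHEN NEW.target_code = 'DOC' THEN NEW.target_id END;
    NEW.dir_arc := CASE WHEN NEW.target_code = 'DIR' THEN NEW.target_id END;
    RETURN NEW;
END;
$$;

CREATE TRIGGER attachment_arcs
BEFORE INSERT OR UPDATE OF target_id, target_code ON attachment
FOR EACH ROW EXECUTE FUNCTION attachment_dispatch_arcs();
```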

That's a PITA to implement; I really wish this was built into PostgreSQL
(albeit non-standard SQL).
Especially given that PostgreSQL has table inheritance, which doesn't work
with FKs.
(in our case, some PFKs are not inheritance based, even though we do have
inheritance in our logical models).
But at least it's fully automated in our case, since the physical schemas
are machine generated from logical ones.

In one of our smaller schemas, 25 tables and 293 columns, we have 18 PFK
virtual columns, i.e. 6% of the columns. So small, but not "that small".
We have PFK cardinalities (number of XArcs) that can go up to 8, in that
one small schema.
When the cardinality gets too high, we sometimes explicitly choose to
disable referential integrity for specific PFKs.

So as Ron says: if SQLite and Oracle have them, that's not an accident.
And there's a real, concrete use-case behind it. Albeit an unusual one.
Anyone writing in an OO language (Java, C++, etc.) will recognize the
polymorphism pattern (yes, often abused).
And in our case, it's a long established pattern (20 years old) in our data
models, transcribed to SQL and relational.
When this was started with SQLite, it wasn't enforced at the relational
level, but at the mid-tier C++ level.
But now that we are moving to PostgreSQL with direct SQL access (two-tier),
it must be enforced server-side by PostgreSQL.

So, to conclude, it works with PostgreSQL now. But it's more wasteful than
it could/should be, because of the current STORED-only limitation.

Thanks, --DD

PS: Since we are on this subject, I'd like to take an opportunity to ask a
question I've had for a long time, but didn't have time to research.
For a given PFK, all its XArc columns are mutually-exclusive (by design,
see above). So their indexes are full of NULLs.
Are the NULLs stored in these indexes, consuming extra space? In our use
case, we don't care about NULLs.
So should we use an expression index to explicitly not index the NULLs?
The sum of indexes for a given PFK should in theory be the same as a single
index, whatever the cardinality of the PFK, "logically".
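On the NULL question: btree indexes in PostgreSQL do store NULL entries. A partial index (rather than an expression index) is, I believe, the usual way to leave them out; FK-cascade lookups probe specific non-NULL values, so they can still use it. A sketch with hypothetical names (an attachment table with mutually-exclusive doc_arc/dir_arc columns):

```sql
-- NULLs are excluded, so the sum of all per-arc indexes stays close
-- to the size of one full index over the whole table:
CREATE INDEX ON attachment (doc_arc) WHERE doc_arc IS NOT NULL;
CREATE INDEX ON attachment (dir_arc) WHERE dir_arc IS NOT NULL;
```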


Where the data stitching/update/deduplication should happen

2024-02-28 Thread yudhi s
Hello All,
It's related to data flowing into an OLTP database, which is most likely
going to be Postgres.

We are designing a system that moves data from input files (in Avro
format) through GoldenGate to Kafka topics and into the database: incoming
files --> GGS --> Kafka --> OLTP database. This will be a heavily
transactional system processing ~10K txn/second, and the database is
supposed to show near-real-time transactions to the users. The
transactions coming from the Kafka topics are asynchronous in nature, and
there is also a chance of duplicate data being ingested from the topics.
So the data has to be stitched/updated/deduplicated into a complete
transaction before showing it to the users, or say before persisting it to
the normalized data model that end users will query.

So where should we perform this stitching/update/deduplication in this
workflow? Should it happen inside the application, somewhere in the Kafka
consumer (using the poison-pill concept)? Or should it happen in a stage
schema in the database, persisting the pieces of each transaction as they
arrive from the Kafka topics? Adding another stage layer within the
database adds more time before the data becomes visible to the users, so
it may no longer be near real time, since it takes extra time to move the
data from the stage tables to the main transaction/normalized tables.
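One common shape for the stage-schema variant, sketched with made-up table and column names: land each Kafka message as-is, keyed by its business identifiers, and let the move into the normalized model drop the replays:

```sql
-- Stage table: raw pieces land as they arrive from Kafka, keyed by the
-- message's business identifiers so that replayed messages collide here:
CREATE TABLE stage_txn (
    txn_id      bigint,
    piece_no    int,
    payload     jsonb,
    ingested_at timestamptz DEFAULT now(),
    PRIMARY KEY (txn_id, piece_no)
);

-- Periodic (or trigger-driven) move into the normalized table, assuming
-- "txn" has the same key; re-delivered pieces are silently dropped:
INSERT INTO txn (txn_id, piece_no, payload)
SELECT txn_id, piece_no, payload
FROM   stage_txn
ON CONFLICT (txn_id, piece_no) DO NOTHING;
```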

Or should we persist the data as-is in the stage area and serve it from
stage for users who are okay with partial transaction data, while showing
complete transactions, with some delay, from the normalized tables to the
users who want them complete?

What is the appropriate design to address such use cases?

Regards
Yudhi


RE: walreceiver fails on asynchronous replica [EXTERNAL] [SEC=UNOFFICIAL]

2024-02-28 Thread Mark Schloss
UNOFFICIAL
Hello,

Thanks for looking at this. I think I complicated things by including Barman. I 
just wanted to point out that each primary streams to two locations - the 
walreceiver on the replica and the walreceiver used by Barman. We think the 
reason the Barman WAL-receiver didn't fail is that Barman does not apply the 
WAL, while the standby is applying it and its WAL-receiver got terminated. So 
the Barman server can be taken out of this picture completely; it was only 
there as a by-product of trying to determine the effect. We are only 
interested in the killing of the standby walreceiver, and in the fact that 
pg_waldump showed the failing LSN was a commit.

Thanks







-Original Message-
From: Kyotaro Horiguchi  
Sent: Wednesday, 28 February 2024 5:09 PM
To: Mark Schloss 
Cc: pgsql-general@lists.postgresql.org
Subject: Re: walreceiver fails on asynchronous replica [EXTERNAL] 
[SEC=UNOFFICIAL]

At Wed, 28 Feb 2024 03:23:37 +, Mark Schloss  
wrote in 
> Thank you for your reply. I can confirm there were no changes made to the 
> config of the replica.

Okay.

> Is there any significance in the parameters in the commit record -
> 'inval msgs: catcache 21; sync'.

I think it is not relevant.

> - The walreceiver on the barman server did not fail but the WAL file does not 
> contain the commit transaction

I don't have detailed knowledge of barman's behavior, but it seems to
be quite normal for barman to write out only on receiving a commit
record. What I don't understand here is how those WAL files on the
barman server are related to the failed replicas.

From the information you provided, I guess that the replicas somehow
obtain the currently-written WAL file from the barman server at a
certain time through a route unknown to me, but you seem to know
that. I think that relationship has not been explained here.

Could you explain the routes and timings that WAL files are copied
between the servers?

regards.

-- 
Kyotaro Horiguchi
NTT Open Source Software Center

**
Please  note  that  your  email address  is known to  AUSTRAC  for the
purposes  of  communicating with you.  The information  transmitted in
this  e-mail is  for the  use of  the intended  recipient only and may
contain confidential and/or legally  privileged  material. If you have
received  this information  in error you must not disseminate, copy or
take  any  action on  it and we  request that you delete all copies of
this transmission together with attachments and notify the sender.

This footnote also confirms that this email message has been swept for
the presence of computer viruses.
**




Content for talk on Postgres Type System at PostgresConf

2024-02-28 Thread Guyren Howe
I am to talk about Postgres’s type system at PGConf:

https://postgresconf.org/conferences/2024/program/proposals/postgres-s-type-system

I picked the issue because I think it’s poorly understood, greatly 
under-discussed, and an excellent way to empower postgres users.

I am reasonably conversant with the issue. I’m not looking for others to write 
the talk for me, but in order to make the best talk I can, I’m asking:

What would you want to see in such a talk?

I’m planning on covering:

- The built-in types that are underused and their advantages (eg inet)
- domains
- such things as details of arrays including multidimensional arrays
- user-defined types, their relationship to tables, and generally how to use 
them

I would spend most of the time discussing ways to make effective use of types. 
Some examples:
- defining functions of rows so the table can be used kind of like a set of 
objects (including the dot notation for invoking functions)
- using UDFs to make code clearer eg if you have an idiosyncratic functional 
index, define it using a function of the row, so it’s easy to get right when 
querying
- using UDFs as a kind of better domain. eg differentiating imperial from 
metric units by requiring an explicit constructor, not just accepting any old 
number
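The last bullet can be sketched roughly like this (a hypothetical illustration, not necessarily the talk's actual example):

```sql
-- A one-field composite type plus an explicit constructor function:
-- a bare number can no longer be used where meters are expected.
CREATE TYPE meters AS (val numeric);

CREATE FUNCTION meters(numeric) RETURNS meters
    LANGUAGE sql IMMUTABLE
    AS $$ SELECT ROW($1)::meters $$;

-- meters(3.5) is explicit; there is no implicit cast from numeric,
-- so mixing metric and imperial values fails at the type level.
```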

I would mention enumerated types, although I’m inclined to advise that their 
inflexibility (eg can’t delete or rearrange them) means that a related table is 
probably better (I’d be delighted to be proved wrong).

Custom Range Types are an interesting feature, but I can’t think of a good use 
case. Any good examples?

Automatic casting is a feature I’m aware of, but I’d be interested in any cool 
use cases.


Anything I’m missing? Any existing good discussions of the subject?

Re: Excel Source [24]] Error: System.Exception: SqlTruncateException: Numeric arithmetic causes

2024-02-28 Thread Erik Wienhold
On 2024-02-28 21:22 +0100, Anthony Apollis wrote:
> Please assist. I am using SSIS to read data from an Excel sheet into
> Postgres. I have increased the column size a few times, just cant seem to
> get the data in. Getting errors relating to destination column size.

What is the data type of that column?

> [Excel Source [24]] Error: System.Exception: SqlTruncateException:
> Numeric arithmetic causes truncation.. RowNumber=1, ColumnName=Metric
> Value [DataType=DT_NUMERIC,Length=0], Data=[Length=12,
> Value=36655.63]
>at 
> ZappySys.PowerPack.Adapter.SqlDataComponentBase.HandleException(Exception
> ex)
>at ZappySys.PowerPack.Adapter.SqlDataComponentBase.PrimeOutput(Int32
> outputs, Int32[] outputIDs, PipelineBuffer[] buffers)
>at 
> Microsoft.SqlServer.Dts.Pipeline.ManagedComponentHost.HostPrimeOutput(IDTSManagedComponentWrapper100
> wrapper, Int32 outputs, Int32[] outputIDs, IDTSBuffer100[] buffers,
> IntPtr ppBufferWirePacket)

Not sure what I'm looking at, but screenshots 1 and 2 say precision=19
and scale=18 which would be numeric(19,18).  But value 36655.63
shown in the error message requires a precision that is at least 9
digits larger than the scale:

regress=# select '36655.63'::numeric(19,18);
ERROR:  numeric field overflow
DETAIL:  A field with precision 19, scale 18 must round to an absolute 
value less than 10^1.
regress=# select '36655.63'::numeric(27,18);
         numeric
--------------------------
 36655.630000000000000000
(1 row)

-- 
Erik




Re: Non-Stored Generated Columns

2024-02-28 Thread Ron Johnson
On Wed, Feb 28, 2024 at 2:11 PM Tom Lane  wrote:

> Dominique Devienne  writes:
> > Views can have foreign-keys?
>
> Surely you'd put the FK on the underlying table.
>
> > Generated view columns be indexed?
>
> You want an index on a virtual column?  Sure, just build an expression
> index (on the underlying table) that matches it.
>
> I agree with Laurenz that it's hard to see much use-case here
> that's not sufficiently covered already.
>

Oracle and SQLite have them, so, by definition, *many* more than one person
(you don't get a feature added to Oracle that easily...) has seen
sufficient reasons for doing it instead of modifying the underlying table,
or building an expression index.

My guess as to why they do it is "clarity of design".


Re: Documentation diff

2024-02-28 Thread Daniel Gustafsson
> On 28 Feb 2024, at 19:59, Marcos Pegoraro  wrote:
> 
> > jian he  escreveu:
> > but I guess the OP wants a side by side rendered html comparison.
> > like you buy a iphone then you can compare it like this:
> > https://www.apple.com/iphone/compare/
> 
> No, I want to show what changes occurred between both versions: the page you
> are seeing and the other one you selected. But both on the same page, not side by side.

Patches welcome, pgsql-www is the list for submitting changes for the website
and HTML rendering of the docs.

--
Daniel Gustafsson





Re: Excel Source [24]] Error: System.Exception: SqlTruncateException: Numeric arithmetic causes

2024-02-28 Thread Anthony Apollis
Data in Excel

On Wed, 28 Feb 2024 at 22:22, Anthony Apollis 
wrote:

> Please assist. I am using SSIS to read data from an Excel sheet into
> Postgres. I have increased the column size a few times, just cant seem to
> get the data in. Getting errors relating to destination column size.
>
> [Excel Source [24]] Error: System.Exception: SqlTruncateException: Numeric 
> arithmetic causes truncation.. RowNumber=1, ColumnName=Metric Value 
> [DataType=DT_NUMERIC,Length=0], Data=[Length=12, Value=36655.63]
>at 
> ZappySys.PowerPack.Adapter.SqlDataComponentBase.HandleException(Exception ex)
>at ZappySys.PowerPack.Adapter.SqlDataComponentBase.PrimeOutput(Int32 
> outputs, Int32[] outputIDs, PipelineBuffer[] buffers)
>at 
> Microsoft.SqlServer.Dts.Pipeline.ManagedComponentHost.HostPrimeOutput(IDTSManagedComponentWrapper100
>  wrapper, Int32 outputs, Int32[] outputIDs, IDTSBuffer100[] buffers, IntPtr 
> ppBufferWirePacket)
>
>


Re: Question on Table creation

2024-02-28 Thread veem v
On Wed, 28 Feb 2024 at 01:24, sud  wrote:

> While testing the pg_partman extension I see it by default creates a
> "default partition" even if we provide the parameter p_start_partition as
> "current date". But if someone purposely doesn't want to have the default
> partitions , so as to avoid any unwanted data entering the table , which
> may be required from a data quality perspective. Is there any option in the
> create_parent function to achieve that? Or do we have to manually drop the
> default partition after getting this created through create_parent function?
>
> I am not able to understand the significance of the "p_template_table"
> parameter. When we create the partitions without passing this parameter the
> template table gets created automatically in the partman schema. Checking
> the details of the template table, it doesn't show any indexes or
> constraints present in it, but still the child partitions get created with
> all the necessary indexes and constraints as those are there defined for
> the main table. So wondering , in what real life scenario do we really need
> the template table to be defined and how will it help if the partitions are
> different then from the main table structure?
>
> Lastly , the template table is by default created in the partman schema
> but it also works without error,  if we pass the template table to be
> created in the application schema. So is there any downside of having the
> template table reside in the application schema?
>
>

As per my understanding, you can't control the creation of the default
partition. If you somehow don't want to keep it, you can drop it after the
partition set is first created, i.e. after the create_parent function call.

The template table is necessary when someone needs the child partitions to
differ in structure from the main table, e.g. in indexes or constraints.
But I think the number and data types of the columns should be the same in
the main table and the template table. I have not tried it, though.

I think keeping a template table in an application schema is not a good
idea considering the fact that it's not a business table but a technical
one, and nobody should accidentally alter this.



Re: Orphan files filling root partition after crash

2024-02-28 Thread Laurenz Albe
On Wed, 2024-02-28 at 20:30 +0100, Dimitrios Apostolou wrote:
> Lucky you.  It should have been "relfilenode" rather than "oid",
> > and some catalog tables don't have their files listed in the catalog,
> > because they are needed *before* the database can access tables.
> 
> I actually double checked that the filenames don't appear anywhere in
> SELECT * FROM pg_class
> and that the files were multi-GB in size including all the
> 1GB-pieces. But luck was definitely a part of the equation, I didn't know
> that the files might be accessed before tables (at db startup?) or that
> "relfilenode" would be more appropriate. Why is that, where can I read
> more? I see that many (but not all) rows in pg_class have oid=relfilenode
> but for many rows relfilenode=0 which is meaningless as filename.

If you are curious about such things, start reading the source.
The object ID is immutable, and initially the filenode is the same,
but it changes whenever the table is rewritten (TRUNCATE, ALTER TABLE,
VACUUM (FULL), ...).
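This is easy to observe (a sketch; the actual numbers differ per cluster):

```sql
CREATE TABLE t (x int);
SELECT oid, relfilenode FROM pg_class WHERE relname = 't';
-- oid and relfilenode start out equal ...
TRUNCATE t;
SELECT oid, relfilenode FROM pg_class WHERE relname = 't';
-- ... but TRUNCATE assigns a new relfilenode, while the oid is unchanged.
```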

Yours,
Laurenz Albe




Re: Orphan files filling root partition after crash

2024-02-28 Thread Adrian Klaver

On 2/28/24 11:30, Dimitrios Apostolou wrote:

Thanks for the feedback Laurenz,

On Wed, 28 Feb 2024, Laurenz Albe wrote:


On Wed, 2024-02-28 at 12:16 +0100, Dimitrios Apostolou wrote:


I ended up doing some risky actions to remediate the problem: Find the
filenames that have no identically named "oid" in pg_class, and delete
(move to backup) the biggest ones while the database is stopped.
Fortunately the database started up fine after that!


Lucky you.  It should have been "relfilenode" rather than "oid",
and some catalog tables don't have their files listed in the catalog,
because they are needed *before* the database can access tables.


I actually double checked that the filenames don't appear anywhere in
SELECT * FROM pg_class
and that the files were multi-GB in size including all the
1GB-pieces. But luck was definitely a part of the equation, I didn't know
that the files might be accessed before tables (at db startup?) or that
"relfilenode" would be more appropriate. Why is that, where can I read


https://www.postgresql.org/docs/current/storage-file-layout.html


Caution

Note that while a table's filenode often matches its OID, this is not 
necessarily the case; some operations, like TRUNCATE, REINDEX, CLUSTER 
and some forms of ALTER TABLE, can change the filenode while preserving 
the OID. Avoid assuming that filenode and table OID are the same. Also, 
for certain system catalogs including pg_class itself, 
pg_class.relfilenode contains zero. The actual filenode number of these 
catalogs is stored in a lower-level data structure, and can be obtained 
using the pg_relation_filenode() function.
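In other words, when mapping on-disk files back to relations, something like this is the safer query (a sketch):

```sql
-- relfilenode can be 0 for some system catalogs;
-- pg_relation_filenode() resolves the real on-disk filenode in all cases:
SELECT relname,
       relfilenode,
       pg_relation_filenode(oid) AS actual_filenode
FROM   pg_class
WHERE  relname IN ('pg_class', 'pg_attribute');
```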




--
Adrian Klaver
adrian.kla...@aklaver.com





Re: Orphan files filling root partition after crash

2024-02-28 Thread Dimitrios Apostolou

Thanks for the feedback Laurenz,

On Wed, 28 Feb 2024, Laurenz Albe wrote:


On Wed, 2024-02-28 at 12:16 +0100, Dimitrios Apostolou wrote:


I ended up doing some risky actions to remediate the problem: Find the
filenames that have no identically named "oid" in pg_class, and delete
(move to backup) the biggest ones while the database is stopped.
Fortunately the database started up fine after that!


Lucky you.  It should have been "relfilenode" rather than "oid",
and some catalog tables don't have their files listed in the catalog,
because they are needed *before* the database can access tables.


I actually double checked that the filenames don't appear anywhere in
SELECT * FROM pg_class
and that the files were multi-GB in size including all the
1GB-pieces. But luck was definitely a part of the equation, I didn't know
that the files might be accessed before tables (at db startup?) or that
"relfilenode" would be more appropriate. Why is that, where can I read
more? I see that many (but not all) rows in pg_class have oid=relfilenode
but for many rows relfilenode=0 which is meaningless as filename.




So what is the moral of the story? How to guard against this?


Monitor disk usage ...


It happened *fast*. And it was quite a big surprise coming
from "just" a disk-full situation.

A couple of suggestions; wouldn't it make sense:

+ for the index to be written by default to the table's tablespace?

+ for postgres to refuse to write non-wal files, if it's on
  the same device as the WAL and less than max_wal_size bytes are free?



The root of the problem is that you created the index in the default
tablespace.  You should have

  ALTER TABLE x ADD UNIQUE ... USING INDEX TABLESPACE bigtblspc;



Thank you, was reading the docs but didn't realize this
syntax is valid. I thought it was only for CREATE/ALTER INDEX.



Needless to say, I would have hoped the database cleaned-up after itself
even after an uncontrolled crash, or that it provided some tools for the
job. (I tried VACUUM FULL on the table, but the orphaned files did not go
away).


That is not so simple... Also, it would slow down crash recovery.

But I agree that it would be nice to have a tool that reports or
cleans up orphaned files.

Yours,
Laurenz Albe






Re: Non-Stored Generated Columns

2024-02-28 Thread Tom Lane
Dominique Devienne  writes:
> Views can have foreign-keys?

Surely you'd put the FK on the underlying table.

> Generated view columns be indexed?

You want an index on a virtual column?  Sure, just build an expression
index (on the underlying table) that matches it.
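For instance (the table and expression here are hypothetical):

```sql
-- An expression index on the underlying table, no virtual column needed:
CREATE INDEX person_lower_name_idx ON person (lower(name));
-- A query written against the same expression can use it:
--   SELECT * FROM person WHERE lower(name) = 'tom';
```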

I agree with Laurenz that it's hard to see much use-case here
that's not sufficiently covered already.

regards, tom lane




Re: Documentation diff

2024-02-28 Thread Marcos Pegoraro
> jian he  escreveu:
> but I guess the OP wants a side by side rendered html comparison.
> like you buy a iphone then you can compare it like this:
>
https://www.apple.com/iphone/compare/

No, I want to show what changes occurred between both versions: the page
you are seeing and the other one you selected. But both on the same page,
not side by side.

On every page the title shows Supported Versions: 16 / 15 / 14 / 13 / 12.
I would like to see, to the right of that, some radio buttons for Comparing
Version: rb16 / rb15 / rb14 / rb13 / rb12. If rb16, rb15, etc. are radio
buttons, I can compare the current page with one previous version, only one.
So, if you are showing the CREATE TABLE page for version 16 and you are
comparing with version 14, the STORAGE ... part would be painted with a
green background, because it was added in version 16.

If you change this part of that page
[ STORAGE { PLAIN | EXTERNAL | EXTENDED | MAIN | DEFAULT } ]
could be rewritten as
 [ STORAGE { PLAIN |
EXTERNAL | EXTENDED | MAIN | DEFAULT } ] 

Then a simple JS would show that text with a green background only if the
comparing version you've selected is 15 or less.

This page has another change, but only if you compare with version 11 or
before, because WITH OIDS was removed in version 12.
So, this text
[ WITH ( *storage_parameter* [= *value*] [, ... ] ) | WITH OIDS | WITHOUT
OIDS ]
could be rewritten as
  [ WITH (
*storage_parameter* [= *value*] [, ... ] ) | WITH OIDS | WITHOUT OIDS ]
 

This time that part would show something in red, just to show that this part
was removed, maybe.

I know that we would have to rewrite all the HTML pages, marking which text
was added or changed, but I think it is possible to find a tool to do that.
And knowing exactly what was changed on every page of the docs would be much
better.

regards
Marcos


Re: Need Assistance: Command to display procedures does not work

2024-02-28 Thread Adrian Klaver

On 2/28/24 10:15, Sasmit Utkarsh wrote:

Hi Adrian,

Sorry, Yes I did connect to the correct DB and it's just a test database 
and there's no shc schema. After giving the pronamespace::regnamespace. 
I do see it has another schema as test. Thanks a lot for your guidance, 


pgAdmin4 was telling you that with the Schemas(2) in the Object Explorer.

will make next time i verify with this details first, switching between 
different DBs and machines got my mind diverted








Regards,
Sasmit Utkarsh
+91-7674022625





--
Adrian Klaver
adrian.kla...@aklaver.com





Re: Unable to get PostgreSQL 15 with Kerberos (GSS) working

2024-02-28 Thread Stephen Frost
Greetings,

* Matthew Dennison (m...@matty-uk.co.uk) wrote:
> Sorry, yes I tried that and received the same results.

Did you try doing a 'klist' after?  Would be interesting to know if you
got a Kerberos ticket or not.

Thanks,

Stephen




Moving PostgreSQL servers to a new server

2024-02-28 Thread normandavis1990
Hi,
I am new to PostgreSQL. My scenario is as follows:

https://cdn.bloghunch.com/uploads/uNxNoi5uVKeDibd5.webp

I have a Master server and two Replica (Replica-1 and Replica-2) servers. One 
of these Replica servers is in read-only mode and a report is prepared from it 
using Microsoft PowerBI. Due to a series of issues, I want to transfer the 
Master server and one of the Replica servers to other servers in two steps, but 
I don't want to have any interruptions.
A) Assuming PostgreSQL is installed on the new server, what should I do first 
to migrate the Master? Please show me a tutorial and share your experiences.

B) After transferring the Master, I want to transfer one of the Replica 
servers. What should I do to transfer this server?

Cheers.

Re: Unable to get PostgreSQL 15 with Kerberos (GSS) working

2024-02-28 Thread Matthew Dennison
Sorry, yes I tried that and received the same results.

Sent from my iPhone

On 28 Feb 2024, at 17:11, Stephen Frost  wrote:

Greetings,

* Matthew Dennison (m...@matty-uk.co.uk) wrote:
I have subsequently discovered that the psql command works as expected using 
GSS when run from remote Windows (server/client) and RHEL8. pgAdmin4 also 
works via Kerberos (it was on my list of things to get working). It's just 
that locally on the server, psql will not work against PostgreSQL running on 
that same server.

I really don't get it, but I have decided I can live without it working on the 
server.

Perhaps I wasn't clear- it's not going to work if you do:

psql -h localhost

Instead, try doing:

psql -h hostname.mydomain.net -U postgres -d postgres

Basically, Kerberos isn't really set up to work when the host you're
connecting to is named 'localhost' because the name of the host is how
you figure out what ticket to acquire from the KDC.

Thanks,

Stephen




Re: Need Assistance: Command to display procedures does not work

2024-02-28 Thread Sasmit Utkarsh
Hi Adrian,

Sorry, yes I did connect to the correct DB, and it's just a test database;
there's no shc schema. After adding pronamespace::regnamespace, I do see it
has another schema named test. Thanks a lot for your guidance; I will make
sure next time to verify these details first. Switching between different
DBs and machines got my mind diverted.







Regards,
Sasmit Utkarsh
+91-7674022625


On Wed, Feb 28, 2024 at 10:50 PM Adrian Klaver 
wrote:

> On 2/28/24 09:13, Sasmit Utkarsh wrote:
>
> Reply to list also
> Ccing list
> > HI Adrian,
> >
> > Related to shc shema, they were part of different host/machine.
> > Currently, I'm seeing the result for this discrepancy on another server
> > machine where we have only a public schema.
>
> 1) I doubt that this is the database you are showing in your pgAdmin4
> screenshot.
>
> 2) The information I asked for in my previous post is still relevant and
> needs answering.
>
> 3) Add pronamespace::regnamespace to your SELECT query to get the schema
> names for the functions.
>
> >
> >
> > Regards,
> > Sasmit Utkarsh
> > +91-7674022625
> >
> >
> > On Wed, Feb 28, 2024 at 10:35 PM Adrian Klaver
> > mailto:adrian.kla...@aklaver.com>> wrote:
> >
> > On 2/28/24 08:52, Sasmit Utkarsh wrote:
> >  > Hi Adrian,
> >  >
> >  > Thanks for the info, But I have another question: I could see the
> > below
> >  > functions list with the help on query execution on psql command
> line
> >  >
> >  > shc_data=> SELECT proname AS function_name, proacl AS privileges
> >  > FROM pg_proc WHERE proname IN
> >  > ('sql_insert_data_procedure','sql_select_data_procedure','sql_select_size_procedure','sql_update_data_procedure');
> >  > function_name   | privileges
> >  > ---+
> >  >   sql_select_size_procedure |
> >  >   sql_select_data_procedure |
> >  >   sql_insert_data_procedure |
> >  >   sql_update_data_procedure |
> >  > (4 rows)
> >  >
> >  > but when I try to connect to the same db on PgAdmin4, I don't see
> > the
> >  > list even after refresh. Why is that different?
> >
> > 1) Are you sure you are connecting to correct database.
> >
> > 2) If I followed your previous posts correctly, those functions were
> > not in the public schema you are looking at in pgAdmin4. They were in
> > the shc(?) schema, which I am pretty sure is the second schema in the
> > Schemas(2) in the object tree.
> >
> >
> >  >
> >  >
> >  >
> >  >
> >  >
> >  > Regards,
> >  > Sasmit Utkarsh
> >  > +91-7674022625
> >  >
> >  >
> >
> >
> > --
> > Adrian Klaver
> > adrian.kla...@aklaver.com 
> >
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>


Re: PostgreSQL Guard

2024-02-28 Thread Ron Johnson
Read
https://www.tecmint.com/configure-postgresql-streaming-replication-in-centos-8/
.

On Wed, Feb 28, 2024 at 10:36 AM Jason Long  wrote:

> Hi,
> Thanks again.
> I have another questions:
>
> 1- Should I just install PostgreSQL normally on the Standby server?
>
> 2- Are the steps the same for other Linux distributions like Debian?
>
> On Wed, Feb 28, 2024 at 9:29 AM, Ron Johnson
>  wrote:
> As before, I encourage you to read
> https://www.tecmint.com/configure-postgresql-streaming-replication-in-centos-8/
> .
>
> On Tue, Feb 27, 2024 at 3:48 PM Jason Long  wrote:
>
> Hi,
> Do I have to run this command on the server where the main database is
> located?
>
> Suppose you want to transfer the database of a website like Amazon.com to
> a new server and delete the old one. Many users are buying and selling on
> this website at the same time and it is not possible to turn off the
> server. What do you do to move a database to another server?
>
> On Tue, Feb 27, 2024 at 9:30 PM, Ron Johnson
>  wrote:
> I used this web page to implement hot standby via physical streaming.
> This command sets up everything for you:
> pg_basebackup \
> --pgdata=$PGDATA \
> --dbname=service=basebackup \
> --verbose --progress \
> --checkpoint=fast \
> --write-recovery-conf \
> --wal-method=stream \
> --create-slot --slot=pgstandby1 \
> --compress=server-zstd
> I got it from
> https://www.tecmint.com/configure-postgresql-streaming-replication-in-centos-8/,
> which I encourage you to read.
>
> For guarding, and situations with a minimal RTO SLA, I use PgPool-II to
> manage the cluster, automatic failover and Virtual IP address.  Note that
> only two database servers are needed.  Install PgPool on the two DB
> servers, and a third, smaller system.
>
> There are other products which do the same thing as PgPool, and you might
> find them better.
>
> On Tue, Feb 27, 2024 at 12:00 PM Jason Long  wrote:
>
> Hi,
> Should I read
> https://www.postgresql.org/docs/current/warm-standby.html#STANDBY-SERVER-SETUP
> for question number 2?
>
> On Tue, Feb 27, 2024 at 7:28 PM, Adrian Klaver
>  wrote:
> On 2/27/24 07:08, Jason Long wrote:
>
> > Hi,
> > 1- Both split-brain protection and security. Any tool?
> >
> > 2- Consider a database related to a website, which data is regularly
> > entered. Now you want to transfer this database to another server.
> > What's the solution?
>
>
> https://www.postgresql.org/docs/current/high-availability.html
>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>
>


Re: Non-Stored Generated Columns

2024-02-28 Thread Dominique Devienne
On Wed, Feb 28, 2024 at 5:59 PM Laurenz Albe 
wrote:

> On Wed, 2024-02-28 at 16:09 +0100, Dominique Devienne wrote:
> > From https://www.postgresql.org/docs/16/ddl-generated-columns.html:
> > > PostgreSQL currently implements only stored generated columns
> >
> > We use generated columns extensively.
> > And we have foreign-keys attached to those generated columns.
> > The fact they are always Stored thus wastes space in our case.
> > Any chance PostgreSQL might gain actual virtual / non-stored generated
> columns soon? Ever?
> >
> > For reference, both Oracle and SQLite have virtual / non-stored columns.
> > And both support FKs and indexes on those too.
> > Would be great to have feature parity on this particular point,
> eventually.
>
> Isn't almost all of that functionality covered by a view?
>

Views can have foreign-keys?
Generated view columns be indexed? (for efficient CASCADE of the FKs)

AFAIK, no. Plus that would double the relations in schemas too. --DD
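For context, here is a minimal sketch of the polymorphic-FK pattern that motivates the request, built on today's STORED generated columns. All table and column names are invented for illustration:

```sql
-- Hypothetical example: a "links" table pointing at either folders or
-- documents, discriminated by target_code.
CREATE TABLE folders   (id bigint PRIMARY KEY);
CREATE TABLE documents (id bigint PRIMARY KEY);

CREATE TABLE links (
    target_id   bigint NOT NULL,
    target_code text   NOT NULL CHECK (target_code IN ('FLD', 'DOC')),
    -- One generated "XArc" column per possible CODE value.  Each is
    -- either target_id or NULL, so a plain single-table FK can apply.
    -- These must currently be STORED, which is the space cost at issue.
    xarc_folder bigint GENERATED ALWAYS AS
        (CASE WHEN target_code = 'FLD' THEN target_id END) STORED
        REFERENCES folders (id) ON DELETE CASCADE,
    xarc_doc    bigint GENERATED ALWAYS AS
        (CASE WHEN target_code = 'DOC' THEN target_id END) STORED
        REFERENCES documents (id) ON DELETE CASCADE
);

-- Index the generated columns so the cascading FKs stay efficient.
CREATE INDEX ON links (xarc_folder);
CREATE INDEX ON links (xarc_doc);
```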


Re: Need Assistance: Command to display procedures does not work

2024-02-28 Thread Adrian Klaver

On 2/28/24 09:13, Sasmit Utkarsh wrote:

Reply to list also
Ccing list

HI Adrian,

Regarding the shc schema: those functions were part of a different 
host/machine. Currently, I'm seeing this discrepancy on another server, 
where we have only a public schema.


1) I doubt that this is the database you are showing in your pgAdmin4 
screenshot.


2) The information I asked for in my previous post is still relevant and 
needs answering.


3) Add pronamespace::regnamespace to your SELECT query to get the schema 
names for the functions.
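Concretely, the suggestion in (3) would look something like this (procedure names taken from earlier in the thread):

```sql
SELECT pronamespace::regnamespace AS schema_name,
       proname AS function_name,
       proacl  AS privileges
FROM   pg_proc
WHERE  proname IN ('sql_insert_data_procedure',
                   'sql_select_data_procedure',
                   'sql_select_size_procedure',
                   'sql_update_data_procedure');
```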





Regards,
Sasmit Utkarsh
+91-7674022625


On Wed, Feb 28, 2024 at 10:35 PM Adrian Klaver 
mailto:adrian.kla...@aklaver.com>> wrote:


On 2/28/24 08:52, Sasmit Utkarsh wrote:
 > Hi Adrian,
 >
 > Thanks for the info, but I have another question: I can see the below
 > list of functions by running the query on the psql command line
 >
 >    shc_data=> SELECT proname AS function_name,proacl AS privileges FROM
 > pg_proc WHERE proname in
 > ('sql_insert_data_procedure','sql_select_data_procedure','sql_select_size_procedure','sql_update_data_procedure');
 >         function_name       | privileges
 > ---------------------------+------------
 >   sql_select_size_procedure |
 >   sql_select_data_procedure |
 >   sql_insert_data_procedure |
 >   sql_update_data_procedure |
 > (4 rows)
 >
 > but when I try to connect to the same db on PgAdmin4, I don't see
the
 > list even after refresh. Why is that different?

1) Are you sure you are connecting to the correct database?

2) If I followed your previous posts correctly, those functions were not
in the public schema you are looking at in pgAdmin4. They were in the
shc(?) schema, which I am pretty sure is the second schema under the
Schemas(2) node in the object tree.


 >
 >
 >
 >
 >
 > Regards,
 > Sasmit Utkarsh
 > +91-7674022625
 >
 >


-- 
Adrian Klaver

adrian.kla...@aklaver.com 



--
Adrian Klaver
adrian.kla...@aklaver.com





Re: Need Assistance: Command to display procedures does not work

2024-02-28 Thread Sasmit Utkarsh
Hello Adrian/Rob,

Please note, the 'shc' schema was part of a different machine/host. For now, I
see the issue on another host, which I connect to using both psql and pgAdmin4
and get different results from each; we have only the public schema on this
machine.

shc_data=> SELECT proname AS function_name,proacl AS privileges FROM
pg_proc WHERE proname in
('sql_insert_data_procedure','sql_select_data_procedure','sql_select_size_procedure','sql_update_data_procedure');
       function_name       | privileges
---------------------------+------------
 sql_select_size_procedure |
 sql_select_data_procedure |
 sql_insert_data_procedure |
 sql_update_data_procedure |
(4 rows)

shc_data=> DROP PROCEDURE sql_insert_data_procedure;
ERROR:  could not find a procedure named "sql_insert_data_procedure"
shc_data=> SHOW search_path
shc_data->
shc_data-> ^C
shc_data=> SHOW search_path;
   search_path
-
 "$user", public
(1 row)

shc_data=>

Regards,
Sasmit Utkarsh
+91-7674022625


On Wed, Feb 28, 2024 at 10:43 PM Adrian Klaver 
wrote:

> On 2/28/24 09:08, Sasmit Utkarsh wrote:
> > Thanks Rob
> >
> > So which one should I rely on for better information? Probably the results
> > shown by pgAdmin4, right? I get the error below when I try to drop the
> > procedure via psql, even though the procedure is returned by the query:
> >
> > shc_data=> SELECT proname AS function_name,proacl AS privileges FROM
> > pg_proc WHERE proname in
> >
> ('sql_insert_data_procedure','sql_select_data_procedure','sql_select_size_procedure','sql_update_data_procedure');
> >        function_name       | privileges
> > ---------------------------+------------
> >   sql_select_size_procedure |
> >   sql_select_data_procedure |
> >   sql_insert_data_procedure |
> >   sql_update_data_procedure |
> > (4 rows)
> >
> > shc_data=> DROP PROCEDURE sql_insert_data_procedure;
> > ERROR:  could not find a procedure named "sql_insert_data_procedure"
> > shc_data=>
>
> We've been down this road before, it is a search_path issue.
>
> In psql do:
>
> SHOW search_path;
>
> then do:
>
> \df *.sql_insert_data_procedure
>
> Then see if the schema shown in the second command is in the search_path
> returned by the first command.
>
> >
> > Regards,
> > Sasmit Utkarsh
> > +91-7674022625
> >
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>


Re: Need Assistance: Command to display procedures does not work

2024-02-28 Thread Adrian Klaver

On 2/28/24 09:08, Sasmit Utkarsh wrote:

Thanks Rob

So which one should I rely on for better information? Probably the results 
shown by pgAdmin4, right? I get the error below when I try to drop the 
procedure via psql, even though the procedure is returned by the query:


shc_data=> SELECT proname AS function_name,proacl AS privileges FROM 
pg_proc WHERE proname in 
('sql_insert_data_procedure','sql_select_data_procedure','sql_select_size_procedure','sql_update_data_procedure');

        function_name       | privileges
---------------------------+------------
  sql_select_size_procedure |
  sql_select_data_procedure |
  sql_insert_data_procedure |
  sql_update_data_procedure |
(4 rows)

shc_data=> DROP PROCEDURE sql_insert_data_procedure;
ERROR:  could not find a procedure named "sql_insert_data_procedure"
shc_data=>


We've been down this road before, it is a search_path issue.

In psql do:

SHOW search_path;

then do:

\df *.sql_insert_data_procedure

Then see if the schema shown in the second command is in the search_path 
returned by the first command.
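A hypothetical session following those steps; the schema name shc is assumed from earlier in the thread:

```sql
SHOW search_path;
--    search_path
-- -----------------
--  "$user", public

\df *.sql_insert_data_procedure
-- Suppose this lists the procedure under schema "shc", which is not on
-- the search path.  Then either schema-qualify the name:
DROP PROCEDURE shc.sql_insert_data_procedure;
-- or put that schema on the search path first:
SET search_path TO shc, public;
DROP PROCEDURE sql_insert_data_procedure;
```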




Regards,
Sasmit Utkarsh
+91-7674022625




--
Adrian Klaver
adrian.kla...@aklaver.com





Re: Unable to get PostgreSQL 15 with Kerberos (GSS) working

2024-02-28 Thread Stephen Frost
Greetings,

* Matthew Dennison (m...@matty-uk.co.uk) wrote:
> I have subsequently discovered that the psql command works as expected using 
> GSS from remote Windows (server/client) and RHEL8 hosts.  pgAdmin4 also works 
> via Kerberos (it was on my list of things to get working).  It's just that, 
> locally on the server, psql will not connect to the PostgreSQL instance 
> running on the same host.
> 
> I really don't get it, but have decided I can live without it working on the 
> server.

Perhaps I wasn't clear- it's not going to work if you do:

psql -h localhost

Instead, try doing:

psql -h hostname.mydomain.net -U postgres -d postgres

Basically, Kerberos isn't really set up to work when the host you're
connecting to is named 'localhost' because the name of the host is how
you figure out what ticket to acquire from the KDC.

Thanks,

Stephen


signature.asc
Description: PGP signature


Re: Need Assistance: Command to display procedures does not work

2024-02-28 Thread Sasmit Utkarsh
Thanks Rob

So which one should I rely on for better information? Probably the results
shown by pgAdmin4, right? I get the error below when I try to drop the
procedure via psql, even though the procedure is returned by the query:

shc_data=> SELECT proname AS function_name,proacl AS privileges FROM
pg_proc WHERE proname in
('sql_insert_data_procedure','sql_select_data_procedure','sql_select_size_procedure','sql_update_data_procedure');
       function_name       | privileges
---------------------------+------------
 sql_select_size_procedure |
 sql_select_data_procedure |
  sql_insert_data_procedure |
 sql_update_data_procedure |
(4 rows)

shc_data=> DROP PROCEDURE sql_insert_data_procedure;
ERROR:  could not find a procedure named "sql_insert_data_procedure"
shc_data=>

Regards,
Sasmit Utkarsh
+91-7674022625


On Wed, Feb 28, 2024 at 10:25 PM Rob Sargent  wrote:

>
>
> > On Feb 28, 2024, at 8:53 AM, Sasmit Utkarsh 
> wrote:
> >
> > Hi Postgresql Team,
> >
> > I am getting an error while executing the \df command below to list the
> > procedures/functions, whereas a direct query gives the appropriate results.
> > Please assist on how to troubleshoot this.
> >
> > [sutkars...@dxctravel.svcs.entsvcs.com@zuccmshcd2 data-exports]$ psql
> "host=10.166.18.116 port=5432 dbname=shc_data user=shc_admin
> password=abc123 sslmode=require"
> > psql (10.23, server 14.9)
> > WARNING: psql major version 10, server major version 14.
> >  Some psql features might not work.
>
> The version difference between your psql client installation and the
> server installation is the problem.
>
>
>


Re: Need Assistance: Command to display procedures does not work

2024-02-28 Thread Adrian Klaver

On 2/28/24 08:52, Sasmit Utkarsh wrote:

Hi Adrian,

Thanks for the info, but I have another question: I can see the below 
list of functions by running the query on the psql command line:


   shc_data=> SELECT proname AS function_name,proacl AS privileges FROM 
pg_proc WHERE proname in 
('sql_insert_data_procedure','sql_select_data_procedure','sql_select_size_procedure','sql_update_data_procedure');

        function_name       | privileges
---------------------------+------------
  sql_select_size_procedure |
  sql_select_data_procedure |
  sql_insert_data_procedure |
  sql_update_data_procedure |
(4 rows)

but when I try to connect to the same db on PgAdmin4, I don't see the 
list even after refresh. Why is that different?


1) Are you sure you are connecting to the correct database?

2) If I followed your previous posts correctly, those functions were not 
in the public schema you are looking at in pgAdmin4. They were in the 
shc(?) schema, which I am pretty sure is the second schema under the 
Schemas(2) node in the object tree.









Regards,
Sasmit Utkarsh
+91-7674022625





--
Adrian Klaver
adrian.kla...@aklaver.com





Re: Non-Stored Generated Columns

2024-02-28 Thread Laurenz Albe
On Wed, 2024-02-28 at 16:09 +0100, Dominique Devienne wrote:
> From https://www.postgresql.org/docs/16/ddl-generated-columns.html:
> > PostgreSQL currently implements only stored generated columns
> 
> We use generated columns extensively.
> And we have foreign-keys attached to those generated columns.
> The fact they are always Stored thus wastes space in our case.
> Any chance PostgreSQL might gain actual virtual / non-stored generated 
> columns soon? Ever?
> 
> For reference, both Oracle and SQLite have virtual / non-stored columns.
> And both support FKs and indexes on those too.
> Would be great to have feature parity on this particular point, eventually.

Isn't almost all of that functionality covered by a view?

Yours,
Laurenz Albe




Re: Need Assistance: Command to display procedures does not work

2024-02-28 Thread Rob Sargent



> On Feb 28, 2024, at 8:53 AM, Sasmit Utkarsh  wrote:
> 
> Hi Postgresql Team,
> 
> I am getting an error while executing the \df command below to list the 
> procedures/functions, whereas a direct query gives the appropriate results. 
> Please assist on how to troubleshoot this.
> 
> [sutkars...@dxctravel.svcs.entsvcs.com@zuccmshcd2 data-exports]$ psql 
> "host=10.166.18.116 port=5432 dbname=shc_data user=shc_admin password=abc123 
> sslmode=require"
> psql (10.23, server 14.9)
> WARNING: psql major version 10, server major version 14.
>  Some psql features might not work.

The version difference between your psql client installation and the server 
installation is the problem.






Re: Need Assistance: Command to display procedures does not work

2024-02-28 Thread Sasmit Utkarsh
Hi Adrian,

Thanks for the info, but I have another question: I can see the below
list of functions by running the query on the psql command line:

  shc_data=> SELECT proname AS function_name,proacl AS privileges FROM
pg_proc WHERE proname in
('sql_insert_data_procedure','sql_select_data_procedure','sql_select_size_procedure','sql_update_data_procedure');
       function_name       | privileges
---------------------------+------------
 sql_select_size_procedure |
 sql_select_data_procedure |
 sql_insert_data_procedure |
 sql_update_data_procedure |
(4 rows)

but when I try to connect to the same db on PgAdmin4, I don't see the list
even after refresh. Why is that different?






Regards,
Sasmit Utkarsh
+91-7674022625


On Wed, Feb 28, 2024 at 9:33 PM Adrian Klaver 
wrote:

> On 2/28/24 07:53, Sasmit Utkarsh wrote:
> > Hi Postgresql Team,
> >
> > I am getting an error while executing the \df command below to list the
> > procedures/functions, whereas a direct query gives the appropriate results.
> > Please assist on how to troubleshoot this.
> >
> > [sutkars...@dxctravel.svcs.entsvcs.com@zuccmshcd2 data-exports]$ psql
> > "host=10.166.18.116 port=5432 dbname=shc_data user=shc_admin
> > password=abc123 sslmode=require"
> > psql (10.23, server 14.9)
> > WARNING: psql major version 10, server major version 14.
> >   Some psql features might not work.
>
> The above is not enough of a hint?
>
> Use psql version 14 as:
>
> https://www.postgresql.org/docs/11/release-11.html
>
> Replace system catalog pg_proc's proisagg and proiswindow columns with
> prokind (Peter Eisentraut)
>
> > SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, bits:
> > 256, compression: off)
> > Type "help" for help.
> >
> > shc_data=> \df
> > ERROR:  column p.proisagg does not exist
> > LINE 6:   WHEN p.proisagg THEN 'agg'
> >                ^
> > HINT:  Perhaps you meant to reference the column "p.prolang".
> >
> > shc_data=> SELECT proname AS function_name,proacl AS privileges FROM
> > pg_proc WHERE proname in
> > ('sql_insert_data_procedure','sql_select_data_procedure','sql_select_size_procedure','sql_update_data_procedure');
> >        function_name       | privileges
> > ---------------------------+------------
> >   sql_select_size_procedure |
> >   sql_select_data_procedure |
> >   sql_insert_data_procedure |
> >   sql_update_data_procedure |
> > (4 rows)
> >
> >
> > Regards,
> > Sasmit Utkarsh
> > +91-7674022625
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>


Re: Need Assistance: Command to display procedures does not work

2024-02-28 Thread Adrian Klaver

On 2/28/24 07:53, Sasmit Utkarsh wrote:

Hi Postgresql Team,

I am getting an error while executing the \df command below to list the 
procedures/functions, whereas a direct query gives the appropriate results. 
Please assist on how to troubleshoot this.


[sutkars...@dxctravel.svcs.entsvcs.com@zuccmshcd2 data-exports]$ psql 
"host=10.166.18.116 port=5432 dbname=shc_data user=shc_admin 
password=abc123 sslmode=require"

psql (10.23, server 14.9)
WARNING: psql major version 10, server major version 14.
          Some psql features might not work.


The above is not enough of a hint?

Use psql version 14 as:

https://www.postgresql.org/docs/11/release-11.html

Replace system catalog pg_proc's proisagg and proiswindow columns with 
prokind (Peter Eisentraut)
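In other words, on server version 11 and later the kind of routine must be derived from prokind. A simplified sketch of such a query (not the exact statement psql runs):

```sql
SELECT proname,
       CASE prokind
            WHEN 'a' THEN 'agg'     -- aggregate
            WHEN 'w' THEN 'window'  -- window function
            WHEN 'p' THEN 'proc'    -- procedure
            ELSE 'func'             -- plain function
       END AS kind
FROM   pg_proc
WHERE  pronamespace = 'public'::regnamespace;
```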


SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, bits: 
256, compression: off)

Type "help" for help.

shc_data=> \df
ERROR:  column p.proisagg does not exist
LINE 6:   WHEN p.proisagg THEN 'agg'
                ^
HINT:  Perhaps you meant to reference the column "p.prolang".

shc_data=> SELECT proname AS function_name,proacl AS privileges FROM 
pg_proc WHERE proname in 
('sql_insert_data_procedure','sql_select_data_procedure','sql_select_size_procedure','sql_update_data_procedure');

        function_name       | privileges
---------------------------+------------
  sql_select_size_procedure |
  sql_select_data_procedure |
  sql_insert_data_procedure |
  sql_update_data_procedure |
(4 rows)


Regards,
Sasmit Utkarsh
+91-7674022625


--
Adrian Klaver
adrian.kla...@aklaver.com





Need Assistance: Command to display procedures does not work

2024-02-28 Thread Sasmit Utkarsh
Hi Postgresql Team,

I am getting an error while executing the \df command below to list the
procedures/functions, whereas a direct query gives the appropriate results.
Please assist on how to troubleshoot this.

[sutkars...@dxctravel.svcs.entsvcs.com@zuccmshcd2 data-exports]$ psql
"host=10.166.18.116 port=5432 dbname=shc_data user=shc_admin
password=abc123 sslmode=require"
psql (10.23, server 14.9)
WARNING: psql major version 10, server major version 14.
 Some psql features might not work.
SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, bits:
256, compression: off)
Type "help" for help.

shc_data=> \df
ERROR:  column p.proisagg does not exist
LINE 6:   WHEN p.proisagg THEN 'agg'
               ^
HINT:  Perhaps you meant to reference the column "p.prolang".

shc_data=> SELECT proname AS function_name,proacl AS privileges FROM
pg_proc WHERE proname in
('sql_insert_data_procedure','sql_select_data_procedure','sql_select_size_procedure','sql_update_data_procedure');
       function_name       | privileges
---------------------------+------------
 sql_select_size_procedure |
 sql_select_data_procedure |
 sql_insert_data_procedure |
 sql_update_data_procedure |
(4 rows)


Regards,
Sasmit Utkarsh
+91-7674022625


Re: Efficient rows filter for array inclusion with gin index

2024-02-28 Thread Shanti-Dominique

Replying to myself after more investigation.

On 28/02/2024 12:05, Shanti-Dominique wrote:




2)
    SELECT  *
    FROM    items i1
    JOIN item_paths p1 ON i1.ref_id = p1.ref_id
    JOIN items i2 ON ARRAY[i2.ref_id] <@ p1.item_path
    WHERE   ...

2) uses the operator <@ which is supported by the gin index, the test 
for inclusion is fast and the query does not run a sequential scan 
over the whole "item_paths" table. However, because of the 
ARRAY[i2.ref_id] construct, it performs a sequential scan on i2.
I was under the assumption that the ARRAY[] construct prevented 
postgresql from efficiently using the index on the other side of the 
operator, but I think I was mistaken. On a database full of data, I 
tried getting around this but did not see any improvement of performance.


First I tried to add an index on the single element array:

CREATE FUNCTION uuidarr(ref_id uuid) RETURNS uuid[]
  LANGUAGE SQL
  IMMUTABLE
  RETURNS NULL ON NULL INPUT
  RETURN ARRAY[ref_id];

CREATE INDEX items_ref_id_arr2_idx ON items USING gin (uuidarr(ref_id));

EXPLAIN
SELECT  i2.*
FROM    items i1
    JOIN item_paths p1 ON i1.ref_id = p1.ref_id
    JOIN items i2 ON uuidarr(i2.ref_id) <@ p1.item_path
WHERE   i1.name = 'a';


The performance was even worse. Then I tried with a generated column:

CREATE TABLE items (
    ref_id uuid DEFAULT public.gen_random_uuid() NOT NULL,
    ref_id_array uuid[] GENERATED ALWAYS AS (uuidarr(ref_id)) STORED,
    name character varying,
    parent_ref_id uuid
);

CREATE INDEX items_ref_id_array_idx ON items USING gin (ref_id_array);

EXPLAIN
SELECT  i2.*
FROM    items i1
    JOIN item_paths p1 ON i1.ref_id = p1.ref_id
    JOIN items i2 ON i2.ref_id_array <@ p1.item_path
WHERE   i1.name = 'a';

The performance was very similar to the query with ARRAY[...]

It seems there is no good solution for the general case, apart from 
changing the structure of my dataset and removing the use of arrays 
entirely.


I think I'll update my codebase and use <@ where it makes sense and = 
ANY in other places, but it'll be difficult to know for sure without 
running the query which one will be better.
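For the record, the = ANY formulation mentioned above would look like this; it lets the planner use an ordinary btree index on ref_id instead of the GIN index (table names as in the earlier queries):

```sql
EXPLAIN
SELECT  i2.*
FROM    items i1
        JOIN item_paths p1 ON i1.ref_id = p1.ref_id
        JOIN items i2 ON i2.ref_id = ANY (p1.item_path)
WHERE   i1.name = 'a';
```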






Re: PostgreSQL Read-only mode usage

2024-02-28 Thread Jason Long
Hi,

1- What does it mean, not the whole database?

2- Can this be considered a kind of guard?

3- Can you tell me a read-only scenario? For example, if we have three servers, 
then one of the servers can be the main server and the other two servers can be 
read-only.

On Wed, Feb 28, 2024 at 4:24 PM, Francisco Olarte wrote:
On Wed, 28 Feb 2024 at 13:34, Jason Long wrote:
> What is the use of a database in read-only mode?

Normally it is a connection/session to the database what is in
read-only mode, not the whole database.

As it has been pointed out, you connect in read only mode to read, so
you are protected from accidental modifications. Also, knowing your
transactions are not going to write make life easier for optimizers
and other things.

Francisco Olarte.
  


Re: PostgreSQL Guard

2024-02-28 Thread Jason Long
Hi,

Thanks again. I have more questions:
1- Should I just install PostgreSQL normally on the standby server?
2- Are the steps the same for other Linux distributions, like Debian?

On Wed, Feb 28, 2024 at 9:29 AM, Ron Johnson wrote:
As before, I encourage you to read
https://www.tecmint.com/configure-postgresql-streaming-replication-in-centos-8/.
On Tue, Feb 27, 2024 at 3:48 PM Jason Long wrote:

Hi,

Do I have to run this command on the server where the main database is 
located?

Suppose you want to transfer the database of a website like Amazon.com to a new 
server and delete the old one. Many users are buying and selling on this 
website at the same time and it is not possible to turn off the server. What do 
you do to move a database to another server?
 
 
On Tue, Feb 27, 2024 at 9:30 PM, Ron Johnson wrote:
I used this web page to implement hot standby via physical streaming.  This
command sets up everything for you:

pg_basebackup \
            --pgdata=$PGDATA \
            --dbname=service=basebackup \
            --verbose --progress \
            --checkpoint=fast \
            --write-recovery-conf \
            --wal-method=stream \
            --create-slot --slot=pgstandby1 \
            --compress=server-zstd

I got it from
https://www.tecmint.com/configure-postgresql-streaming-replication-in-centos-8/,
which I encourage you to read.
For guarding, and situations with a minimal RTO SLA, I use PgPool-II to manage 
the cluster, automatic failover and Virtual IP address.  Note that only two 
database servers are needed.  Install PgPool on the two DB servers, and a 
third, smaller system.
There are other products which do the same thing as PgPool, and you might find 
them better.
On Tue, Feb 27, 2024 at 12:00 PM Jason Long  wrote:

Hi,

Should I read
https://www.postgresql.org/docs/current/warm-standby.html#STANDBY-SERVER-SETUP
for question number 2?
 
 
On Tue, Feb 27, 2024 at 7:28 PM, Adrian Klaver wrote:
On 2/27/24 07:08, Jason Long wrote:
> Hi,
> 1- Both split-brain protection and security. Any tool?
> 
> 2- Consider a database related to a website, which data is regularly 
> entered. Now you want to transfer this database to another server. 
> What's the solution?

https://www.postgresql.org/docs/current/high-availability.html



-- 
Adrian Klaver
adrian.kla...@aklaver.com

  

  

  


Non-Stored Generated Columns

2024-02-28 Thread Dominique Devienne
Hi.

>From https://www.postgresql.org/docs/16/ddl-generated-columns.html:
> PostgreSQL currently implements only stored generated columns

We use generated columns extensively.
And we have foreign-keys attached to those generated columns.
The fact they are always Stored thus wastes space in our case.
Any chance PostgreSQL might gain actual virtual / non-stored generated
columns soon? Ever?

For reference, both Oracle and SQLite have virtual / non-stored columns.
And both support FKs and indexes on those too.
Would be great to have feature parity on this particular point, eventually.

Thanks, --DD


Re: Orphan table files at data/base/

2024-02-28 Thread Riivo Kolka
I *know*, beyond reasonable doubt.
Also, I have the luxury of stopping connections temporarily and having
a backup ready.
I know backups do not include such data.
db restored from a backup is ~50GB.
I can afford to screw up even.

Thanks.




Kontakt Laurenz Albe () kirjutas kuupäeval
K, 28. veebruar 2024 kell 16:30:
>
> On Wed, 2024-02-28 at 15:44 +0200, Riivo Kolka wrote:
> > It was an unfortunate sequence of commands (all in a single transaction)
> >
> > DROP TABLE huge;
> > CREATE TABLE huge AS... (huge PostGIS ST_Union operation);
> > CREATE INDEX ON huge USING GIST (geometry);
> >
> > by a well-meaning user, that caused a crash+recovery:
> >
> > server process (PID 92411) was terminated by signal 9: Killed
> > terminating any other active server processes
> > all server processes terminated; reinitializing
> > database system was not properly shut down; automatic recovery in progress
> >
> > And that left behind 280GB of files (of TABLE huge):
> > data/base/16384/2403959
> > ...
> > data/base/16384/2403959.282
> >
> >
> > SELECT pg_filenode_relation(0,2403959);
> > -- returns NULL
> >
> > may I do
> > sudo rm data/base/2403959*
> > ?
>
> If you *know* these files belong to the table created with
> CREATE TABLE huge AS SELECT ...
> then you can do that.
>
> If you are not 100% certain, go the safe way and use dump/restore
> to a new database.  Then DROP DATABASE on the old database, and
> all orphaned files will be gone.
>
> Yours,
> Laurenz Albe




Re: Orphan files filling root partition after crash

2024-02-28 Thread Laurenz Albe
On Wed, 2024-02-28 at 12:16 +0100, Dimitrios Apostolou wrote:
> yesterday I was doing:
> 
> ALTER TABLE x ADD UNIQUE ... , ADD PRIMARY KEY ...;
> 
> The table is almost a billion rows long but lies in its own TABLESPACE
> that has plenty of space.  But apparently the ALTER TABLE command is
> writing a lot to the default tablespace (not the temp_tablespace, that is
> already moved to a different partition).
> 
> That quickly filled up the 50GB free space in my root partition:
> 
> 
> 20:18:04.222 UTC [94144] PANIC:  could not write to file 
> "pg_wal/xlogtemp.94144": No space left on device
> [...]
> 20:19:11.578 UTC [94140] LOG:  WAL writer process (PID 94144) was terminated 
> by signal 6: Aborted
> 20:19:11.578 UTC [94140] LOG:  terminating any other active server processes
> 
> 
> After postgresql crashed and restarted, the disk space in the root
> partition was still not freed! I believe this is because of "orphaned
> files" as discussed in mailing list thread [1].
> 
> [1] 
> https://www.postgresql.org/message-id/CAN-RpxDBA7HbTsJPq4t4VznmRFJkssP2SNEMuG%3DoNJ%2B%3DsxLQew%40mail.gmail.com
> 
> I ended up doing some risky actions to remediate the problem: Find the
> filenames that have no identically named "oid" in pg_class, and delete
> (move to backup) the biggest ones while the database is stopped.
> Fortunately the database started up fine after that!

Lucky you.  It should have been "relfilenode" rather than "oid",
and some catalog tables don't have their files listed in the catalog,
because they are needed *before* the database can access tables.

> So what is the moral of the story? How to guard against this?

Monitor disk usage ...

The root of the problem is that you created the index in the default
tablespace.  You should have

   ALTER TABLE x ADD UNIQUE ... USING INDEX TABLESPACE bigtblspc;

> Needless to say, I would have hoped the database cleaned-up after itself
> even after an uncontrolled crash, or that it provided some tools for the
> job. (I tried VACUUM FULL on the table, but the orphaned files did not go
> away).

That is not so simple... Also, it would slow down crash recovery.

But I agree that it would be nice to have a tool that reports or
cleans up orphaned files.

Yours,
Laurenz Albe
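A sketch of the relfilenode-based check described above (it needs superuser or equivalent for pg_ls_dir, covers only the default tablespace, and — per the caveat about mapped catalogs — should be treated as a list of candidates, not certainties):

```sql
SELECT file
FROM   pg_ls_dir('base/' ||
           (SELECT oid::text FROM pg_database
            WHERE  datname = current_database())) AS ls(file)
WHERE  file ~ '^[0-9]+$'                           -- base names only; segments
                                                   -- and forks share them
  AND  pg_filenode_relation(0, file::oid) IS NULL; -- no owning relation found
```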




Re: PostgreSQL Read-only mode usage

2024-02-28 Thread Andreas Joseph Krogh


På onsdag 28. februar 2024 kl. 13:55:34, skrev Andreas Kretschmer <
andr...@a-kretschmer.de >:


Am 28.02.24 um 13:34 schrieb Jason Long:
> Hello,
> What is the use of a database in read-only mode?

a standby-database will also be in read-only mode.
Right, we have PowerBI connected to a standby-DB, streaming-replication.
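A quick way for a session to tell which kind of server it is connected to:

```sql
-- Returns false on the primary; true on a streaming-replication
-- standby, where any write fails with
-- "cannot execute ... in a read-only transaction".
SELECT pg_is_in_recovery();
```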






--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com 
www.visena.com 
 


Re: Orphan table files at data/base/

2024-02-28 Thread Greg Sabino Mullane
No, I would not remove those files without making 100% sure they do not
belong to that database or any other. Are you sure you are inside database
16384 when you ran those commands? Does a 'stat' on those files line up
with the time of the crash? If so, I would stop pg, move the files
someplace else, do a pg_dump > /dev/null for another sanity check, then
remove those files.

Cheers,
Greg


Re: Orphan table files at data/base/

2024-02-28 Thread Laurenz Albe
On Wed, 2024-02-28 at 15:44 +0200, Riivo Kolka wrote:
> It was an unfortunate sequence of commands (all in a single transaction)
> 
> DROP TABLE huge;
> CREATE TABLE huge AS... (huge PostGIS ST_Union operation);
> CREATE INDEX ON huge USING GIST (geometry);
> 
> by a well-meaning user, that caused a crash+recovery:
> 
> server process (PID 92411) was terminated by signal 9: Killed
> terminating any other active server processes
> all server processes terminated; reinitializing
> database system was not properly shut down; automatic recovery in progress
> 
> And that left behind 280GB of files (of TABLE huge):
> data/base/16384/2403959
> ...
> data/base/16384/2403959.282
> 
> 
> SELECT pg_filenode_relation(0,2403959);
> -- returns NULL
> 
> may I do
> sudo rm data/base/2403959*
> ?

If you *know* these files belong to the table created with
CREATE TABLE huge AS SELECT ...
then you can do that.

If you are not 100% certain, go the safe way and use dump/restore
to a new database.  Then DROP DATABASE on the old database, and
all orphaned files will be gone.

Yours,
Laurenz Albe




Re: PostgreSQL Guard

2024-02-28 Thread Greg Sabino Mullane
On Tue, Feb 27, 2024 at 3:48 PM Jason Long  wrote:

> Suppose you want to transfer the database of a website like Amazon.com to
> a new server and delete the old one. Many users are buying and selling on
> this website at the same time and it is not possible to turn off the
> server. What do you do to move a database to another server?
>

If we assume your question is "how can we migrate to a new system while
keeping the old one up and running to minimize downtime?" the answer is
logical replication.

https://www.crunchydata.com/blog/data-to-go-postgres-logical-replication

Cheers,
Greg
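In outline, such a migration might look like this (hostnames, names, and credentials are placeholders; note that logical replication does not copy DDL, so the schema must be created on the target first, and sequences need separate attention):

```sql
-- On the old (source) server:
CREATE PUBLICATION migrate_pub FOR ALL TABLES;

-- On the new (target) server, after restoring the schema:
CREATE SUBSCRIPTION migrate_sub
    CONNECTION 'host=old-server dbname=shop user=repl password=secret'
    PUBLICATION migrate_pub;

-- Once the target has caught up, repoint the application, then:
DROP SUBSCRIPTION migrate_sub;
```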


Orphan table files at data/base/

2024-02-28 Thread Riivo Kolka
It was an unfortunate sequence of commands (all in a single transaction)

DROP TABLE huge;
CREATE TABLE huge AS... (huge PostGIS ST_Union operation);
CREATE INDEX ON huge USING GIST (geometry);

by a well-meaning user, that caused a crash+recovery:

server process (PID 92411) was terminated by signal 9: Killed
terminating any other active server processes
all server processes terminated; reinitializing
database system was not properly shut down; automatic recovery in progress

And that left behind 280GB of files (of TABLE huge):
data/base/16384/2403959
...
data/base/16384/2403959.282


SELECT pg_filenode_relation(0,2403959);
-- returns NULL

may I do
sudo rm data/base/2403959*
?




RE: Unable to get PostgreSQL 15 with Kerberos (GSS) working

2024-02-28 Thread Matthew Dennison
I have subsequently discovered that the psql command works as expected using 
GSS from remote Windows (server/client) and RHEL8 hosts.  pgAdmin4 also works 
via Kerberos (it was on my list of things to get working).  It's just that, 
locally on the server, psql will not connect to the PostgreSQL instance running 
on the same host.

I really don't get it, but have decided I can live without it working on the 
server.

R
-Original Message-
From: Stephen Frost  
Sent: Monday, February 26, 2024 7:33 PM
To: Matthew Dennison 
Cc: pgsql-general@lists.postgresql.org
Subject: Re: Unable to get PostgreSQL 15 with Kerberos (GSS) working

Greetings,

* Matthew Dennison (m...@matty-uk.co.uk) wrote:
> No matter what I try I don't seem to be able to get the psql command locally 
> to work using Kerberos.  I receive for following message:
> FATAL:  GSSAPI authentication failed for user "postgres"
> FATAL:  GSSAPI authentication failed for user 
> myad.usern...@mydomain.net
[...]
> kinit -kt /pgcluster/data/postgres.keytab 
> POSTGRES/hostname.mydomain@mydomain.net ydomain@mydomain.net>
> klist
> Ticket cache: KCM:0:20151
> Default principal: POSTGRES/hostname.mydomain@mydomain.net
> 
> Valid starting ExpiresService principal
> 23/02/24 10:19:12  23/02/24 20:19:12  krbtgt/mydomain@mydomain.net
> renew until 23/02/24 20:19:12

Doesn't look like you're actually getting a PG ticket ...

> psql -h localhost -U postgres -d postgres

And this might be why.  Don't use '-h localhost' because that'll, by default 
anyway, cause the Kerberos library to try to do reverse DNS on the address you 
are trying to connect to (::1/128, for example) ... and that possibly just 
resolves to 'localhost', which isn't the server's name that you're trying to 
connect to.  If the rDNS lookup fails then we'll use what you provided- but 
again, that's just 'localhost' and isn't the server's name in the AD realm.

Try doing:

psql -h hostname.mydomain.net -U postgres -d postgres

instead, and update your pg_hba.conf to match those connections which are 
coming into the system's actual IP address instead of only matching localhost 
connection attempts.
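(A hedged pg_hba.conf sketch of what that might look like; the network range and realm below are placeholders to adjust for your environment:)

```
# pg_hba.conf -- match GSSAPI connections arriving on the real interface,
# not just loopback (192.168.1.0/24 and MYDOMAIN.NET are example values)
hostgssenc  all  all  192.168.1.0/24  gss  include_realm=0 krb_realm=MYDOMAIN.NET
```

Reload the server (pg_ctl reload or SELECT pg_reload_conf()) after editing.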

You're definitely not going to have any success until you can do a 'klist' and 
see a POSTGRES/hostname.mydomain.net ticket listed after doing your psql 
attempt.

Thanks,

Stephen




Re: Orphan files filling root partition after crash

2024-02-28 Thread Dimitrios Apostolou

Hi Sergey,

On Wed, 28 Feb 2024, Sergey Fukanchik wrote:


Hi Dimitrios,
Do you have wal archiving enabled?
$PGDATA/pg_wal/ is a bit different from tablespaces (including
"default" one). It stores transaction journal.


I don't think I have WAL archiving enabled. Here are the relevant WAL
options in my config:

wal_compression = zstd
max_wal_size = 8GB
min_wal_size = 1GB
track_wal_io_timing = on
wal_writer_flush_after = 8MB

The issue happens because the WAL directory is by default in the same
filesystem with the default tablespace (root partition for Ubuntu). So
when the default tablespace filled up because of my ALTER TABLE operation,
there was no space for WAL either.
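(One quick way to see which filesystems are actually involved; note that pg_tablespace_location() returns an empty string for pg_default and pg_global, which live under the data directory:)

```sql
-- Where each tablespace physically lives
SELECT spcname, pg_tablespace_location(oid) FROM pg_tablespace;

-- pg_wal lives under here unless it was symlinked or initdb'd elsewhere
SHOW data_directory;
```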


Thanks,
Dimitris





Re: PostgreSQL Read-only mode usage

2024-02-28 Thread Andreas Kretschmer




Am 28.02.24 um 13:34 schrieb Jason Long:

Hello,
What is the use of a database in read-only mode?


a standby-database will also be in read-only mode.

Regards, Andreas

--
Andreas Kretschmer
CYBERTEC PostgreSQL Services and Support





Re: PostgreSQL Read-only mode usage

2024-02-28 Thread Francisco Olarte
On Wed, 28 Feb 2024 at 13:34, Jason Long  wrote:
> What is the use of a database in read-only mode?

Normally it is the connection/session to the database that is in
read-only mode, not the whole database.

As has been pointed out, you connect in read-only mode to read, so
you are protected from accidental modifications. Also, knowing your
transactions are not going to write makes life easier for optimizers
and other things.
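(For example, a session or a single transaction can be made read-only explicitly:)

```sql
-- Make the whole session read-only:
SET default_transaction_read_only = on;

-- Or just one transaction:
BEGIN READ ONLY;
SELECT count(*) FROM orders;   -- "orders" is a hypothetical table
COMMIT;

-- Any write attempt then fails with:
-- ERROR:  cannot execute INSERT in a read-only transaction
```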

Francisco Olarte.




Re: PostgreSQL Read-only mode usage

2024-02-28 Thread Andreas Joseph Krogh


On Wednesday 28 February 2024 at 13:34:30, Jason Long <hack3r...@yahoo.com> wrote:
Hello,
What is the use of a database in read-only mode?

Thank you.
To be able to read, not modify, the data.

Reporting-tools like Microsoft PowerBI can connect to PG and generate nice 
reports.






--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com 
www.visena.com 
 


PostgreSQL Read-only mode usage

2024-02-28 Thread Jason Long
Hello,
What is the use of a database in read-only mode?

Thank you.




Re: Orphan files filling root partition after crash

2024-02-28 Thread Sergey Fukanchik
Hi Dimitrios,
Do you have wal archiving enabled?
$PGDATA/pg_wal/ is a bit different from tablespaces (including
"default" one). It stores transaction journal.
Instructions are here:
https://www.postgresql.org/docs/current/continuous-archiving.html#BACKUP-ARCHIVING-WAL
Some more info here:
https://www.postgresql.org/docs/current/wal-intro.html and here
https://www.postgresql.org/docs/current/wal-configuration.html
---
Sergey

On Wed, 28 Feb 2024 at 14:18, Dimitrios Apostolou  wrote:
>
> Hello list,
>
> yesterday I was doing:
>
> ALTER TABLE x ADD UNIQUE ... , ADD PRIMARY KEY ...;
>
> The table is almost a billion rows long but lies in its own TABLESPACE
> that has plenty of space.  But apparently the ALTER TABLE command is
> writing a lot to the default tablespace (not the temp_tablespace, that is
> already moved to a different partition).
>
> That quickly filled up the 50GB free space in my root partition:
>
>
> 20:18:04.222 UTC [94144] PANIC:  could not write to file 
> "pg_wal/xlogtemp.94144": No space left on device
> [...]
> 20:19:11.578 UTC [94140] LOG:  WAL writer process (PID 94144) was terminated 
> by signal 6: Aborted
> 20:19:11.578 UTC [94140] LOG:  terminating any other active server processes
>
>
> After postgresql crashed and restarted, the disk space in the root
> partition was still not freed! I believe this is because of "orphaned
> files" as discussed in mailing list thread [1].
>
> [1] 
> https://www.postgresql.org/message-id/CAN-RpxDBA7HbTsJPq4t4VznmRFJkssP2SNEMuG%3DoNJ%2B%3DsxLQew%40mail.gmail.com
>
> I ended up doing some risky actions to remediate the problem: Find the
> filenames that have no identically named "oid" in pg_class, and delete
> (move to backup) the biggest ones while the database is stopped.
> Fortunately the database started up fine after that!
>
> So what is the moral of the story? How to guard against this?
>
> Why did the database write so much to the default tablespace, even when
> both the table and the temp tablespace are elsewhere?  Also should one
> always keep the default tablespace away from the wal partition? (I guess
> it would have helped to avoid the crash, but the ALTER TABLE command would
> have still run out of space, so I'm not sure if the orphan files would
> have been avoided)?
>
> Needless to say, I would have hoped the database cleaned-up after itself
> even after an uncontrolled crash, or that it provided some tools for the
> job. (I tried VACUUM FULL on the table, but the orphaned files did not go
> away).
>
> My postgresql version is 16.2 installed on Ubuntu.
>
> Thank you,
> Dimitris
>
>
>


-- 
Sergey




Orphan files filling root partition after crash

2024-02-28 Thread Dimitrios Apostolou

Hello list,

yesterday I was doing:

ALTER TABLE x ADD UNIQUE ... , ADD PRIMARY KEY ...;

The table is almost a billion rows long but lies in its own TABLESPACE
that has plenty of space.  But apparently the ALTER TABLE command is
writing a lot to the default tablespace (not the temp_tablespace, that is
already moved to a different partition).

That quickly filled up the 50GB free space in my root partition:


20:18:04.222 UTC [94144] PANIC:  could not write to file 
"pg_wal/xlogtemp.94144": No space left on device
[...]
20:19:11.578 UTC [94140] LOG:  WAL writer process (PID 94144) was terminated by 
signal 6: Aborted
20:19:11.578 UTC [94140] LOG:  terminating any other active server processes


After postgresql crashed and restarted, the disk space in the root
partition was still not freed! I believe this is because of "orphaned
files" as discussed in mailing list thread [1].

[1] 
https://www.postgresql.org/message-id/CAN-RpxDBA7HbTsJPq4t4VznmRFJkssP2SNEMuG%3DoNJ%2B%3DsxLQew%40mail.gmail.com

I ended up doing some risky actions to remediate the problem: Find the
filenames that have no identically named "oid" in pg_class, and delete
(move to backup) the biggest ones while the database is stopped.
Fortunately the database started up fine after that!
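(For the record, the kind of check I mean; a sketch only, run as superuser, comparing on-disk filenames in the current database's directory against the catalog. Verify every candidate by hand before touching it:)

```sql
-- Files in this database's directory that no relation claims.
-- The regex skips segment files (.N) and fork files (_fsm, _vm, _init),
-- which share their base filenode's fate.
SELECT file
FROM pg_ls_dir('base/' || (SELECT oid FROM pg_database
                           WHERE datname = current_database())::text) AS file
WHERE file ~ '^[0-9]+$'
  AND pg_filenode_relation(0, file::oid) IS NULL;   -- 0 = default tablespace
```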

So what is the moral of the story? How to guard against this?

Why did the database write so much to the default tablespace, even when
both the table and the temp tablespace are elsewhere?  Also should one
always keep the default tablespace away from the wal partition? (I guess
it would have helped to avoid the crash, but the ALTER TABLE command would
have still run out of space, so I'm not sure if the orphan files would
have been avoided)?

Needless to say, I would have hoped the database cleaned-up after itself
even after an uncontrolled crash, or that it provided some tools for the
job. (I tried VACUUM FULL on the table, but the orphaned files did not go
away).

My postgresql version is 16.2 installed on Ubuntu.

Thank you,
Dimitris





Efficient rows filter for array inclusion with gin index

2024-02-28 Thread Shanti-Dominique

Hello,

I have a problem writing performant queries that need to test value 
inclusion within an array.


First, the schema: an "items" table with a "ref_id" primary key (uuid). 
The items are hierarchical and can have a "parent_ref_id" (uuid) that 
references their parent. In order to avoid recursive queries, there is a 
secondary table "item_paths", populated via triggers, that has two 
columns: "ref_id" (uuid referencing a row in "items") and "item_path" 
(uuid[] containing the path of items from the root down to and including 
the item, with a gin index).


In order to test whether an item i2 is a descendant of another item i1, I 
see two ways to query the database:


1)
    SELECT  *
    FROM    items i1
    JOIN item_paths p1 ON i1.ref_id = p1.ref_id
    JOIN items i2 ON i2.ref_id = ANY (p1.item_path)
    WHERE   ...

2)
    SELECT  *
    FROM    items i1
    JOIN item_paths p1 ON i1.ref_id = p1.ref_id
    JOIN items i2 ON ARRAY[i2.ref_id] <@ p1.item_path
    WHERE   ...

The issue is that neither of these two solutions seems good for the general case:

1) does not make use of the gin index, as the "= ANY(...)" construct is 
not among the supported operators. What it seems to do is run a 
sequential scan against p1 while using the index to find the item i2.


2) uses the <@ operator, which is supported by the gin index; the test 
for inclusion is fast and the query does not run a sequential scan over 
the whole "item_paths" table. However, because of the ARRAY[i2.ref_id] 
construct, it performs a sequential scan on i2.


I use this kind of construct in many places in my code and I'd welcome a 
solution that would use the index in all cases. Is it possible?
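(One rewrite sketch for the second, slow query below, under the assumption that flipping the containment test around is acceptable: gin array_ops supports @> as well as <@, and keeping the indexed column on the left-hand side lets each row from the small filtered "items" side probe the gin index instead of forcing a sequential scan over item_paths. Whether it actually wins depends on the plan:)

```sql
EXPLAIN
SELECT  item_paths.*
FROM    items
    JOIN item_paths ON item_paths.item_path @> ARRAY[items.ref_id]
WHERE   items.name = 'a';
```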


Thank you.

Below is a SQL file that demonstrates the problem using EXPLAIN:

-- Database Schema

SET enable_seqscan  = off;

CREATE TABLE items (
    ref_id uuid DEFAULT public.gen_random_uuid() NOT NULL,
    name character varying,
    parent_ref_id uuid
);

CREATE TABLE item_paths (
    ref_id uuid NOT NULL,
    item_path uuid[]
);

CREATE INDEX items_ref_id_idx ON items USING btree (ref_id);
CREATE INDEX items_name_idx ON items USING btree (name);
CREATE INDEX items_parent_ref_id_idx ON items USING btree (parent_ref_id);
CREATE INDEX item_paths_ref_id_idx ON item_paths USING btree (ref_id);
CREATE INDEX item_paths_item_path_idx ON item_paths USING gin (item_path);



-- Fast: select the small number of items i1, join with their paths, and from
-- the path values take all the values from i2 using the index on i2

EXPLAIN
SELECT  i2.*
FROM    items i1
    JOIN item_paths p1 ON i1.ref_id = p1.ref_id
    JOIN items i2 ON i2.ref_id = ANY (p1.item_path)
WHERE   i1.name = 'a';

-- Nested Loop  (cost=5.52..102.86 rows=903 width=64)
--   ->  Nested Loop  (cost=5.37..46.14 rows=21 width=32)
--         ->  Bitmap Heap Scan on items i1  (cost=4.18..12.64 rows=4 width=16)
--               Recheck Cond: ((name)::text = 'a'::text)
--               ->  Bitmap Index Scan on items_name_idx  (cost=0.00..4.18 rows=4 width=0)
--                     Index Cond: ((name)::text = 'a'::text)
--         ->  Bitmap Heap Scan on item_paths p1  (cost=1.19..8.32 rows=5 width=48)
--               Recheck Cond: (ref_id = i1.ref_id)
--               ->  Bitmap Index Scan on item_paths_ref_id_idx  (cost=0.00..1.19 rows=5 width=0)
--                     Index Cond: (ref_id = i1.ref_id)
--   ->  Index Scan using items_ref_id_idx on items i2  (cost=0.15..2.27 rows=43 width=64)
--         Index Cond: (ref_id = ANY (p1.item_path))



-- Slow: select the small number of items, then perform a sequential scan on
-- the item_paths to find the paths that have this item ref_id in their path.

EXPLAIN
SELECT  item_paths.*
FROM    items
    JOIN item_paths ON items.ref_id = ANY (item_paths.item_path)
WHERE   items.name = 'a';

-- Nested Loop  (cost=104.18..1000140.35 rows=209 width=48)
--   Join Filter: (items.ref_id = ANY (item_paths.item_path))
--   ->  Seq Scan on item_paths  (cost=100.00..120.70 rows=1070 width=48)
--   ->  Materialize  (cost=4.18..12.66 rows=4 width=16)
--         ->  Bitmap Heap Scan on items  (cost=4.18..12.64 rows=4 width=16)
--               Recheck Cond: ((name)::text = 'a'::text)
--               ->  Bitmap Index Scan on items_name_idx  (cost=0.00..4.18 rows=4 width=0)
--                     Index Cond: ((name)::text = 'a'::text)



-- Slow: find by index i1 and p1, then perform a sequential scan on i2
-- (probably caused by the construct ARRAY[i2.ref_id]) to find the matching item

EXPLAIN
SELECT  i2.*
FROM    items i1
    JOIN item_paths p1 ON i1.ref_id = p1.ref_id
    JOIN items i2 ON ARRAY[i2.ref_id] <@ p1