Re: Psql wants to use IP6 when connecting to self using tcp...

2021-06-23 Thread Alan Hodgson
On Wed, 2021-06-23 at 17:25 -0500, Jerry LeVan wrote:
> 
> So the question is: why does using the short name evidently cause postgresql
> to use the IPv6 address, while using the full name uses the IPv4 address?

I'm thinking this might be coming from Avahi, which might be enabled on Fedora
by default. That uses local network broadcasts for name discovery, and is the
only thing I can think of that would result in an fe80:: address showing up in
a name search if you didn't deliberately add it to DNS.

Check /etc/nsswitch.conf and remove anything like mdns4_minimal.
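
On a stock Fedora install the hosts line might look something like this
(exact contents vary by release and installed packages):

hosts: files mdns4_minimal [NOTFOUND=return] resolve [!UNAVAIL=return] myhostname dns

Dropping the mdns4_minimal entry (and its bracketed action) makes unqualified
name lookups skip mDNS entirely:

hosts: files resolve [!UNAVAIL=return] myhostname dns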

https://fedoramagazine.org/find-systems-easily-lan-mdns/





Re: Partitioned Table Index Column Order

2021-06-23 Thread David Rowley
On Thu, 24 Jun 2021 at 11:56, Tom Lane  wrote:
>
> David Rowley  writes:
> > On Thu, 24 Jun 2021 at 10:55, Alvaro Herrera  
> > wrote:
> >> It is not relevant from the partitioning point of view.  Other factors
> >> can be used to decide the column order.
>
> > I'm not so sure that's really 100% true.  There is at least one
> > partitioning feature that will work when the partitioning column is
> > first and won't when it's not.
> > Ordered partition scans work with RANGE and LIST partitioning:
>
> Sure, but is that any different from the behavior with unpartitioned
> tables?  You have to make the index column order agree with the
> ORDER BY you want to use, in either case.

The reason I mentioned it is that the performance of ordered partition
scans is pretty good.  If the application does ORDER BY a,b just as often
as it does ORDER BY b,a and you only get to pick one index, then it's
better to have the index with the partition key first. At least one of the
queries can get away without doing a Sort that way.  If you have the
partition key last in the index then both queries need to sort...  You
could fix that by adding a second index, but that's not always practical,
so it seems worth a mention, at least to me.
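
To make that concrete, here's a sketch using the same ab table as the
EXPLAIN example elsewhere in this thread:

create table ab (a int, b int, primary key (a, b)) partition by range (a);
create table ab1 partition of ab for values from (0) to (10);
create table ab2 partition of ab for values from (10) to (20);

explain (costs off) select * from ab order by a, b;  -- ordered partition scan, no Sort
explain (costs off) select * from ab order by b, a;  -- still needs a Sort node

With primary key (b, a) instead, both queries would need the Sort.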

David




Re: Psql wants to use IP6 when connecting to self using tcp...

2021-06-23 Thread Tom Lane
Jerry LeVan  writes:
> bigbox has a freshly installed Fedora 34 system and it runs postgresql 13.3

> the following are being run ON bigbox

> nslookup bigbox  returns the correct ipv4 address
> nslookup bigbox.skynet returns the correct ipv4 address

> psql -h bigbox.skynet allows me to connect to the database
> psql -h bigbox  fails because the ipv6 address is returned and there is no 
> hba entry..

It's quite odd that nslookup isn't agreeing with glibc's name
resolution code.  I suspect that the problem lies with systemd, which
has relentlessly continued its campaign to break everything in sight,
and has lately gotten its fingers into name resolution as well:

https://fedoraproject.org/wiki/Changes/systemd-resolved

Perhaps something on that page will help you figure out what's up.
My guess based on what you've said here is that nslookup and glibc
are not using the same DNS search path.  Traditionally that would be
set by a "search" line in /etc/resolv.conf, but ghod knows how it's
determined under systemd-resolved.
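
For what it's worth, a quick way to compare the two resolution paths (a
diagnostic sketch; getent exercises glibc's NSS stack, which is what psql
ends up using):

$ getent ahosts bigbox      # what glibc (and therefore psql) sees
$ nslookup bigbox           # what the DNS server returns
$ resolvectl query bigbox   # what systemd-resolved itself resolves

If getent shows the fe80:: address but nslookup does not, the stray address
is coming from the NSS/systemd side rather than from DNS.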

regards, tom lane




Re: Partitioned Table Index Column Order

2021-06-23 Thread Tom Lane
David Rowley  writes:
> On Thu, 24 Jun 2021 at 10:55, Alvaro Herrera  wrote:
>> It is not relevant from the partitioning point of view.  Other factors
>> can be used to decide the column order.

> I'm not so sure that's really 100% true.  There is at least one
> partitioning feature that will work when the partitioning column is
> first and won't when it's not.
> Ordered partition scans work with RANGE and LIST partitioning:

Sure, but is that any different from the behavior with unpartitioned
tables?  You have to make the index column order agree with the
ORDER BY you want to use, in either case.

regards, tom lane




Re: Partitioned Table Index Column Order

2021-06-23 Thread David Rowley
On Thu, 24 Jun 2021 at 10:55, Alvaro Herrera  wrote:
>
> On 2021-Jun-23, Rumpi Gravenstein wrote:
>
> > As a best practice is it better to create the primary key starting or
> > ending with the partition column?
>
> It is not relevant from the partitioning point of view.  Other factors
> can be used to decide the column order.

I'm not so sure that's really 100% true.  There is at least one
partitioning feature that will work when the partitioning column is
first and won't when it's not.

Ordered partition scans work with RANGE and LIST partitioning:

create table ab (a int, b int, primary key(a,b)) partition by range(a);
create table ab1 partition of ab for values from (0) to (10);
create table ab2 partition of ab for values from (10) to (20);
explain (costs off) select * from ab order by a;
                    QUERY PLAN
---------------------------------------------------
 Append
   ->  Index Only Scan using ab1_pkey on ab1 ab_1
   ->  Index Only Scan using ab2_pkey on ab2 ab_2

Reverse the primary key column order (i.e. primary key (b, a)) and you get:

            QUERY PLAN
----------------------------------
 Sort
   Sort Key: ab.a
   ->  Append
         ->  Seq Scan on ab1 ab_1
         ->  Seq Scan on ab2 ab_2

David




Re: Partitioned Table Index Column Order

2021-06-23 Thread Tom Lane
Alvaro Herrera  writes:
> On 2021-Jun-23, Rumpi Gravenstein wrote:
>> As a best practice is it better to create the primary key starting or
>> ending with the partition column?

> It is not relevant from the partitioning point of view.  Other factors
> can be used to decide the column order.

See in particular the hints in

https://www.postgresql.org/docs/current/indexes-multicolumn.html

The only thing that's different about partitioned situations is that
any particular child partition might have only a few values of the
partitioning column, which'd suggest putting it last if there are
no other relevant considerations.  However, if you need a particular
column order to match query requirements, that's certainly going
to be a more important consideration.
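
For instance, with Rumpi's table t the hint above suggests something like
one of these (hypothetical sketches, pick one):

alter table t add primary key (b, c, a);  -- partition key last: each child holds few values of a
-- alter table t add primary key (a, b, c);  -- partition key first: lets ORDER BY a, ... avoid a Sort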

regards, tom lane




Re: Is there something similar like flashback query from Oracle planned for PostgreSQL

2021-06-23 Thread Thomas Munro
On Thu, Jun 24, 2021 at 6:54 AM Dirk Krautschick
 wrote:
> Is there something planned to get a behaviour like Oracle's flashback query
> based on the old values before they are deleted by vacuum?
>
> So a feature to recreate old versions of rows if still there?
>
> Or are there any related extensions or tools doing this?

There are some things like pg_dirtyread and probably more.  You might
be interested in some of the references in this thread:

https://www.postgresql.org/message-id/flat/CAKLmikOkK%2Bs0V%2B3Pi1vS2GUWQ0FAj8fEkVj9WTGSwZE9nRsCbQ%40mail.gmail.com

As for the SQL standard's approach to this, there are some active
-hackers threads on that with patches in development... look for
"temporal tables" and "system versioned".




Re: How to hash a large amount of data within Postgres?

2021-06-23 Thread Tomas Vondra
On 6/23/21 7:39 PM, Peter J. Holzer wrote:
> On 2021-06-21 15:53:09 +0200, Thorsten Schöning wrote:
>> Some years ago I implemented some SQL to read all files, build a table
>> of SHA256 hashes and tell me how much data is redundant. The goal was
>> to have a look at which files share the same hash with different LOIDs
>> and optionally change that, so that all those files are only stored
>> once in the end.
>>
>> While the approach was pretty naive, because it simply read all files
>> into memory to calculate the hashes, I'm somewhat sure it worked in
>> the past with Postgres 9.6. The executing server had enough free RAM
>> available as well to process the at most ~4 GiB large files one after
>> another.
>>
>> I tried that SQL today with Postgres 11 on UB 18.04 and it failed:
>>
>>> [Code: 0, SQL State: XX000]  ERROR: invalid memory alloc request size
>>> 1898107949
>>>   CONTEXT: PL/pgSQL function loid_hash_calc(oid,text)[...]
> [...]
>> I searched regarding that issue and only found two relevant results:
>> Corrupted rows for some reason and simply size restrictions when
>> allocating memory. The latter is more likely than the former in my
>> case, as the restriction seems to be 1 GiB and I do have larger
>> files.
> 
> 1 GB is the maximum size of quite a few data structures in PostgreSQL. I
> don't use PL/pgSQL, but I wouldn't be surprised if it was the maximum
> size of whatever loread() returns (a bytea?). I would be surprised if
> this limit was higher in version 9.6 than it is in version 11, however.
> 

Well, it's actually a bit worse than that - the maximum allocation size
is (1GB - 1B), as it's defined like this:

#define MaxAllocSize	((Size) 0x3fffffff) /* 1 gigabyte - 1 */

And this includes both the "user data" and a small "header" used for the
bytea value. Depending on what format you use to output the values there
may be additional limits (e.g. 'hex' requires 2 characters per byte, so
doubling the amount of memory needed).
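
For instance, select length(encode('abc'::bytea, 'hex')) returns 6, twice
the three input bytes.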

For large objects this is not an issue, because we store them in small
chunks, not as one large bytea value.

> 
>> I'm doing the following simply currently, because I didn't find any
>> interfaces allowing to forward blocks of data, LOIDs, file descriptors
>> or anything like that working with smaller buffers or alike.
>>
>>> fd      := lo_open(loid, INV_READ);
>>> size    := lo_lseek(fd, 0, SEEK_END);
>>> PERFORM lo_lseek(fd, 0, SEEK_SET);
>>
>>> hashBin := digest(loread(fd, size), algorithm);
>>> hashHex := encode(hashBin, 'hex');
>>
>> So, is there any way to work around the problem I have currently?
> 
> Normally, hash libraries have a way to feed chunks of data into a hash
> computation to avoid having to keep the whole thing in RAM.
> The pgcrypto extension seems to be lacking such functionality, however.
> 
> I would build something similar to a Merkle tree:
> 
> Choose a convenient chunk size (a few MB is probably ok), read the large
> object in chunks of this size, computing the hash for each. Concatenate
> all the hashes and compute the hash of that. Add intermediate levels if
> the concatenated hashes are still too large to fit in memory.
> 

Not sure where you searched, but there definitely are interfaces to read
chunks of data from large objects - see this:

1) lo_get (loid, offset, length)
   https://www.postgresql.org/docs/13/lo-funcs.html

2) lo_seek() + lo_read()
   https://www.postgresql.org/docs/13/lo-interfaces.html

Obviously, you can't do "loread(fd, size)" because that's going to
attempt building one large bytea, failing because of the alloc limit.
You have to stream the data into the hash.

Doing that in plpgsql is possible, although possibly somewhat slow.
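
For illustration, a minimal plpgsql sketch (function and variable names are
mine, and it requires the pgcrypto extension). Because digest() has no
incremental API, this hashes fixed-size chunks and then hashes the
concatenation of the chunk hashes, along the lines of the Merkle-tree idea
quoted above; note the result is not the same as a plain hash of the whole
object:

CREATE OR REPLACE FUNCTION loid_hash_chunked(p_loid oid, p_algo text)
RETURNS bytea LANGUAGE plpgsql AS $$
DECLARE
    c_chunk  CONSTANT integer := 4 * 1024 * 1024;  -- 4 MB per read
    v_off    bigint := 0;
    v_data   bytea;
    v_hashes bytea := ''::bytea;
BEGIN
    LOOP
        -- lo_get(loid, offset, length) reads one chunk server-side
        v_data := lo_get(p_loid, v_off, c_chunk);
        EXIT WHEN v_data IS NULL OR length(v_data) = 0;
        v_hashes := v_hashes || digest(v_data, p_algo);
        EXIT WHEN length(v_data) < c_chunk;  -- short read: end of object
        v_off := v_off + c_chunk;
    END LOOP;
    RETURN digest(v_hashes, p_algo);  -- hash of the concatenated chunk hashes
END;
$$;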


regards

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: Partitioned Table Index Column Order

2021-06-23 Thread Alvaro Herrera
On 2021-Jun-23, Rumpi Gravenstein wrote:

> As a best practice is it better to create the primary key starting or
> ending with the partition column?

It is not relevant from the partitioning point of view.  Other factors
can be used to decide the column order.

-- 
Álvaro Herrera   Valdivia, Chile




Partitioned Table Index Column Order

2021-06-23 Thread Rumpi Gravenstein
All,

I'm on PostgreSQL 13 and have a partitioned table with a primary key.

create table t (a integer, b integer, c varchar, d .. ) partition by
range (a);

As a best practice is it better to create the primary key starting or
ending with the partition column?

e.g.
1)  t_pkey primary key (a, b, c)

or

2)  t_pkey primary key (b, c, a)

Neither the PostgreSQL documentation nor Google has an answer - at least
as far as I could find.  I see examples in the Postgres documentation that
use 2) but no explanation of why that choice was made.

Does it even make a difference?

Thoughts?

-- 
Rumpi Gravenstein


Re: Psql wants to use IP6 when connecting to self using tcp...

2021-06-23 Thread Jerry LeVan



> On Jun 23, 2021, at 12:51 PM, Peter J. Holzer  wrote:
> 
> On 2021-06-21 19:44:16 -0500, Jerry Levan wrote:
> The below fails... the 'fe80' is the ip6 address of bigbox.
>> [jerry@bigbox ~]$ psql -h bigbox
>> psql: error: FATAL:  no pg_hba.conf entry for host 
>> "fe80::fe3f:dbff:fed1:f62e%enp7s0", user "lxxx", database "lxxx", SSL off
> 
> This looks like a configuration error in your DNS. FE80:... is a
> link-local address, and you generally don't want them in DNS. Either
> remove that completely (as others have already suggested) or replace it
> with the global unicast (or at least unique local) IPv6 address (and
> then add that to pg_hba.conf).
> 
> If the address isn't in DNS, check your /etc/hosts.
> 

My DNS is the Synology DNS server that comes with their DiskStation.

When I set it up I did NOT include any IP6 info so it is not returning
the IP6 address. The local name of my network is 'skynet'.

bigbox has a freshly installed Fedora 34 system and it runs postgresql 13.3

the following are being run ON bigbox

nslookup bigbox  returns the correct ipv4 address
nslookup bigbox.skynet returns the correct ipv4 address

psql -h bigbox.skynet allows me to connect to the database
psql -h bigbox  fails because the ipv6 address is returned and there is no hba 
entry..

I have three other machines running a postgresql database and I can attach
to them using the long and short names with no problem from bigbox.

i.e. psql -h <host> and psql -h <host>.skynet both work.

So the question is: why does using the short name evidently cause postgresql
to use the IPv6 address, while using the full name uses the IPv4 address?

The problem has a workaround: 1) disable the server from listening on IPv6,
or 2) add bigbox and bigbox.skynet to /etc/hosts (sketched below).
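
For the second workaround, the /etc/hosts entry might look like this
(address hypothetical):

192.168.1.10   bigbox bigbox.skynet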

Let me note that systemd-resolve { bigbox | bigbox.skynet } always returns
the IPv4 address...





Re: Is there something similar like flashback query from Oracle planned for PostgreSQL

2021-06-23 Thread Vijaykumar Jain
On Thu, 24 Jun 2021 at 00:24, Dirk Krautschick <
dirk.krautsch...@trivadis.com> wrote:

> Hi,
> Is there something planned to get a behaviour like Oracle's flashback
> query based on the old values before they are deleted by vacuum?
>
> So a feature to recreate old versions of rows if still there?
>
> Or are there any related extensions or tools doing this?
>
>
PostgreSQL has external tools like Barman that ship WALs to a different
location for point-in-time recovery.
That way, you can restore the db to any point in the past since the time
you started collecting WALs.
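
For example (a sketch only; the server name, backup ID and paths are
hypothetical), point-in-time recovery with Barman looks roughly like:

barman backup pg                  # take a base backup of server "pg"
barman list-backup pg             # pick a backup ID, e.g. 20210623T120000
barman recover pg 20210623T120000 /var/lib/pgsql/13/data \
    --target-time "2021-06-23 18:30:00"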

Barman Manual: https://pgbarman.org/
If this is not the same, then please ignore the above :)


-- 
Thanks,
Vijay
Mumbai, India


Re: Postgres PANIC when it could not open file in pg_logical/snapshots directory

2021-06-23 Thread Vijaykumar Jain
On Tue, 22 Jun 2021 at 14:34, Vijaykumar Jain <
vijaykumarjain.git...@gmail.com> wrote:

>
> On Tue, 22 Jun 2021 at 13:32, Mike Yeap  wrote:
>
>> Hi all,
>>
>> I have a Postgres version 11.11 configured with both physical replication
>> slots (for repmgr) as well as some logical replication slots (for AWS
>> Database Migration Service (DMS)). This morning, the server went panic with
>> the following messages found in the log file:
>>
>> 2021-06-22 04:56:35.314 +08 [PID=19457 application="[unknown]"
>> user_name=dms database=** host(port)=**(48360)] PANIC:  could not open file
>> "pg_logical/snapshots/969-FD606138.snap": Operation not permitted
>>
>> 2021-06-22 04:56:35.317 +08 [PID=1752 application="" user_name= database=
>> host(port)=] LOG:  server process (PID 19457) was terminated by signal 6:
>> Aborted
>>
>> 2021-06-22 04:56:35.317 +08 [PID=1752 application="" user_name= database=
>> host(port)=] LOG:  terminating any other active server processes
>>
>
>
I just tried the case below, where a running logical replication setup is
denied access to the snapshot folder via chattr +i, to make pg crash at the
publisher. I was speculating that something had occurred at the filesystem
level; this may not be related, but I'm just trying to find a scenario that
simulates a repeatable crash.

A (port 5001 publisher) -> logical replication all tables -> B(port 5002
subscriber), all working fine.

postgres@db:~/playground/logical_replication$ psql -p 5001 -c 'select
count(1) from t;'
 count
-------
  1000
(1 row)

postgres@db:~/playground/logical_replication$ psql -p 5002 -c 'select
count(1) from t;'
 count
-------
  1000
(1 row)

Now I change the file attributes of the pg_logical folder at the publisher:

root@db:/home/postgres/playground/logical_replication/db1/pg_logical# cd
/home/postgres/playground/logical_replication/db1/pg_logical
root@db:/home/postgres/playground/logical_replication/db1/pg_logical# lsattr
--e- ./mappings
--e- ./snapshots
--e- ./replorigin_checkpoint
root@db:/home/postgres/playground/logical_replication/db1/pg_logical#
chattr -R +i *   # do not allow mod
root@db:/home/postgres/playground/logical_replication/db1/pg_logical# lsattr
i-e- ./mappings
i-e- ./snapshots
i-e- ./replorigin_checkpoint

 psql -p 5001 -c 'delete from t; checkpoint;'  # and crash

2021-06-24 00:22:36.998 IST [2899] LOG:  could not remove file
"pg_logical/snapshots/0-16CE9D8.snap": Operation not permitted
2021-06-24 00:22:36.998 IST [2899] LOG:  could not remove file
"pg_logical/snapshots/0-16E72B8.snap": Operation not permitted
2021-06-24 00:22:36.998 IST [2899] LOG:  could not remove file
"pg_logical/snapshots/0-16CE9A0.snap": Operation not permitted
2021-06-24 00:22:36.998 IST [2899] LOG:  could not remove file
"pg_logical/snapshots/0-16E7398.snap": Operation not permitted
2021-06-24 00:22:37.003 IST [2899] PANIC:  could not open file
"pg_logical/replorigin_checkpoint": Operation not permitted
2021-06-24 00:22:37.092 IST [2897] LOG:  checkpointer process (PID 2899)
was terminated by signal 6: Aborted
2021-06-24 00:22:37.092 IST [2897] LOG:  terminating any other active
server processes
WARNING:  terminating connection because of crash of another server process
DETAIL:  The postmaster has commanded this server process to roll back the
current transaction and exit, because another server process exited
abnormally and possibly corrupted shared memory.
HINT:  In a moment you should be able to reconnect to the database and
repeat your command.
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
connection to server was lost
2021-06-24 00:22:37.093 IST [2897] LOG:  all server processes terminated;
reinitializing
postgres@db:~/playground/logical_replication$ 2021-06-24 00:22:37.100 IST
[2920] LOG:  database system was interrupted; last known up at 2021-06-24
00:20:54 IST
2021-06-24 00:22:37.100 IST [2921] FATAL:  the database system is in
recovery mode
2021-06-24 00:22:37.210 IST [2920] LOG:  could not open file
"./pg_logical/snapshots/0-16CE9D8.snap": Operation not permitted
2021-06-24 00:22:37.210 IST [2920] LOG:  could not open file
"./pg_logical/snapshots/0-16E72B8.snap": Operation not permitted
2021-06-24 00:22:37.210 IST [2920] LOG:  could not open file
"./pg_logical/snapshots/0-16CE9A0.snap": Operation not permitted
2021-06-24 00:22:37.210 IST [2920] LOG:  could not open file
"./pg_logical/snapshots/0-16EF1E0.snap": Operation not permitted
2021-06-24 00:22:37.210 IST [2920] LOG:  could not open file
"./pg_logical/snapshots/0-16E7398.snap": Operation not permitted
2021-06-24 00:22:37.210 IST [2920] LOG:  could not open file
"./pg_logical/replorigin_checkpoint": Operation not permitted
2021-06-24 00:22:37.212 IST [2920] LOG:  database system was not properly
shut down; automatic recovery in progress
2021-06-24 00:22:37.214 IST [2920] LOG:  redo starts at 0/1

Is there something similar like flashback query from Oracle planned for PostgreSQL

2021-06-23 Thread Dirk Krautschick
Hi,

just a curious question...

Is there something planned to get a behaviour like Oracle's flashback query
based on the old values before they are deleted by vacuum?

So a feature to recreate old versions of rows if still there?

Or are there any related extensions or tools doing this?

Thanks

Dirk




Re: Psql wants to use IP6 when connecting to self using tcp...

2021-06-23 Thread Peter J. Holzer
On 2021-06-21 19:44:16 -0500, Jerry Levan wrote:
> The below fails... the 'fe80' is the ip6 address of bigbox.
> [jerry@bigbox ~]$ psql -h bigbox
> psql: error: FATAL:  no pg_hba.conf entry for host 
> "fe80::fe3f:dbff:fed1:f62e%enp7s0", user "lxxx", database "lxxx", SSL off

This looks like a configuration error in your DNS. FE80:... is a
link-local address, and you generally don't want them in DNS. Either
remove that completely (as others have already suggested) or replace it
with the global unicast (or at least unique local) IPv6 address (and
then add that to pg_hba.conf).

If the address isn't in DNS, check your /etc/hosts.

hp

-- 
   _  | Peter J. Holzer    | Story must make more sense than reality.
|_|_) |                    |
| |   | h...@hjp.at        |    -- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |       challenge!"




Re: How to hash a large amount of data within Postgres?

2021-06-23 Thread Peter J. Holzer
On 2021-06-21 15:53:09 +0200, Thorsten Schöning wrote:
> Some years ago I implemented some SQL to read all files, build a table
> of SHA256 hashes and tell me how much data is redundant. The goal was
> to have a look at which files share the same hash with different LOIDs
> and optionally change that, so that all those files are only stored
> once in the end.
> 
> While the approach was pretty naive, because it simply read all files
> into memory to calculate the hashes, I'm somewhat sure it worked in
> the past with Postgres 9.6. The executing server had enough free RAM
> available as well to process the at most ~4 GiB large files one after
> another.
> 
> I tried that SQL today with Postgres 11 on UB 18.04 and it failed:
> 
> > [Code: 0, SQL State: XX000]  ERROR: invalid memory alloc request size
> > 1898107949
> >   CONTEXT: PL/pgSQL function loid_hash_calc(oid,text)[...]
[...]
> I searched regarding that issue and only found two relevant results:
> Corrupted rows for some reason and simply size restrictions when
> allocating memory. The latter is more likely than the former in my
> case, as the restriction seems to be 1 GiB and I do have larger
> files.

1 GB is the maximum size of quite a few data structures in PostgreSQL. I
don't use PL/pgSQL, but I wouldn't be surprised if it was the maximum
size of whatever loread() returns (a bytea?). I would be surprised if
this limit was higher in version 9.6 than it is in version 11, however.


> I'm doing the following simply currently, because I didn't find any
> interfaces allowing to forward blocks of data, LOIDs, file descriptors
> or anything like that working with smaller buffers or alike.
> 
> > fd      := lo_open(loid, INV_READ);
> > size    := lo_lseek(fd, 0, SEEK_END);
> > PERFORM lo_lseek(fd, 0, SEEK_SET);
> 
> > hashBin := digest(loread(fd, size), algorithm);
> > hashHex := encode(hashBin, 'hex');
> 
> So, is there any way to work around the problem I have currently?

Normally, hash libraries have a way to feed chunks of data into a hash
computation to avoid having to keep the whole thing in RAM.
The pgcrypto extension seems to be lacking such functionality, however.

I would build something similar to a Merkle tree:

Choose a convenient chunk size (a few MB is probably ok), read the large
object in chunks of this size, computing the hash for each. Concatenate
all the hashes and compute the hash of that. Add intermediate levels if
the concatenated hashes are still too large to fit in memory.

hp

-- 
   _  | Peter J. Holzer    | Story must make more sense than reality.
|_|_) |                    |
| |   | h...@hjp.at        |    -- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |       challenge!"




Re: [postgis-users] Exclusion constraint with custom operator not working as expected

2021-06-23 Thread Komяpa
Hello,

The reason the constraint is not working is that a GiST scan using your
operator does not return what you expect. Deeper debugging is needed on your
side to make that select return the rows you need.


12:23:37 [kom] > explain select * from test_1 where g |*| 'LINESTRING(10 10,50 50)';
┌──────────────────────────────────────────────────────────────────────────────────┐
│                                    QUERY PLAN                                     │
├──────────────────────────────────────────────────────────────────────────────────┤
│ Index Scan using test_1_g_excl on test_1  (cost=0.14..22.26 rows=635 width=36)   │
│   Index Cond: (g |*| '01020002002440244049404940'::geometry)                      │
└──────────────────────────────────────────────────────────────────────────────────┘
(2 rows)

Time: 0,916 ms
12:23:41 [kom] > select * from test_1 where g |*| 'LINESTRING(10 10,50 50)';
┌─────┬───┐
│ fid │ g │
├─────┼───┤
└─────┴───┘
(0 rows)

Time: 0,638 ms
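
One cheap way to confirm it is the index scan (and not the operator function
itself) that misbehaves - a debugging sketch against the test_1 table from
the quoted message below:

set enable_indexscan = off;
set enable_bitmapscan = off;
select * from test_1 where g |*| 'LINESTRING(10 10,50 50)';
-- a seq scan evaluates st_equals() directly and does return the rows,
-- which points at the GiST opclass entry rather than the operator
reset enable_indexscan;
reset enable_bitmapscan;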


On Tue, Jun 22, 2021 at 11:30 PM Rhys A.D. Stewart 
wrote:

> Greetings All,
>
> Firstly, apologies for cross posting.
> I would like to create a table which will contain postGIS geometries,
> specifically linestrings.  Each line string should be unique, unique in the
> sense that no linestring should st_equals any other. (see
> https://postgis.net/docs/manual-3.1/ST_Equals.html)
>
> So, LINESTRING(10 10, 50 50) and LINESTRING(50 50, 10 10) are "st_equal".
>
>  I did the following:
>
> BEGIN;
>
> DROP OPERATOR IF EXISTS |*| (geometry, geometry) CASCADE;
>
> CREATE OPERATOR |*| (
> FUNCTION = st_equals,
> LEFTARG = geometry,
> RIGHTARG = geometry,
> COMMUTATOR = |*|
> );
>
> CREATE OPERATOR CLASS my_ops FOR TYPE geometry
> USING gist FAMILY gist_geometry_ops_2d AS
> OPERATOR 99 |*| (geometry, geometry);
>
> -- This returns True
> SELECT  'LINESTRING(10 10, 50 50)'::geometry |*| 'LINESTRING(50 50, 10
> 10)'::geometry;
>
> DROP TABLE IF EXISTS test_1 ;
> CREATE TABLE test_1 (
> fid integer PRIMARY KEY GENERATED ALWAYS AS IDENTITY,
> g geometry,
> EXCLUDE USING GIST (g WITH |*|)
> );
>
> INSERT INTO test_1 (g) VALUES ('LINESTRING(10 10, 50 50)') ON CONFLICT DO
> NOTHING;
> INSERT INTO test_1 (g) VALUES ('LINESTRING(50 50, 10 10)') ON CONFLICT DO
> NOTHING; -- This should do nothing;
>
> SELECT fid, st_astext(g) FROM test_1; -- both rows returned, exclusion
> doesn't work as I think it should.
>
> ROLLBACK;
>
> But where I expected the second insert to 'DO NOTHING', it very much did
> something. So clearly I am missing something somewhere or my understanding
> of exclusion constraints is lacking...or both.  Any suggestions to get the
> desired outcome? (Using a trigger doesn't count :-D )
>
> Best,
> Rhys
> Peace & Love | Live Long & Prosper
> ___
> postgis-users mailing list
> postgis-us...@lists.osgeo.org
> https://lists.osgeo.org/mailman/listinfo/postgis-users
>


-- 
Darafei "Komяpa" Praliaskouski
OSM BY Team - http://openstreetmap.by/