merge with view

2024-03-11 Thread Lorusso Domenico
Hello guys,
MERGE doesn't work on a view; does anyone know why?
I mean, MERGE could be performed in many ways, but essentially it is:

   - join sets
   - if matched update or delete
   - if not matched insert

it doesn't seem relevant whether the set is a table or a view.

Moreover, "INSERT ... ON CONFLICT" also doesn't work with a view (in my case
it is a view on a single table with an INSTEAD OF trigger).

Is there a way to work around this issue?
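A possible workaround seems to be an explicit writable-CTE upsert against the
view; here is a minimal sketch with a hypothetical view my_view(id, val)
(note it has none of the concurrency protection that ON CONFLICT gives):

with incoming(id, val) as (
    values (1, 'a'), (2, 'b')
), updated as (
    update my_view v
       set val = i.val
      from incoming i
     where v.id = i.id
    returning v.id
)
insert into my_view (id, val)
select i.id, i.val
from incoming i
where not exists (select 1 from updated u where u.id = i.id);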

-- 
Domenico L.

per stupire mezz'ora basta un libro di storia,
io cercai di imparare la Treccani a memoria... [F.d.A.]


extract ddl to devops pipeline

2024-03-06 Thread Lorusso Domenico
Hello guys,
I need to export the DDL (tables, functions, views) of some schemas (with
cross-references) to load them into a DevOps-like pipeline.

Problem: export/backup doesn't care about the appropriate ordering of
objects, because that is resolved at import time.

So, is there a way to automatically generate the DDL in the right order?

I mean, if function Foo references table Bar in its input/output definition
(or in its DECLARE section?), I need to create the table before the
function.
And if Bar uses function Lir (in a trigger? a check? elsewhere?), Lir must be
created before Bar.
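A possible starting point (only a sketch): pg_dump --schema-only already
emits objects in dependency order, and the dependencies the catalog tracks
can be listed from pg_depend; note that references inside PL/pgSQL function
bodies (e.g. a table used only in a function's code) are not recorded there:

select pg_describe_object(d.classid, d.objid, d.objsubid)          as dependent,
       pg_describe_object(d.refclassid, d.refobjid, d.refobjsubid) as referenced
from pg_depend d
where d.deptype = 'n'          -- "normal" dependencies only
  and d.refobjid >= 16384;     -- skip system objects (user OIDs start at 16384)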


Migrate schemas

2024-02-08 Thread Lorusso Domenico
Hello guys,
I have 4 schemas with cross references (e.g.: a function refers to a
rowtype of a table of another schema, or a table invokes a function).

When backing up the schemas with pgAdmin, the resulting script doesn't follow
the correct order to ensure the objects can be created.

Is there a way to tell PostgreSQL to check the references at the end of the
script? Or a way to arrange the DDL in the right order?

Thanks!


-- 
Domenico L.


Re: select from composite type

2024-02-05 Thread Lorusso Domenico
Ah, ehm... I solved it; it was very easy, but I believed I had to use a FROM
clause...

execute 'select ($1).* ' using _attribute into _r;
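A minimal self-contained version of the same idea, assuming a composite type
temp1.my_type(foo int, bar text) like the one in the example below:

do $$
declare
    _attribute temp1.my_type := row(1, 'Doh');
    _r record;
begin
    execute 'select ($1).*' using _attribute into _r;
    raise notice '%', _r;   -- prints (1,Doh)
end;
$$;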

Il giorno mar 6 feb 2024 alle ore 01:01 Lorusso Domenico <
domenico@gmail.com> ha scritto:

> here an example (the actual case in more complex, but the point it's the
> same)
>
> do $$
> declare
> _attribute_list temp1.my_type[];
> _attribute temp1.my_type;
>
> _r record;
> begin
> _attribute_list=array[row(1,'Hello') , row(2,'Goodbye')];
>
> _attribute= row(1,'Doh');
>
> raise notice '%', _attribute_list;
>
> for _r in execute 'select * from unnest($1) where foo=1' using
> _attribute_list loop
> raise notice '%', _r;
> end loop;
>
> --Error
> execute 'select * from $1' using _attribute into _r;
>
> raise notice '%', _r;
> end;
> $$;
>
> So I able to manage an array of complex type (why I use an array, because
> in a previous answer the community suggest to me to use an array to pass a
> list of information instead of temporary table), but I can't do the same
> thing with just an element.
>
> Of course I can set an element as part of an array with just that element
> but. it's sad...
>
>
> Il giorno lun 5 feb 2024 alle ore 01:48 David G. Johnston <
> david.g.johns...@gmail.com> ha scritto:
>
>> On Sun, Feb 4, 2024 at 5:39 PM Adrian Klaver 
>> wrote:
>>
>>>
>>> >
>>> > attribute_list is  an array of composite type (with 20 fields).
>>>
>>> I am trying to wrap my head around "array of composite type". Please
>>> provide an example.
>>>
>>
>> ARRAY[ (1,2)::point, (3,4)::point ]::point[]
>>
>> The main problem is the concept of writing "from($1)" in any query makes
>> no sense, you cannot parameterize a from clause directly like that.  You
>> have to put the value somewhere an expression is directly allowed.
>>
>> David J.
>>
>
>
> --
> Domenico L.
>
> per stupire mezz'ora basta un libro di storia,
> io cercai di imparare la Treccani a memoria... [F.d.A.]
>


-- 
Domenico L.

per stupire mezz'ora basta un libro di storia,
io cercai di imparare la Treccani a memoria... [F.d.A.]


Re: select from composite type

2024-02-05 Thread Lorusso Domenico
Here is an example (the actual case is more complex, but the point is the
same):

do $$
declare
_attribute_list temp1.my_type[];
_attribute temp1.my_type;

_r record;
begin
_attribute_list=array[row(1,'Hello') , row(2,'Goodbye')];

_attribute= row(1,'Doh');

raise notice '%', _attribute_list;

for _r in execute 'select * from unnest($1) where foo=1' using
_attribute_list loop
raise notice '%', _r;
end loop;

--Error
execute 'select * from $1' using _attribute into _r;

raise notice '%', _r;
end;
$$;

So I'm able to manage an array of a complex type (I use an array because, in
a previous answer, the community suggested using an array instead of a
temporary table to pass a list of information), but I can't do the same
thing with just a single element.

Of course I can wrap the single element in a one-element array, but... it's
sad...


Il giorno lun 5 feb 2024 alle ore 01:48 David G. Johnston <
david.g.johns...@gmail.com> ha scritto:

> On Sun, Feb 4, 2024 at 5:39 PM Adrian Klaver 
> wrote:
>
>>
>> >
>> > attribute_list is  an array of composite type (with 20 fields).
>>
>> I am trying to wrap my head around "array of composite type". Please
>> provide an example.
>>
>
> ARRAY[ (1,2)::point, (3,4)::point ]::point[]
>
> The main problem is the concept of writing "from($1)" in any query makes
> no sense, you cannot parameterize a from clause directly like that.  You
> have to put the value somewhere an expression is directly allowed.
>
> David J.
>


-- 
Domenico L.

per stupire mezz'ora basta un libro di storia,
io cercai di imparare la Treccani a memoria... [F.d.A.]


select from composite type

2024-02-04 Thread Lorusso Domenico
Hello guys,
I'm trying to find the equivalent behaviour of unnest when I've got just a
single composite value.

The statement is something like this (but more complex):

 _sqlUpdate text=$$
with s as (
select * from ($1)
)
update myView as q set
(attribute_fullname, modify_user_id)
=(s.attribute_fullname, $2)
where  s.product_code=q.product_code
and s.attribute_uid=q.attribute_uid
$$;

And I execute it with this command:
execute _sqlUpdate using  attribute_list[_i], _modify_user_id;

attribute_list is an array of a composite type (with 20 fields).

If I use unnest on the entire array the SQL works, but I can't find a way to
treat a single record as a row of a table.


Any suggestions?

Bye and thanks!

Domenico


Re: Efficient Partitioning Strategies for PostgreSQL Table with KSUID and High Volume

2023-09-05 Thread Lorusso Domenico
Hello,
The increase in access cost is logarithmic when we access by index, so
partitioning often isn't a way to improve performance, but rather a way to
cope with the human difficulty of managing huge amounts of data, to develop
expensive parallel jobs and, in some cases, to improve performance for
sequential reads.

If I understand your need correctly you have 2 topics:
- read by KSUID --> no issue
- read by user_id --> could be fixed by an index, but, because the records
are scattered, the access plan uses many separate index accesses

Some numbers:
each week the table grows by 5 million (5 * 10^6) records;
after 10 years (520 weeks) the table should have about 2.6 billion records.
A B-tree with a fanout of 64 means you need roughly log64(2.6 * 10^9) ≈ 5-6
accesses to reach the right index entry, plus 1 access to retrieve the row.

This doesn't seem to be a great problem.

But if you have just 100 users, the planner estimates it must fetch
2.6*10^9 / 10^2 = 2.6*10^7 rows per user by index, and each fetch actually
costs about 6 accesses.

But probably you don't need to read all the records inserted or modified by
a user_id.

An approach could be to create generational partitions.
The main partition stores just the most recent records (based on some
assumption you have to define); the older records can be moved to other
historical partition(s).

This could be realized with a technical field whose value drives the
partition identification (sketched below):
field_for_partition=current --> main partition
field_for_partition=h01 --> historical partition 1
field_for_partition=h02 --> historical partition 2
and so on.
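A minimal sketch of this "generational" layout with list partitioning (table
and column names are hypothetical):

create table events (
    id                  bigint      not null,
    user_id             bigint      not null,
    created_at          timestamptz not null,
    field_for_partition text        not null default 'current'
) partition by list (field_for_partition);

create table events_current partition of events for values in ('current');
create table events_h01     partition of events for values in ('h01');
create table events_h02     partition of events for values in ('h02');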

Unfortunately, this approach requires a cron job that every x days changes
the value of field_for_partition and performs a vacuum.

A simpler solution is to create an index on user_id + creation_date and
ALWAYS access by user_id and creation_date > now() - interval 'xx days'.
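For example (a sketch, reusing the hypothetical events table above):

create index events_user_recent_idx on events (user_id, created_at);

select *
from events
where user_id = 42
  and created_at > now() - interval '30 days'
order by created_at desc;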

But if there are users that insert a record once every 2 months and users
that insert 1000 records per day, this approach may not fit your need.
Maybe you also need to store the last insertion time for each user, and
also the number of records inserted.
In other words, you may have to find a way to define the right number of
days for each user.






Il giorno mar 5 set 2023 alle ore 10:24 wheels  ha
scritto:

> Hi Team,
>
> I'm working with a PostgreSQL table containing terabytes of data, and
> it grows by millions of rows weekly. Each row is identified by a
> [KSUID][1], and my primary read patterns are:
>
> 1. Retrieve a row by its KSUID.
> 2. List rows by `user_id` in descending order, pagination acceptable.
>
> Currently, the table is unpartitioned and read performance is
> sluggish. I'm contemplating partitioning the table by month using the
> KSUID column, [leveraging its embedded uint32 timestamp][2], something
> like this:
>
> ```sql
> CREATE TABLE table_y2023m09 PARTITION OF ksuid
> FOR VALUES FROM ('[current_month_ts][128 zeros]') TO
> ('[next_month_ts][128 zeros]')
> ```
>
> This allows each 'Get row by KSUID' query to be isolated to a single
> partition.
>
> For listing rows by `user_id`, I'm considering keyset pagination:
>
> ```sql
> SELECT *
> FROM table_name
> WHERE user_id = ?
>   AND ksuid > last_seen_ksuid
> ORDER BY ksuid
> LIMIT 10;
> ```
>
> However, this method still would need to search through multiple
> partitions depending on `last_seen_ksuid`, but I guess that with an
> index by `user_id` might be enough.
>
> ### Questions:
>
> 1. Is using KSUID as a partitioning key viable, especially given that
> the column can be represented as text or bytes?
> 2. Is there a more efficient way to implement listing by `user_id`
> other than keyset pagination?
> 3. Are there any pitfalls or performance issues I should be aware of
> with this approach?
> 4. Would it be better to just partition based on `created_at` and
> extract the timestamp from the ksuid on application layer and add it
> explicitly to the query?
>
> Thank you very much,
> Best regards.
>
>   [1]: https://github.com/segmentio/ksuid
>   [2]:
> https://github.com/segmentio/ksuid#:~:text=a%2032%2Dbit%20unsigned%20integer%20UTC%20timestamp
>
>
>

-- 
Domenico L.

per stupire mezz'ora basta un libro di storia,
io cercai di imparare la Treccani a memoria... [F.d.A.]


Array vs Temporary table vs Normal Table + truncate at end

2023-09-04 Thread Lorusso Domenico
Hello guys,
I prepared a function that builds a set of records of type myComplexType.

That works, but I have to use this record set in different situations.

One of them requires scanning the function's results multiple times:

   1. compare with another table (myProducteAttributeTable) to determine
   if I need to delete old records
   2. check if the recordset contains attributes never registered before
   3. in that case, insert them into the appropriate table (myAttribute) and
   retrieve the attribute uid and type
   4. finally, update the first table (myProducteAttributeTable) with the new
   information coming from the recordset (with a merge)


So, which is the best approach?


   - Store the result in an array and use unnest in each subsequent query
   (with a complex record type?)
   - Create a temporary table (ON COMMIT DELETE ROWS) with the same
   structure? But in this case I can't declare a cursor for the table, and I
   can't use UPDATE ... WHERE CURRENT OF (for point 3)
   - Create a normal table and truncate it as part of the execution, but what
   happens in case of parallel execution?


rollback to savepoint issue

2023-09-04 Thread Lorusso Domenico
Hello guys,
I can't use the SAVEPOINT and ROLLBACK TO SAVEPOINT clauses.
I've found some similar problems around the web, but I can't figure out the
right way to proceed.

What I'm trying to do is:

   - compare the new set of attributes with the old one
   - if some attributes are part of the old set and not of the new one:
   delete the old ones
   - but if the parameter "on_missing_delete" is false, roll back the delete
   command and raise an exception

The original code is embedded in a function, but the problem is the same:
ERROR:  syntax error at or near "to"
LINE 41:   rollback to savepoint deleteAttribute;
^

SQL state: 42601
Character: 768


Code extracted:
do $body$
declare
on_missing_delete  boolean=false;
_i integer;
_vAttributeName text='paperi';
_importo integer= 5000;
begin
savepoint deleteAttribute;

execute format($$
with ru as (
update  public.%I as q
set is_closed=true
, modify_user_id='process:definitionUpdate'
where q.importo > $1
returning 1
)
select count(*) from ru
;$$
, _vAttributeName)
using _importo
into _i;
--> If I can't delete and there are rows to delete, raise an exception
if not on_missing_delete  and _i > 0 then
--> undo deletion
rollback to savepoint deleteAttribute;

raise exception 'New attributes list foresees to remove % attribute(s), but
parameter "on missing delete" is false.', _i;
else
release savepoint deleteAttribute;
end if;

rollback;
end;
$body$;


I know I can modify the code, but I wish to understand why I can't use
ROLLBACK TO SAVEPOINT.
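For reference, a minimal sketch of the usual PL/pgSQL alternative: a
BEGIN ... EXCEPTION sub-block runs in its own subtransaction, so raising an
exception inside it undoes the UPDATE (table and column names are taken from
the example above and assumed to exist):

do $body$
declare
    _i integer;
begin
    begin  -- sub-block = implicit savepoint
        update public.paperi
        set is_closed = true
          , modify_user_id = 'process:definitionUpdate'
        where importo > 5000;

        get diagnostics _i = row_count;

        if _i > 0 then
            raise exception 'New attributes list foresees to remove % attribute(s).', _i;
        end if;
    exception
        when raise_exception then
            -- at this point the UPDATE above has already been rolled back
            raise notice 'deletion undone: %', sqlerrm;
    end;
end;
$body$;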


Schema renaming cascade

2023-08-17 Thread Lorusso Domenico
Hello guys,
I need to rename a schema, including each reference to it (also inside
functions).

I mean:
I've got 2 schemas called "schema1" and "schema2".
In schema1 there are tables, composite types, and functions;
functions call other functions in the same schema.

In schema2 there are also tables, composite types, and functions,
plus many functions and tables that refer to objects of schema1.


I need to rename schema1 to schema3, substituting any reference to schema1
with schema3.
Is there a way to do that?

I found an answer that clones a schema (which partially solves the problem),
but does it still work on v15?

In the end, I need to replace schema1 with schema3 in each function (because
tables and types are updated automatically).

Any suggestions?
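In case it helps, a minimal sketch of the mechanical part: ALTER SCHEMA
handles the rename itself (tables and types follow automatically), and a
simple text search can then list the functions whose bodies still mention
the old name, to be edited by hand:

alter schema schema1 rename to schema3;

select p.oid::regprocedure as function_to_review
from pg_proc p
join pg_namespace n on n.oid = p.pronamespace
where n.nspname in ('schema2', 'schema3')
  and p.prokind = 'f'                              -- plain functions only
  and pg_get_functiondef(p.oid) like '%schema1%';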


Re: Dynamically accessing columns from a row type in a trigger

2023-08-17 Thread Lorusso Domenico
Well, some weeks ago I read an article about that.

The most efficient approach (in pl/pgsql) is to use hstore.

For a similar need, in the end, I created a group of functions that store in
an internal table the data structure of each table covered by the same
generic trigger.
I also store the primary key.
The goal of this approach is to compare old and new values easily,
discarding automatic and special (for my project) fields, so I can manage
historical versions, audit information etc...
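A minimal sketch of the hstore idea applied to the question below (requires
CREATE EXTENSION hstore; the key column name is passed as a trigger
argument, so all names here are hypothetical):

create or replace function pk_from_old() returns trigger
language plpgsql as $$
declare
    pk_col text := TG_ARGV[0];                -- column name given in CREATE TRIGGER
    pk_val text := hstore(OLD) -> pk_col;     -- dynamic lookup, no EXECUTE needed
begin
    raise notice 'old primary key value: %', pk_val;
    return OLD;
end;
$$;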



Il giorno sab 12 ago 2023 alle ore 22:16 Rhys A.D. Stewart <
rhys.stew...@gmail.com> ha scritto:

> Greetings all,
>
> I am writing a trigger and would like to know how to dynamically
> access a column from the "OLD" variable. pk_col is the column name
> from the table.
>
> I've come up with either doing this:
> EXECUTE format('SELECT $1.%1$I', pk_col) INTO pk_val USING OLD;
>
> which looks a bit excessive, or this
>
> pk_val = to_jsonb(OLD.*)->pk_col
>
> which looks cleaner, but then I am having to incur a little overhead
> by using the to_jsonb function. Ideally, something like this would be
> great:
>
> pk_val = OLD[pk_col]
>
> but evidently we can't subscript ROW types.
>
> Am I missing out on a simpler or more elegant solution?
>
> Rhys
> Peace & Love | Live Long & Prosper
>
>
>

-- 
Domenico L.

per stupire mezz'ora basta un libro di storia,
io cercai di imparare la Treccani a memoria... [F.d.A.]


Re: How to add function schema in search_path in option definitio

2023-07-12 Thread Lorusso Domenico
Eh... yes, to be honest what I created is more similar to an extension than
to a package; I don't believe I'm able to create extensions in Cloud SQL
(the actual target DB).

However, is there a way to share this preliminary "extension" with the
community?
I'm interested in discussing the solutions used.

For example: the software creates a custom type to manage bitemporality and
the audit record inline for each table; but another approach is to use a
primitive table inherited by all tables.
I'm quite sure the composite type has one or two fields I can remove.
etc. :-D

Il giorno mar 11 lug 2023 alle ore 22:51 David G. Johnston <
david.g.johns...@gmail.com> ha scritto:

> On Sat, Jul 8, 2023 at 10:00 AM Lorusso Domenico 
> wrote:
>
>> Hello Adrian,
>> I've created a schema to handle some specific features.
>> In the schema there are a couple of tables used by many functions (more
>> than 20).
>> In other words, I've created a schema as a package (as suggested in many
>> points).
>>
>> I wish, in a function of this schema, to be able to call each other
>> functions of this schema without adding the schema name in the call
>> instruction.
>>
>
> PostgreSQL isn't really designed for that - especially if you aren't
> formally creating an extension but simply putting stuff into a schema.
>
> The script code you use to install your makeshift package should handle
> dynamic schema naming.  It's a pain, do you really need to allow the name
> of the schema to be unknown at installation time?
>
> You should read up on how extensions are implemented - you are basically
> writing your own CREATE EXTENSION implementation.
>
>
> https://www.postgresql.org/docs/current/extend-extensions.html#EXTEND-EXTENSIONS-RELOCATION
>
> In any case, there really aren't any smarts here: explicitly schema
> qualify your function calls and forget that search_path even exists.
> Unless you are writing custom operators, and even then, consider
> search_path to be evil.
>
> David J.
>
>

-- 
Domenico L.

per stupire mezz'ora basta un libro di storia,
io cercai di imparare la Treccani a memoria... [F.d.A.]


Re: How to add function schema in search_path in option definitio

2023-07-11 Thread Lorusso Domenico
Adrian, I have read your suggestions, but:
1) I need to change the path inside a group of functions (like a package),
not for every connection
2) yes, inside a function it is clear I can do that
3) same as 1 and, moreover, I can't be sure I can create my own role,
because the target DB will be Cloud SQL

My question doesn't aim just to fix this specific issue, but to learn the
best way (if one exists) to solve this class of issues in PostgreSQL.

Il giorno mar 11 lug 2023 alle ore 21:41 Adrian Klaver <
adrian.kla...@aklaver.com> ha scritto:

> On 7/11/23 12:35, Lorusso Domenico wrote:
> > yes, but this is a code inside each function.
> >   having 20 functions I've to start executing this instruction 20 times,
> > one per function.
> > I know it could work, I looking for something of more smart :-)
>
> Here:
>
>
> https://www.postgresql.org/message-id/66677cb3-17b3-20d1-1c22-9a7ac3208921%40aklaver.com
>
> I provide options 1) and 3) to use instead of including the path change
> in the functions(s).
>
> >
> > Il giorno dom 9 lug 2023 alle ore 23:21 Adrian Klaver
> > mailto:adrian.kla...@aklaver.com>> ha
> scritto:
> >
> > On 7/9/23 13:47, Lorusso Domenico wrote:
> >  > Hello Adrian and Peter,
> >  > yes the set parameter in function definition is also my preferred
> >  > choice, but
> >  >
> >  > I need to add the schema to path, not to substitute the path,
> > this is my
> >  > problem
> >  >
> >
> > This post:
> >
> >
> https://www.postgresql.org/message-id/9f3db764-804e-7154-3f8c-f5f19628178e%40aklaver.com
> <
> https://www.postgresql.org/message-id/9f3db764-804e-7154-3f8c-f5f19628178e%40aklaver.com
> >
> >
> > showed a way to do that.
> >
> > --
> > Adrian Klaver
> > adrian.kla...@aklaver.com <mailto:adrian.kla...@aklaver.com>
> >
> >
> >
> > --
> > Domenico L.
> >
> > per stupire mezz'ora basta un libro di storia,
> > io cercai di imparare la Treccani a memoria... [F.d.A.]
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>

-- 
Domenico L.

per stupire mezz'ora basta un libro di storia,
io cercai di imparare la Treccani a memoria... [F.d.A.]


Re: Trigger Function question

2023-07-11 Thread Lorusso Domenico
I've just finished doing the same thing.
For my needs, I decided to create a table with the information I need for
each view (yes, I set the trigger on the view, not on the table).

Anyway, hstore is more performant than jsonb, but both of them can treat NEW
and OLD as a hashmap (or associative array).


Il giorno lun 10 lug 2023 alle ore 20:53 Christophe Pettus 
ha scritto:

>
>
> > On Jul 10, 2023, at 11:46, DAVID ROTH  wrote:
> >
> > Is there a way to get new.* into a jsonb column?
>
> The to_jsonb() function accepts a row type like NEW.*, and returns a JSONB
> object with the keys as column names.
>
>

-- 
Domenico L.

per stupire mezz'ora basta un libro di storia,
io cercai di imparare la Treccani a memoria... [F.d.A.]


Re: How to add function schema in search_path in option definitio

2023-07-11 Thread Lorusso Domenico
Yes, but this is code inside each function:
having 20 functions, I have to execute this instruction 20 times, once per
function.
I know it could work; I'm looking for something smarter :-)

Il giorno dom 9 lug 2023 alle ore 23:21 Adrian Klaver <
adrian.kla...@aklaver.com> ha scritto:

> On 7/9/23 13:47, Lorusso Domenico wrote:
> > Hello Adrian and Peter,
> > yes the set parameter in function definition is also my preferred
> > choice, but
> >
> > I need to add the schema to path, not to substitute the path, this is my
> > problem
> >
>
> This post:
>
>
> https://www.postgresql.org/message-id/9f3db764-804e-7154-3f8c-f5f19628178e%40aklaver.com
>
> showed a way to do that.
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>

-- 
Domenico L.

per stupire mezz'ora basta un libro di storia,
io cercai di imparare la Treccani a memoria... [F.d.A.]


Re: Moving data from huge table slow, min() query on indexed column taking 38s

2023-07-11 Thread Lorusso Domenico
Hello Dimitrios,
in the end, a table is a file plus many other functionalities, and these
functionalities consume resources.

If the DBMS (like Oracle or DB2) allows us to disable those functionalities,
we can perform a copy between tables; otherwise (and often also for Oracle
and DB2) the best approach is to use an export,
because export/import functionalities are heavily optimized for this job.

Anyway, when you approach this as a DBA you must block the DB, or at least
the table.
Don't try to reorg a schema or a DB with connected users.
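A minimal sketch of the export/import route with server-side COPY (table
names are taken from the thread below; the file path is hypothetical, and
server-side COPY needs superuser or the pg_read_server_files /
pg_write_server_files roles; psql's \copy is the client-side equivalent):

copy tbl_legacy to '/tmp/tbl_legacy.copy' (format binary);
copy tbl from '/tmp/tbl_legacy.copy' (format binary);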

Il giorno lun 10 lug 2023 alle ore 17:58 Dimitrios Apostolou 
ha scritto:

> Thanks! I have completed the transfer by taking down the database and
> working exclusively on it, but I still wonder how one could do it in
> production without exclusive locks. The loop with small DELETE...INSERT
> transactions I posted on the parent post bloats the table fast.
>
> The link you posted contains very useful info, I was not expecting that
> the constraints could blow up the memory like that. Comments from me:
>
> Disabling and then re-enabling the foreign key constraints is easily done
> with ALTER TABLE.
>
> Unfortunately it doesn't seem to be the same case for indices. One has to
> create the table without indices, and then create the indices separately.
> With such a process there is a risk of ending up with non-identical
> table schemas.
>
> By the way, with COPY one must use an intermediate file, right? There is
> no way to COPY from table to table directly?
>
>
> Thanks,
> Dimitris
>
> On Thu, 6 Jul 2023, Lorusso Domenico wrote:
>
> > Hello,
> > this is a standard problem during bulk copy.
> >
> > here some suggestions; for example disable indexes.
> >
> > The main issue is related to index, lock escalation and log writing.
> > In other dbms you should set log off on the table, but postgresql does
> not seem to have this feature.
> >
> > Anyway, using an explicit lock table exclusive should prevent lock
> escalation.
> >
> > So: disable indexes in target table
> > lock exclusive both table
> > insert data
> > truncate old table
> >
> > If this doesn't work you can consider using the copy command.
> >
> >
> >
> >
> >
> >
> >
> > Il giorno gio 6 lug 2023 alle ore 18:12 Dimitrios Apostolou <
> ji...@gmx.net> ha scritto:
> >   On Thu, 6 Jul 2023, Dimitrios Apostolou wrote:
> >
> >   > + First I chose the method to DELETE...INSERT everything.
> >
> >   Just to clarify, the query looked more like
> >
> >  WITH rows AS ( DELETE FROM tbl_legacy RETURNING * )
> >INSERT INTO tbl
> >  SELECT * FROM rows;
> >
> >   >   I noticed that the postgres process was growing without
> bounds, up to
> >   >   the point that it consumed almost all of swap space and I had
> to kill
> >   >   the command. Any idea why that happens?
> >
> >   Also note that my settings for work_mem, temp_buffers,
> shared_buffers etc
> >   are all well below the RAM size and postgres has never shown
> unbound
> >   growth again. Postgres version is 15.2.
> >
> >
> >   Dimitris
> >
> >
> >
> >
> >
> > --
> > Domenico L.
> >
> > per stupire mezz'ora basta un libro di storia,
> > io cercai di imparare la Treccani a memoria... [F.d.A.]
> >
> >
>


-- 
Domenico L.

per stupire mezz'ora basta un libro di storia,
io cercai di imparare la Treccani a memoria... [F.d.A.]


Re: How to add function schema in search_path in option definitio

2023-07-09 Thread Lorusso Domenico
Hello Adrian and Peter,
yes, setting the parameter in the function definition is also my preferred
choice, but

I need to add the schema to the path, not to replace the path; this is my
problem.

Il giorno dom 9 lug 2023 alle ore 13:02 Peter J. Holzer 
ha scritto:

> On 2023-07-08 19:00:02 +0200, Lorusso Domenico wrote:
> > I've created a schema to handle some specific features.
> > In the schema there are a couple of tables used by many functions (more
> than
> > 20).
> > In other words, I've created a schema as a package (as suggested in many
> > points).
>
> So this is one specific schema with a specific name and you don't need
> this to work in any schema the function happens to be created in?
>
> Then I think setting the search path as part of the function definition
> (as already hinted at by Adrian) may be the easiest solution:
>
> create function my_schema.foo (...)
> returns ...
> set search_path to my_schema, public
> as $$
> ...
> $$;
>
> You could also do something like:
>
> set search_path to my_schema, public;
> create function foo (...)
> returns ...
> set search_path from current
> as $$
> ...
> $$;
>
> hp
>
> --
>_  | Peter J. Holzer| Story must make more sense than reality.
> |_|_) ||
> | |   | h...@hjp.at |-- Charles Stross, "Creative writing
> __/   | http://www.hjp.at/ |   challenge!"
>


-- 
Domenico L.

per stupire mezz'ora basta un libro di storia,
io cercai di imparare la Treccani a memoria... [F.d.A.]


Re: How to add function schema in search_path in option definitio

2023-07-08 Thread Lorusso Domenico
Hello Adrian,
I've created a schema to handle some specific features.
In the schema there are a couple of tables used by many functions (more than
20).
In other words, I've created the schema as a package (as suggested in many
places).

I would like, in a function of this schema, to be able to call the other
functions of this schema without adding the schema name to the call.

For example:
in schema my_schema there are function foo and bar.

in foo I need to call bar:
foo as $$
begin
[...]
_returnOfBar=my_schema.bar(p1,p2,..,pn);
[...]
end;
$$

I'd like to use just _returnOfBar=bar(p1,p2,..,pn);









Il giorno sab 8 lug 2023 alle ore 17:46 Adrian Klaver <
adrian.kla...@aklaver.com> ha scritto:

> On 7/8/23 08:06, Lorusso Domenico wrote:
> > Hello guys,
> > there is a comfortable way to add the schema of the function as in
> > search path?
> >
> > something like
> > create my_schema.function() as $body$...$body$
> > set search_path to function_schema() || search_path
>
> You can do something like:
>
>
> show  search_path ;
> search_path
> -
>   public, history
>
>
> select set_config('search_path', 'test, ' ||
> current_setting('search_path'), 'f');
>set_config
> ---
>   test, public, history
>
> show  search_path ;
>search_path
> ---
>   test, public, history
>
> >
> > --
> > Domenico L.
> >
> > per stupire mezz'ora basta un libro di storia,
> > io cercai di imparare la Treccani a memoria... [F.d.A.]
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>

-- 
Domenico L.

per stupire mezz'ora basta un libro di storia,
io cercai di imparare la Treccani a memoria... [F.d.A.]


How to add function schema in search_path in option definitio

2023-07-08 Thread Lorusso Domenico
Hello guys,
is there a convenient way to add the function's own schema to the
search_path?

something like
create my_schema.function() as $body$...$body$
set search_path to function_schema() || search_path

-- 
Domenico L.

per stupire mezz'ora basta un libro di storia,
io cercai di imparare la Treccani a memoria... [F.d.A.]


Re: [bug]? insert returning composite type fails

2023-07-07 Thread Lorusso Domenico
Adrian, come on 🤭
This is a reduced example.


The real use case involves many tables with the bitemporal record.


However, I solved it by using a record type...

Il ven 7 lug 2023, 01:20 Adrian Klaver  ha
scritto:

> On 7/6/23 14:52, Lorusso Domenico wrote:
> > Hello guys,
> > In my db (version 15) I've defined a composite type with some domains
> >
> > CREATE DOMAIN my_feat.audit_record_jsonb_domain
> >  AS jsonb
> >  NOT NULL;
> >
> > ALTER DOMAIN my_feat.audit_record_jsonb_domain OWNER TO postgres;
> >
> > CREATE DOMAIN my_feat.boolean_true_domain
> >  AS boolean
> >  DEFAULT true
> >  NOT NULL;
> >
> > ALTER DOMAIN my_feat.boolean_true_domain OWNER TO postgres;
> > CREATE TYPE my_feat.bitemporal_record AS
> > (
> > user_ts_range tstzrange,
> > db_ts_range tstzrange,
> > has_future_record timestamp with time zone,
> > audit_record my_feat.audit_record_jsonb_domain,
> > is_valid my_feat.boolean_true_domain
> > );
> >
> > ALTER TYPE my_feat.bitemporal_record
> >  OWNER TO postgres;
> > So I've a table like that:
> > CREATE TABLE IF NOT EXISTS my_feat.try_bt_info
> > (
> >  id bigint NOT NULL GENERATED BY DEFAULT AS IDENTITY ( INCREMENT 1
> > START 1 MINVALUE 1 MAXVALUE 9223372036854775807 CACHE 1 ),
> >  bt_info my_feat.bitemporal_record,
> >  CONSTRAINT try_bt_info_pk PRIMARY KEY (id)
> > )
>
> Seems a long way around to arrive at:
>
> CREATE TABLE IF NOT EXISTS my_feat.try_bt_info
> (
> id bigint NOT NULL GENERATED BY DEFAULT AS IDENTITY ( INCREMENT 1 START
> 1 MINVALUE 1 MAXVALUE 9223372036854775807 CACHE 1 ) PRIMARY KEY,
> user_ts_range tstzrange,
> db_ts_range tstzrange,
> has_future_record timestamp with time zone,
> is_valid boolean NOT NULL 't'
> );
>
>
> > --
> > Domenico L.
> >
> > per stupire mezz'ora basta un libro di storia,
> > io cercai di imparare la Treccani a memoria... [F.d.A.]
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>


[bug]? insert returning composite type fails

2023-07-06 Thread Lorusso Domenico
Hello guys,
in my DB (version 15) I've defined a composite type with some domains:

CREATE DOMAIN my_feat.audit_record_jsonb_domain
AS jsonb
NOT NULL;

ALTER DOMAIN my_feat.audit_record_jsonb_domain OWNER TO postgres;

CREATE DOMAIN my_feat.boolean_true_domain
AS boolean
DEFAULT true
NOT NULL;

ALTER DOMAIN my_feat.boolean_true_domain OWNER TO postgres;
CREATE TYPE my_feat.bitemporal_record AS
(
user_ts_range tstzrange,
db_ts_range tstzrange,
has_future_record timestamp with time zone,
audit_record my_feat.audit_record_jsonb_domain,
is_valid my_feat.boolean_true_domain
);

ALTER TYPE my_feat.bitemporal_record
OWNER TO postgres;
So I have a table like this:
CREATE TABLE IF NOT EXISTS my_feat.try_bt_info
(
id bigint NOT NULL GENERATED BY DEFAULT AS IDENTITY ( INCREMENT 1 START
1 MINVALUE 1 MAXVALUE 9223372036854775807 CACHE 1 ),
bt_info my_feat.bitemporal_record,
CONSTRAINT try_bt_info_pk PRIMARY KEY (id)
)

TABLESPACE pg_default;

ALTER TABLE IF EXISTS my_feat.try_bt_info
OWNER to postgres;

When I try an insert asking to return bt_info I get an error; here is the
example:

do $$
declare
_bt_info my_feat.bitemporal_record;
begin
insert into my_feat.try_bt_info (bt_info)
values (row(
tstzrange(now(),'infinity','[)')
, tstzrange(now(),'infinity','[)')
, null
, '{"user_id":"alpha"}'
, true)::my_feat.bitemporal_record
)
returning bt_info into _bt_info;

raise notice '%', _bt_info;
end;
$$;

Error:
ERROR: malformed range literal: "("[""2023-07-06
23:50:30.991122+02"",infinity)","[""2023-07-06
23:50:30.991122+02"",infinity)",,"{""user_id"": ""alpha""}",t)"
SQL state: 22P02
Detail: Too many commas.

It seems to be a bug, but maybe there is a workaround; any idea?

-- 
Domenico L.

per stupire mezz'ora basta un libro di storia,
io cercai di imparare la Treccani a memoria... [F.d.A.]


Re: Moving data from huge table slow, min() query on indexed column taking 38s

2023-07-06 Thread Lorusso Domenico
Hello,
this is a standard problem during a bulk copy.

Here are some suggestions;
for example, disable the indexes.

The main issue is related to indexes, lock escalation and log writing.
In other DBMSs you would set logging off on the table, but PostgreSQL does
not seem to have this feature.

Anyway, using an explicit exclusive LOCK TABLE should prevent lock
escalation.

So: disable the indexes on the target table,
lock both tables exclusively,
insert the data,
truncate the old table.

If this doesn't work, you can consider using the COPY command.
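A minimal sketch of that sequence, using the table names from the quoted
message below (note that PostgreSQL has no "disable index"; heavy indexes on
the target table would have to be dropped and re-created around the load
instead):

begin;
lock table tbl_legacy, tbl in access exclusive mode;
insert into tbl select * from tbl_legacy;
truncate tbl_legacy;
commit;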







Il giorno gio 6 lug 2023 alle ore 18:12 Dimitrios Apostolou 
ha scritto:

> On Thu, 6 Jul 2023, Dimitrios Apostolou wrote:
>
> > + First I chose the method to DELETE...INSERT everything.
>
> Just to clarify, the query looked more like
>
>WITH rows AS ( DELETE FROM tbl_legacy RETURNING * )
>  INSERT INTO tbl
>SELECT * FROM rows;
>
> >   I noticed that the postgres process was growing without bounds, up to
> >   the point that it consumed almost all of swap space and I had to kill
> >   the command. Any idea why that happens?
>
> Also note that my settings for work_mem, temp_buffers, shared_buffers etc
> are all well below the RAM size and postgres has never shown unbound
> growth again. Postgres version is 15.2.
>
>
> Dimitris
>
>
>
>

-- 
Domenico L.

per stupire mezz'ora basta un libro di storia,
io cercai di imparare la Treccani a memoria... [F.d.A.]


Re: Strange behaviour on function

2023-07-05 Thread Lorusso Domenico
Erik, Adrian,

the trigger function is very long and complex (and not yet well documented),
but bind_action is a normal function callable from anywhere; the problem, as
discovered by Erik, was the null value contained in the hstore.

Anyway, when the ecosystem of functions works correctly I can share it, so
you can tell me how to improve it :-D


Il giorno mer 5 lug 2023 alle ore 17:33 Erik Wienhold  ha
scritto:

> > On 05/07/2023 17:16 CEST Adrian Klaver 
> wrote:
> >
> > https://www.postgresql.org/docs/current/plpgsql-trigger.html
> >
> > 1)
> > "A trigger function must return either NULL or a record/row value having
> > exactly the structure of the table the trigger was fired for."
> >
> > 2) I am not seeing where you use:
> >
> > "TG_ARGV[]
> >
> >  Data type array of text; the arguments from the CREATE TRIGGER
> > statement. The index counts from 0. Invalid indexes (less than 0 or
> > greater than or equal to tg_nargs) result in a null value."
> >
> > So I don't see how sqlstr is being set?
>
> Domenico did not provide the trigger definition, only function bind_action
> which he calls from a trigger function.  Also bind_action cannot be a
> trigger
> function because it does not return trigger.
>
> --
> Erik
>


-- 
Domenico L.

per stupire mezz'ora basta un libro di storia,
io cercai di imparare la Treccani a memoria... [F.d.A.]


Re: Strange behaviour on function

2023-07-05 Thread Lorusso Domenico
Yes!!
I solved it using quote_nullable on hrec[_k].

I was convinced that string || NULL --> string,

but I was wrong.

Thanks!!
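A minimal demonstration of the trap and of the fix:

select 'set :id:' || null;                                  -- NULL
select replace('set :id:', ':id:', null);                   -- NULL
select replace('set :id:', ':id:', quote_nullable(null));   -- 'set NULL'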



Il giorno mer 5 lug 2023 alle ore 15:53 Erik Wienhold  ha
scritto:

> > On 05/07/2023 14:23 CEST Lorusso Domenico 
> wrote:
> >
> > Hello guys,
> > here a simple function
> >
> > CREATE OR REPLACE FUNCTION bind_action(
> >  sqlstr text,
> >  hrec hstore)
> >  RETURNS text
> >  LANGUAGE 'plpgsql'
> >  COST 100
> >  immutable PARALLEL SAFE
> > AS $BODY$
> > declare
> >  _sqlstr text=sqlstr;
> >  _k text;
> >  _debug text;
> > begin
> >  _debug= '--Start' || _sqlstr;
> >  foreach _k in array akeys(hrec) loop
> >  _debug =_debug || format($$
> >  hstore: %s %s
> >  sqlStr:$$, _k, hrec[_k]);
> >  _sqlstr=replace(_sqlstr, ':'||_k||':', hrec[_k]);
> >  _debug =_debug || _sqlstr;
> >
> >  end loop;
> >
> >  raise notice 'final %',_debug;
> >  return _sqlstr;
> > end;
> > $BODY$;
> >
> > and here a simple test
> > do
> > $$
> > declare
> >  sqlstr text=':id::bignt,:surpa:,:disfa:';
> >  hs hstore;
> > begin
> >  hs['id']=789;
> >  hs['disfa']='';
> >  raise notice '%',bind_action(sqlstr,hs);
> > end;
> > $$;
> >
> > and it works.
> > But...
> > When I call this function in a function called by a trigger it down't
> work
> > _debug variable becomes null, also _sqlstr becomes null...
>
> Does the hstore contain nulls?  Function replace returns null in that case.
>
> Please show us the trigger, its function, and a reproducer.
>
> --
> Erik
>


-- 
Domenico L.

per stupire mezz'ora basta un libro di storia,
io cercai di imparare la Treccani a memoria... [F.d.A.]


Strange behaviour on function

2023-07-05 Thread Lorusso Domenico
Hello guys,
here is a simple function:

CREATE OR REPLACE FUNCTION bind_action(
sqlstr text,
hrec hstore)
RETURNS text
LANGUAGE 'plpgsql'
COST 100
immutable PARALLEL SAFE
AS $BODY$
declare
_sqlstr text=sqlstr;
_k text;
_debug text;
begin
_debug= '--Start' || _sqlstr;
foreach _k in array akeys(hrec) loop
_debug =_debug || format($$
hstore: %s %s
sqlStr:$$, _k,  hrec[_k]);
_sqlstr=replace(_sqlstr, ':'||_k||':', hrec[_k]);
_debug =_debug ||  _sqlstr;

end loop;

raise notice 'final %',_debug;
return _sqlstr;
end;
$BODY$;

and here is a simple test:
do
$$
declare
sqlstr text=':id::bignt,:surpa:,:disfa:';
hs hstore;
begin
hs['id']=789;
hs['disfa']='';
raise notice '%',bind_action(sqlstr,hs);
end;
$$;

and it works.
But...
When I call this function from a function called by a trigger it doesn't
work: the _debug variable becomes null, and _sqlstr becomes null too...

I can't identify what is happening...

any idea?


thanks

-- 
Domenico L.

per stupire mezz'ora basta un libro di storia,
io cercai di imparare la Treccani a memoria... [F.d.A.]


Re: foreign keys on multiple parent table

2023-06-23 Thread Lorusso Domenico
Ehm... I'm not sure I understood correctly :-D
In which way do you generate the columns?

Il giorno mer 21 giu 2023 alle ore 09:47 Dominique Devienne <
ddevie...@gmail.com> ha scritto:

> On Tue, Jun 20, 2023 at 10:47 PM Lorusso Domenico 
> wrote:
>
>> Could work, but is there a way to set a reference key over the uuid of
>> all the tables?
>>
>
> Yes, it's possible. We do it. There are several ways to emulate what I
> call "polymorphic" FKs.
>
> All approaches have pros and cons, the one we use relies on CHECK
> constraints and virtual/generated columns.
> It assumes all mutually exclusive FKs are of the same type. For ON DELETE
> CASCADE FKs, you have the primary
> "fk" concrete column, plus a secondary "fk$t" type column, telling you
> which FK is active, then N "fk$N" virtual columns
> whose expression automatically turn them ON (="fk") or OFF (is NULL) based
> on "fk$t"'s value. A CHECK constraint
> ensures only 0 or 1 "fk$N" column is ON, depending on "fk"'s NULLablity.
> For ON DELETE SET NULL, you need to
> reverse the concrete and virtual columns, so the constraint can *write*
> the "fk$N" columns, with more CHECK constraints.
>
> The technique works because FKs on virtual column works fine. As with all
> FKs with ON DELETE CASCADE, you want
> to index your FKs to avoid full scans. With partial indexes (since the FKs
> are mutually exclusive and full of NULLs), the
> storage overhead from multiplicating (virtual) columns and indexes can be
> limited (i.e. not as bad as N times the single index).
> Of course, this is tricky to pull-off correctly w/o automatic schema
> generation from a logic model. We have dozens of these PFKs,
> of various cardinality, maintaining those manually would be a nightmare.
> And when the polymorphism is too much,
> we give up on referential integrity on a case by case basis, to avoid
> bloating the tables and schema. It's a tradeof, as always.
>
> I'm sure I didn't invent this technique. But it sure isn't very common and
> it has been our "secret sauce" for a few years.
> On Oracle first, now on PostgreSQL. A Dalibo consultant once told me I
> should present it at a PGCon conference :).
>
> Good luck if you try that. FWIW, --DD
>
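Something like this, maybe? A minimal sketch of how I read the pattern,
using PostgreSQL STORED generated columns and two hypothetical parent tables
a and b (the real schema, as described above, is generated from a model):

create table a (id bigint primary key);
create table b (id bigint primary key);

create table child (
    id   bigint primary key,
    fk   bigint not null,                                -- the one concrete key value
    fk$t text   not null check (fk$t in ('a', 'b')),     -- which parent is active
    fk$1 bigint generated always as (case when fk$t = 'a' then fk end) stored
                references a (id) on delete cascade,
    fk$2 bigint generated always as (case when fk$t = 'b' then fk end) stored
                references b (id) on delete cascade
);

create index on child (fk$1) where fk$1 is not null;
create index on child (fk$2) where fk$2 is not null;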


-- 
Domenico L.

per stupire mezz'ora basta un libro di storia,
io cercai di imparare la Treccani a memoria... [F.d.A.]


Re: foreign keys on multiple parent table

2023-06-23 Thread Lorusso Domenico
Thank you Les for the link, it's a very good example; unfortunately my need
is more application-level (we need to store the application user, not the PG
one, the process that started it, etc.), but for sure I can take advantage
of it.

Il giorno mar 20 giu 2023 alle ore 23:01 Les  ha scritto:

> .
>>
>
>
>> From programming point of view and also to reduce the number of objects
>> in DB could be convinient create just an audit table with a structure like:
>>
>>- auditi id
>>- reference_uuid (the key of the main table)
>>- table_name
>>- list of audit data
>>
>>
> Could work, but is there a way to set a reference key over the uuid of all
>> the tables?
>>
>
> For existing solution, check out
> https://github.com/2ndQuadrant/audit-trigger
>
> Regarding fk constraints, a single fk constraint can only reference the
> primary key of a single table.
>
> But, if you want to be serious about audit logs, then you need to keep
> logs of deletions too, and for those, foreign key constraints would not
> work anyway.
>
> You may also want to consider bulk insert speed. Foreign key constraint
> checking can reduce speed.
>
>   Laszlo
>
>
>
>
>
>
>>

-- 
Domenico L.

per stupire mezz'ora basta un libro di storia,
io cercai di imparare la Treccani a memoria... [F.d.A.]


foreign keys on multiple parent table

2023-06-20 Thread Lorusso Domenico
Hello guys,
I have many tables, each representing a different concept.

For each record of each table I need to store extra information (think of
audit information, but more complex than a simple text).

The relation is 1:N; for each record there can be many audit records.

From a programming point of view, and also to reduce the number of objects
in the DB, it could be convenient to create just one audit table with a
structure like this (see the sketch after the list):

   - audit id
   - reference_uuid (the key of the main table)
   - table_name
   - list of audit data
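A minimal sketch of such a table (column types are assumptions):

create table audit (
    audit_id       bigint generated always as identity primary key,
    reference_uuid uuid   not null,   -- key of the record in the main table
    table_name     text   not null,   -- which main table the record belongs to
    audit_data     jsonb  not null    -- the actual audit payload
);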

It could work, but is there a way to set a reference key over the uuids of
all the tables?


-- 
Domenico L.

per stupire mezz'ora basta un libro di storia,
io cercai di imparare la Treccani a memoria... [F.d.A.]


Re: Dynamic binding issue

2023-06-13 Thread Lorusso Domenico
Because the function is generic and should do the same thing for many
different tables.
The only constraint is that each table must have a field with a specific
type.

Il mar 13 giu 2023, 01:31 Adrian Klaver  ha
scritto:

> On 6/12/23 15:13, Lorusso Domenico wrote:
> > Hello guys,
> > I'm a problem with dynamic sql.
> > I am trying to write a generic function that is able to read and update
> > a table based on some data coming from e previous record.
> > Here the example
> > _sqlStr=format('select *
> >from  %1$s.%2$s
> >where (' || array_to_string(_activeRec.pk_columns_list, ',') || ') in
> > (select ' ||
> >'row($1[''' || array_to_string(_activeRec.pk_columns_list,
> > '''],$1[''') || ''']))'
> >, _activeRec.name_of_schema, _activeRec.main_table);
> >
> > execute _sqlStr using oldRec into _rec;
> >
> > My problem is oldRec is a type record, so the substitution performed by
> > execute fails, because it can't recognize the field if the variable is
> > record and not a specific composite record type.
>
> Why not use a row type?:
>
>
> https://www.postgresql.org/docs/current/plpgsql-declarations.html#PLPGSQL-DECLARATION-ROWTYPES
>
> >
> > I suppose this is a recurrent question, but I can't find a solution...
> >
> > --
> > Domenico L.
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>


Dynamic binding issue

2023-06-12 Thread Lorusso Domenico
Hello guys,
I have a problem with dynamic SQL.
I am trying to write a generic function that is able to read and update a
table based on some data coming from a previous record.
Here is the example:
Here the example
_sqlStr=format('select *
  from  %1$s.%2$s
  where (' || array_to_string(_activeRec.pk_columns_list, ',') || ') in
(select ' ||
  'row($1[''' || array_to_string(_activeRec.pk_columns_list, '''],$1[''')
|| ''']))'
  , _activeRec.name_of_schema, _activeRec.main_table);

execute _sqlStr using oldRec into _rec;

My problem is that oldRec is of type record, so the substitution performed
by EXECUTE fails, because it can't resolve the fields when the variable is a
plain record and not a specific composite type.

I suppose this is a recurrent question, but I can't find a solution...

-- 
Domenico L.


Re: Question about where to deploy the business logics for data processing

2023-06-12 Thread Lorusso Domenico
Hi Nim,
well, this is a very particular scenario.
In a few words, these projects will never go live for production purposes,
but just to verify some hypotheses.

In this case it could be acceptable to generate the schema on the fly, but
it isn't easy to automate every aspect related to optimization
(partitioning, indexes and so on).

Coming to your last question, about where to put the data-manipulation
logic: again, in this case, minimizing the LAN traffic could be your main
goal, and this means putting the logic inside the DB.

Coming to your last question, where set the logic of data manipulation,
again, in this case, minimize the lan traffic could be your main goal, this
means logic inside the DB.


Il giorno ven 9 giu 2023 alle ore 18:34 Lorusso Domenico <
domenico@gmail.com> ha scritto:

> Uhm me need to start form 2 concepts:
>
>1. competence
>2. Network lag
>
> Competence: usually programmers aren't skilled enough about the
> architectures and the actual needs of each layer.
> This is a problem, because often programmers try to do something with what
> he already know (e.g. perform join in Java).
>
> A correct design requires to identify at least the data logic, the process
> logic, the business logic and the presentation logic.
>
> One of the most important goals of Data logic is to ensure the
> correctness of data from many point of view (all is impossible).
>
> That involve:
>
>- audit information
>- bitemporal management
>- strictly definition and verification of data (foreign key, checks,
>management of compatibility)
>- replicate consistently data for different usage
>- isolate access for actual needs
>- design
>
> So an application that requires changing the data model does not seem to
> be well designed...
>
> Network lag
> The first problem is latency, I must minimize the passage of data over the
> network.
> This means, for example, creating a service that allows the caller to
> choose only the information it needs.
> But it also means, to get all the information needed in a single call,
> design asynchronous service, use cache data physically near to the frontend
> or the middle layer.
>
> Based on these 2 concepts I suggest:
>
>- develop the Data logic near or inside the database;
>- design powerful and addictive api;
>- don't allow model change by the business logic
>- organize/copy data in jsonb with a powerful json schema to provide
>coherence through every layer
>- ensure a system to grant ACID features to your process.
>
>
>
> Il giorno ven 9 giu 2023 alle ore 05:22 Nim Li  ha
> scritto:
>
>> Hello.
>>
>> We have a PostgreSQL database with many tables, as well as foreign table,
>> dblink, triggers, functions, indexes, etc, for managing the business logics
>> of the data within the database.  We also have a custom table for the
>> purpose of tracking the slowly changing dimensions (type 2).
>>
>> Currently we are looking into using TypeORM (from Nest JS framework) to
>> connect to the database for creating a BE that provides web service.  Some
>> reasons of using TypeORM are that it can update the database schema without
>> any SQL codes, works very well with Git, etc.  And from what I am reading,
>> Git seems to work better with TypeORM, rather than handling individual
>> batch files with SQL codes (I still need to find out more about this)  Yet
>> I do not think the ORM concept deals with database specify functions, such
>> as dblink and/or trigger-function, etc, which handles the business logics
>> or any ETL automation within the database itself (I should read more about
>> this as well.)
>>
>> Anyway, in our team discussion, I was told that in modern programming
>> concept, the world is moving away from deploying programming logics within
>> the database (eg, by using PL/SQL).  Instead, the proper way should be to
>> deploy all the programming logics to the framework which is used to connect
>> to the database, such as NestJS in our case.  So, all we need in a database
>> should be only the schema (managed by ORM), and we should move all the
>> existing business logics (currently managed by things like the database
>> triggers, functions, dblink, etc.) to the Typescript codes within the
>> NestJS framework.
>>
>> I wonder if anyone in the community has gone through changes like this?
>> I mean ... moving the business logics from PL/SQL within the database to
>> the codes in NestJS framework, and reply on only the TypeORM to manage the
>> update of the database without any SQL codes?  Any thoughts about such a
>> change?
>>
>> Thank you!!
>>
>>
>
> --
> Domenico L.
>
> per stupire mezz'ora basta un libro di storia,
> io cercai di imparare la Treccani a memoria... [F.d.A.]
>


-- 
Domenico L.

per stupire mezz'ora basta un libro di storia,
io cercai di imparare la Treccani a memoria... [F.d.A.]


Re: Question about where to deploy the business logics for data processing

2023-06-09 Thread Lorusso Domenico
Uhm, we need to start from 2 concepts:

   1. competence
   2. Network lag

Competence: usually programmers aren't skilled enough in the architecture
and the actual needs of each layer.
This is a problem, because programmers often try to do things with what they
already know (e.g. performing a join in Java).

A correct design requires identifying at least the data logic, the process
logic, the business logic and the presentation logic.

One of the most important goals of the data logic is to ensure the
correctness of data from many points of view (all of them is impossible).

That involves:

   - audit information
   - bitemporal management
   - strict definition and verification of data (foreign keys, checks,
   management of compatibility)
   - replicating data consistently for different usages
   - isolating access to actual needs
   - design

So an application that requires changing the data model does not seem to be
well designed...

Network lag
The first problem is latency: I must minimize the movement of data over the
network.
This means, for example, creating a service that allows the caller to choose
only the information it needs.
But it also means getting all the information needed in a single call,
designing asynchronous services, and caching data physically near the
frontend or the middle layer.

Based on these 2 concepts I suggest:

   - develop the data logic near or inside the database;
   - design powerful and addictive APIs;
   - don't allow model changes from the business logic;
   - organize/copy data in jsonb with a robust JSON Schema to provide
   coherence through every layer;
   - ensure a system that grants ACID features to your process.



Il giorno ven 9 giu 2023 alle ore 05:22 Nim Li  ha
scritto:

> Hello.
>
> We have a PostgreSQL database with many tables, as well as foreign table,
> dblink, triggers, functions, indexes, etc, for managing the business logics
> of the data within the database.  We also have a custom table for the
> purpose of tracking the slowly changing dimensions (type 2).
>
> Currently we are looking into using TypeORM (from Nest JS framework) to
> connect to the database for creating a BE that provides web service.  Some
> reasons of using TypeORM are that it can update the database schema without
> any SQL codes, works very well with Git, etc.  And from what I am reading,
> Git seems to work better with TypeORM, rather than handling individual
> batch files with SQL codes (I still need to find out more about this)  Yet
> I do not think the ORM concept deals with database specify functions, such
> as dblink and/or trigger-function, etc, which handles the business logics
> or any ETL automation within the database itself (I should read more about
> this as well.)
>
> Anyway, in our team discussion, I was told that in modern programming
> concept, the world is moving away from deploying programming logics within
> the database (eg, by using PL/SQL).  Instead, the proper way should be to
> deploy all the programming logics to the framework which is used to connect
> to the database, such as NestJS in our case.  So, all we need in a database
> should be only the schema (managed by ORM), and we should move all the
> existing business logics (currently managed by things like the database
> triggers, functions, dblink, etc.) to the Typescript codes within the
> NestJS framework.
>
> I wonder if anyone in the community has gone through changes like this?  I
> mean ... moving the business logics from PL/SQL within the database to the
> codes in NestJS framework, and reply on only the TypeORM to manage the
> update of the database without any SQL codes?  Any thoughts about such a
> change?
>
> Thank you!!
>
>

-- 
Domenico L.

per stupire mezz'ora basta un libro di storia,
io cercai di imparare la Treccani a memoria... [F.d.A.]


Re: How to manipulate field in New record

2023-06-07 Thread Lorusso Domenico
Yes, I know; the trigger should be BEFORE INSERT, UPDATE and DELETE, and the
function decides what to manipulate based on TG_OP.

Il giorno mer 7 giu 2023 alle ore 15:57 Adrian Klaver <
adrian.kla...@aklaver.com> ha scritto:

> On 6/7/23 05:12, Lorusso Domenico wrote:
> > Hello,
> > Looking for a global solution I've write e trigger function that as
> > optional parameter (argv[0]) receive the name of a specific parameter.
> >
> > My need is to get the filed from NEW and OLD record manipulate and set
> > back the field with new value.
>
> A new row won't have the OLD RECORD.
>
> Read:
>
> https://www.postgresql.org/docs/current/plpgsql-trigger.html
>
> and look at the examples.
>
> >
> > Problem 1: how to get and set this field
> > Problem 2: the field has Composit type
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>

-- 
Domenico L.

per stupire mezz'ora basta un libro di storia,
io cercai di imparare la Treccani a memoria... [F.d.A.]


How to manipulate field in New record

2023-06-07 Thread Lorusso Domenico
Hello,
looking for a global solution, I've written a trigger function that
receives, as an optional parameter (argv[0]), the name of a specific field.

My need is to get that field from the NEW and OLD records, manipulate it and
set the field back with the new value.

Problem 1: how to get and set this field
Problem 2: the field has a composite type
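For reference, a rough sketch of one possible route (field and type names
are hypothetical, and it assumes the composite field round-trips through
jsonb cleanly) going through to_jsonb / jsonb_populate_record:

create or replace function touch_field() returns trigger
language plpgsql as $$
declare
    fld text  := TG_ARGV[0];     -- e.g. 'bt_info', the composite column name
    j   jsonb := to_jsonb(NEW);  -- whole row; the composite field becomes a nested object
begin
    -- read/manipulate the nested field, then write the row back
    j := jsonb_set(j, array[fld, 'is_valid'], 'true'::jsonb);
    NEW := jsonb_populate_record(NEW, j);
    return NEW;
end;
$$;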


Re: Is there any good optimization solution to improve the query efficiency?

2023-06-06 Thread Lorusso Domenico
 Seq Scan on tbl_pov t_pov
> (cost=0.00..51380.09 rows=330109 width=9) (actual time=0.029..124.632
> rows=330109 loops=1)
>
>   ->  Index Scan using tbl_cust_pk on tbl_cust
> t_cust_1  (cost=0.43..18.42 rows=2 width=23) (actual time=0.011..0.012
> rows=1 loops=13)
>
> Index Cond: (RSNO = t_res_1.RSNO)
>
> Filter: ((dispseq <> ''::numeric) AND
> ((stsflg)::text = 'T'::text) AND ((KFIX)::text = '0'::text))
>
> Rows Removed by Filter: 2
>
> ->  Index Scan using tbl_cust_pk on tbl_cust t_cust
> (cost=0.43..8.31 rows=1 width=61) (actual time=0.003..0.003 rows=1 loops=11)
>
>   Index Cond: ((RSNO = t_cust_1.RSNO) AND (gstseq =
> (min(t_cust_1.gstseq
>
>   Filter: ((t_cust_1.KNO)::text = (KNO)::text)
>
>   ->  Hash  (cost=9.44..9.44 rows=472 width=108) (actual
> time=216.361..216.361 rows=15123 loops=1)
>
> Buckets: 16384 (originally 1024)  Batches: 1 (originally 1)
> Memory Usage: 882kB
>
> ->  CTE Scan on t_res  (cost=0.00..9.44 rows=472 width=108)
> (actual time=55.591..211.698 rows=15123 loops=1)
>
> Planning Time: 1.417 ms
>
> Execution Time: 411.019 ms
>
>
> 
>
>
>
>
>
>
> At 2023-06-05 22:53:56, "Lorusso Domenico"  wrote:
>
> try this (there is some comment)
>
> with t_res as (
> select RSNO,  KNO
> from TBL_RES
> where TBL_RES.CID >= to_date('2022/07/01', 'YYYY/MM/DD')
> and TBL_RES.CID <= to_date('2022/07/31', 'YYYY/MM/DD')
> and TBL_RES.COD >= to_date('2022/07/01', 'YYYY/MM/DD')
> and TBL_RES.COD <= to_date('2022/07/31', 'YYYY/MM/DD')
> ), t_pov2 as (
> select T_CUST.RSNO ,
> T_CUST.KNO ,
> MIN(T_CUST.GSTSEQ) GSTSEQ
> from T_RES -- this is tbl_res already filtered by date
> inner join TBL_CUST T_CUST on T_RES.RSNO = T_CUST.RSNO
> inner join TBL_POV T_POV on T_POV.CRSNO = T_RES.CRSNO -- why do you use this
> table? It doesn't seem to be used to extract data. Are you trying to
> extract only rows of T_RES that have at least one record in T_POV? In that case
> it could work better to move this join into the first WITH (using DISTINCT or
> GROUP BY to ensure there is just one record per rsno and kno)
> where T_CUST.STSFLG = 'T'
> and T_CUST.DISPSEQ <> 
> AND T_CUST.KFIX = '0'
> group by T_CUST.RSNO , T_CUST.KNO
> ), t_pov3 as (
> select T_CUST.RSNO RSNO2 ,
> T_CUST.KNO ,
> T_CUST.AGE ,
> T_CUST.GST
> from TBL_CUST T_CUST
> inner join t_pov2 on T_POV2.RSNO = T_CUST.RSNO
> and T_POV2.KNO = T_CUST.KNO
> and T_POV2.GSTSEQ = T_CUST.GSTSEQ
> )
> select *
> from TBL_RES
> left outer join t_pov3 on TBL_RES.RSNO = T_POV3.RSNO2
> and TBL_RES.KNO = T_POV3.KNO
>
> Il giorno lun 5 giu 2023 alle ore 12:06 gzh  ha scritto:
>
>> Thank you very much for taking the time to reply to my question.
>>
>> I followed your suggestion and rewrote the SQL using Common Table
>> Expression (CTE).
>>
>> Unfortunately, there was no significant improvement in performance.
>>
>>
>>
>> At 2023-06-05 17:47:25, "Lorusso Domenico" 
>> wrote:
>>
>> Hello,
>> In many cases, rewriting the query as a properly structured WITH statement
>> (CTE) can solve the issue.
>> If you need a join, always write it as an explicit JOIN:
>> where T_POV2.RSNO = T_CUST.RSNO
>> and T_POV2.KNO = T_CUST.KNO
>> and T_POV2.GSTSEQ = T_CUST.GSTSEQ)
>> this is an inner join.
>>
>> I mean something like this
>> with t_pov2 as (
>> select T_CUST.RSNO ,
>> T_CUST.KNO ,
>> MIN(T_CUST.GSTSEQ) GSTSEQ
>> from TBL_CUST T_CUST ,
>> TBL_POV T_POV ,
>> TBL_RES T_RES
>> where T_CUST.STSFLG = 'T'
>> and T_CUST.DISPSEQ <> 
>> AND T_CUST.KFIX = '0'
>> and T_POV.CRSNO = T_RES.CRSNO
>> and T_RES.RSNO = T_CUST.RSNO
>> group by T_CUST.RSNO , T_CUST.KNO
>> ), t_pov3 as (
>> select T_CUST.RSNO RSNO2 ,
>> T_CUST.KNO ,
>> T_CUST.AGE ,
>> T_CUST.GST
>> from TBL_CUST T_CUST
>> inner join t_pov2 on T_POV2.RSNO = T_CUST.RSNO
>> and T_POV2.KNO = T_CUST.KNO
>> and T_POV2.GSTSEQ = T_CUST.GSTSEQ
>> )
>>
>> select *
>> from TBL_RES
>> left outer join t_pov3 on TBL_RES.RSNO = T_POV3.RSNO2
>> and TBL_RES.KNO = T_POV3.KNO
>> where TBL_RES.CID >= to_date('2022/07/01', 'YYYY/MM/DD')
>> and TBL_RES.CID <= to_da

Re: Composite type: Primary Key and validation

2023-06-05 Thread Lorusso Domenico
Thanks Adrian, my problem is that I have to use Cloud SQL, so I can't install
extensions not verified by Google...

Il giorno lun 5 giu 2023 alle ore 17:17 Adrian Klaver <
adrian.kla...@aklaver.com> ha scritto:

> On 6/5/23 08:06, Lorusso Domenico wrote:
> > Thank's, you are right, I've the same doubts.
> > A composite type is useful because I've to add all these information on
> > many tables and because it more easy to pass all these informations to
> > functions that have to ensure the right format and evaluation.
>
> I do that with a Sqitch(https://sqitch.org/) template, the relevant part
> being:
>
> ...
>
> CREATE TABLE [% IF schema %][% schema %][% ELSE %]schema_name[% END
> %].[% IF table %][% table %][% ELSE %]table_name[% END %](
>  line_id integer GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
>
>  ts_insert   timestamptz NOT NULL DEFAULT now(),
>  ts_update   timestamptz,
>  user_insert varchar(20) NOT NULL DEFAULT "session_user"(),
>  user_update varchar(20)
> );
>
> ...
>
> I then add the table specific fields after Sqitch creates the deploy
> script from the template.
>
> >
> > Talking about the first point, I could use "table inheritance", but I'd have
> > to reset all constraints for each table :-(
> >
>
> > --
> > Domenico L.
> >
> > per stupire mezz'ora basta un libro di storia,
> > io cercai di imparare la Treccani a memoria... [F.d.A.]
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>

-- 
Domenico L.

per stupire mezz'ora basta un libro di storia,
io cercai di imparare la Treccani a memoria... [F.d.A.]


Re: Composite type: Primary Key and validation

2023-06-05 Thread Lorusso Domenico
Thanks, you are right, I have the same doubts.
A composite type is useful because I have to add all this information to
many tables, and because it is easier to pass all this information to
functions that have to ensure the right format and evaluation.

Talking about the first point, I could use "table inheritance", but I'd have to
reset all constraints for each table :-( (a sketch of the options is below)
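For reference, a sketch of the two options (names are illustrative). With
INHERITS the children get the columns, defaults, NOT NULL and CHECK
constraints, but primary keys, unique constraints and foreign keys still have
to be re-declared on each child. LIKE ... INCLUDING ALL instead takes a
one-time copy of column definitions, defaults, CHECK constraints and indexes
(foreign keys are not copied).

-- Option 1: inheritance of a common audit table.
CREATE TABLE audit_base (
    my_audit my_type NOT NULL
);
CREATE TABLE orders (
    order_id bigint PRIMARY KEY,   -- has to be declared on the child anyway
    descr    text
) INHERITS (audit_base);

-- Option 2: a template table plus LIKE ... INCLUDING ALL.
CREATE TABLE orders2 (
    LIKE audit_base INCLUDING ALL,
    order_id bigint PRIMARY KEY,
    descr    text
);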

Il giorno lun 5 giu 2023 alle ore 16:02 Laurenz Albe <
laurenz.a...@cybertec.at> ha scritto:

> On Mon, 2023-06-05 at 11:49 +0200, Lorusso Domenico wrote:
> > I've a couple of questions about composite type.
> > Suppose this composite type:
> > CREATE TYPE my_type AS (
> > user_ts_start My_start_timestamp,
> > user_ts_end My_end_timestamp,
> > db_ts_start My_start_timestamp,
> > db_ts_end My_end_timestamp,
> > audit_record jsonb
> > );
> > My_start_timestamp is a domain of timestamp with default as now().
> > My_end_timestamp is a domain of timestamp with default as infinite
> >
> >    1. May I use user_ts_start and/or db_ts_start as part of the Primary Key
> of a table that contains a field of my_type?
> >    2. to add an overall check constraint on the entire composite type,
> could it be a valid
> >   approach to create a domain based on my_type and add a custom
> function to validate it? (check_my_type(VALUE)).
> >   In this way I have a domain over the composite type that contains other
> domains... what do you think?
>
> Avoid using composite types as data types for a table column.
> It adds complexity for no clear gain.
>
> Yours,
> Laurenz Albe
>


-- 
Domenico L.

per stupire mezz'ora basta un libro di storia,
io cercai di imparare la Treccani a memoria... [F.d.A.]


Re: Composite type: Primary Key and validation

2023-06-05 Thread Lorusso Domenico
Thanks a lot, I'll take care of it.
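A minimal sketch of the difference mentioned below (run inside one explicit
transaction; pg_sleep is only there to make the gap visible):

BEGIN;
SELECT now() AS txn_start, clock_timestamp() AS wall_clock;  -- nearly identical here
SELECT pg_sleep(2);
SELECT now() AS txn_start, clock_timestamp() AS wall_clock;  -- now() has not moved,
                                                             -- clock_timestamp() is ~2s later
COMMIT;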

Il giorno lun 5 giu 2023 alle ore 16:18 Ron  ha
scritto:

> On 6/5/23 09:02, Laurenz Albe wrote:
> > On Mon, 2023-06-05 at 11:49 +0200, Lorusso Domenico wrote:
> >> I've a couple of questions about composite type.
> >> Suppose this composite type:
> >> CREATE TYPE my_type AS (
> >>  user_ts_start My_start_timestamp,
> >>  user_ts_end My_end_timestamp,
> >>  db_ts_start My_start_timestamp,
> >>  db_ts_end My_end_timestamp,
> >>  audit_record jsonb
> >> );
> >> My_start_timestamp is a domain of timestamp with default as now().
> >> My_end_timestamp is a domain of timestamp with default as infinite
> >>
> >> 1. May I use user_ts_start and/or db_ts_start as part of the Primary
> Key of a table that contains a field of my_type?
> >> 2. to add an overall check constraint on the entire composite type,
> could it be a valid
> >>approach to create a domain based on my_type and add a custom
> function to validate it? (check_my_type(VALUE)).
> >>In this way I have a domain over the composite type that contains other
> domains... what do you think?
> > Avoid using composite types as data types for a table column.
> > It adds complexity for no clear gain.
>
> Isn't now() also a bad idea, since it's the "now" at the start of the
> transaction?
>
> Better to use clock_timestamp().
>
> --
> Born in Arizona, moved to Babylonia.
>
>
>

-- 
Domenico L.

per stupire mezz'ora basta un libro di storia,
io cercai di imparare la Treccani a memoria... [F.d.A.]


Re: Is there any good optimization solution to improve the query efficiency?

2023-06-05 Thread Lorusso Domenico
try this (there is some comment)

with t_res as (
select RSNO,  KNO
from TBL_RES
where TBL_RES.CID >= to_date('2022/07/01', 'YYYY/MM/DD')
and TBL_RES.CID <= to_date('2022/07/31', 'YYYY/MM/DD')
and TBL_RES.COD >= to_date('2022/07/01', 'YYYY/MM/DD')
and TBL_RES.COD <= to_date('2022/07/31', 'YYYY/MM/DD')
), t_pov2 as (
select T_CUST.RSNO ,
T_CUST.KNO ,
MIN(T_CUST.GSTSEQ) GSTSEQ
from T_RES -- this is tbl_res already filtered by date
inner join TBL_CUST T_CUST on T_RES.RSNO = T_CUST.RSNO
inner join TBL_POV T_POV on T_POV.CRSNO = T_RES.CRSNO -- why do you use this
table? It doesn't seem to be used to extract data. Are you trying to
extract only rows of T_RES that have at least one record in T_POV? In that case
it could work better to move this join into the first WITH (using DISTINCT or
GROUP BY to ensure there is just one record per rsno and kno; a sketch of that
variant follows the query)
where T_CUST.STSFLG = 'T'
and T_CUST.DISPSEQ <> 
AND T_CUST.KFIX = '0'
group by T_CUST.RSNO , T_CUST.KNO
), t_pov3 as (
select T_CUST.RSNO RSNO2 ,
T_CUST.KNO ,
T_CUST.AGE ,
T_CUST.GST
from TBL_CUST T_CUST
inner join t_pov2 on T_POV2.RSNO = T_CUST.RSNO
and T_POV2.KNO = T_CUST.KNO
and T_POV2.GSTSEQ = T_CUST.GSTSEQ
)
select *
from TBL_RES
left outer join t_pov3 on TBL_RES.RSNO = T_POV3.RSNO2
and TBL_RES.KNO = T_POV3.KNO
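If T_POV really is only an existence filter (an assumption, as noted in the
comment above), the join could be moved into the first CTE as in this sketch,
so the later steps work on an already reduced set:

with t_res as (
    select r.RSNO, r.KNO
    from TBL_RES r
    where r.CID >= to_date('2022/07/01', 'YYYY/MM/DD')
      and r.CID <= to_date('2022/07/31', 'YYYY/MM/DD')
      and r.COD >= to_date('2022/07/01', 'YYYY/MM/DD')
      and r.COD <= to_date('2022/07/31', 'YYYY/MM/DD')
      and exists (select 1
                  from TBL_POV p
                  where p.CRSNO = r.CRSNO)   -- existence check instead of a join
), t_pov2 as (
    select c.RSNO, c.KNO, min(c.GSTSEQ) as GSTSEQ
    from t_res
    join TBL_CUST c on c.RSNO = t_res.RSNO
    where c.STSFLG = 'T'
      and c.KFIX = '0'   -- the DISPSEQ filter from the original query goes here too
    group by c.RSNO, c.KNO
)
select * from t_pov2;    -- t_pov3 and the final left join stay as in the query above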

Il giorno lun 5 giu 2023 alle ore 12:06 gzh  ha scritto:

> Thank you very much for taking the time to reply to my question.
>
> I followed your suggestion and rewrote the SQL using Common Table
> Expression (CTE).
>
> Unfortunately, there was no significant improvement in performance.
>
>
>
> At 2023-06-05 17:47:25, "Lorusso Domenico"  wrote:
>
> Hello,
> In many cases, rewriting the query as a properly structured WITH statement
> (CTE) can solve the issue.
> If you need a join, always write it as an explicit JOIN:
> where T_POV2.RSNO = T_CUST.RSNO
> and T_POV2.KNO = T_CUST.KNO
> and T_POV2.GSTSEQ = T_CUST.GSTSEQ)
> this is an inner join.
>
> I mean something like this
> with t_pov2 as (
> select T_CUST.RSNO ,
> T_CUST.KNO ,
> MIN(T_CUST.GSTSEQ) GSTSEQ
> from TBL_CUST T_CUST ,
> TBL_POV T_POV ,
> TBL_RES T_RES
> where T_CUST.STSFLG = 'T'
> and T_CUST.DISPSEQ <> 
> AND T_CUST.KFIX = '0'
> and T_POV.CRSNO = T_RES.CRSNO
> and T_RES.RSNO = T_CUST.RSNO
> group by T_CUST.RSNO , T_CUST.KNO
> ), t_pov3 as (
> select T_CUST.RSNO RSNO2 ,
> T_CUST.KNO ,
> T_CUST.AGE ,
> T_CUST.GST
> from TBL_CUST T_CUST
> inner join t_pov2 on T_POV2.RSNO = T_CUST.RSNO
> and T_POV2.KNO = T_CUST.KNO
> and T_POV2.GSTSEQ = T_CUST.GSTSEQ
> )
>
> select *
> from TBL_RES
> left outer join t_pov3 on TBL_RES.RSNO = T_POV3.RSNO2
> and TBL_RES.KNO = T_POV3.KNO
> where TBL_RES.CID >= to_date('2022/07/01', 'YYYY/MM/DD')
> and TBL_RES.CID <= to_date('2022/07/31', 'YYYY/MM/DD')
> and TBL_RES.COD >= to_date('2022/07/01', 'YYYY/MM/DD')
> and TBL_RES.COD <= to_date('2022/07/31', 'YYYY/MM/DD')
>
> but if tbl_res contains fewer records, a good idea is to start from this table
> and join the others to it
>
>
> Il giorno lun 5 giu 2023 alle ore 08:57 gzh  ha scritto:
>
>> Hi everyone,
>> I'm running into some performance issues with my SQL query.
>> The following SQL query is taking a long time to execute.
>>
>> Execution Plan:
>> explain analyse
>> select * from TBL_RES
>> left outer join(select T_CUST.RSNO RSNO2 ,
>> T_CUST.KNO ,
>> T_CUST.AGE ,
>> T_CUST.GST
>> from TBL_CUST T_CUST ,
>> (select T_CUST.RSNO ,
>> T_CUST.KNO ,
>> MIN(T_CUST.GSTSEQ) GSTSEQ
>> from TBL_CUST T_CUST ,
>> TBL_POV T_POV ,
>> TBL_RES T_RES
>> where T_CUST.STSFLG = 'T'
>> and T_CUST.DISPSEQ <> 
>> AND T_CUST.KFIX = '0'
>> and T_POV.CRSNO = T_RES.CRSNO
>> and T_RES.RSNO = T_CUST.RSNO
>> group by T_CUST.RSNO , T_CUST.KNO) T_POV2
>> where T_POV2.RSNO = T_CUST.RSNO
>> and T_POV2.KNO = T_CUST.KNO
>> and T_POV2.GSTSEQ = T_CUST.GSTSEQ) T_POV3 on TBL_RES.RSNO = T_POV3.RSNO2
>> and TBL_RES.KNO = T_POV3.KNO
>> where TBL_RES.CID >= to_date('2022/07/01', 'YYYY/MM/DD')
>> and TBL_RES.CID <= to_date('2022/07/31', 'YYYY/MM/DD')
>> and TBL_RES.COD >= to_date('2022/07/01', 'YYYY/MM/DD')
>> and TBL_RES.COD <= to_date('2022/07/31', 'YYYY/MM/DD')
>> - Execution Plan -
>> Nested Loop Left Join  (cost=254388.44..452544.70 rows=473 width=3545)
>> (actual time=3077.312..996048.714 rows=15123 loops=1)
>>   Join Filter: ((TBL_RES.RSN

Composite type: Primary Key and validation

2023-06-05 Thread Lorusso Domenico
Hello guys,
I've a couple of questions about composite type.
Suppose this composite type:
CREATE TYPE my_type AS (
user_ts_start My_start_timestamp,
user_ts_end My_end_timestamp,
db_ts_start My_start_timestamp,
db_ts_end My_end_timestamp,
audit_record jsonb
);
My_start_timestamp is a domain over timestamp with a default of now().
My_end_timestamp is a domain over timestamp with a default of infinity.


   1. May I use user_ts_start and/or db_ts_start as part of the Primary Key of
   a table that contains a field of my_type?
   2. To add an overall check constraint on the entire composite type,
   could it be a valid approach to create a domain based on my_type and add a
   custom function to validate it? (check_my_type(VALUE)). In this way I have a
   domain over the composite type that contains other domains... what do you
   think? (a minimal sketch is below)
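Regarding point 2, a minimal sketch of what that could look like (PostgreSQL 11
or later supports domains over composite types; the body of the check function
here is only an illustrative assumption):

-- Hypothetical validation function for the composite type.
CREATE OR REPLACE FUNCTION check_my_type(v my_type) RETURNS boolean
LANGUAGE sql IMMUTABLE AS $$
    SELECT (v).user_ts_start <= (v).user_ts_end
       AND (v).db_ts_start   <= (v).db_ts_end;
$$;

-- Domain over the composite type; the CHECK runs whenever a value of the
-- domain is created or assigned.
CREATE DOMAIN my_type_checked AS my_type
    CHECK (VALUE IS NULL OR check_my_type(VALUE));

-- Usage sketch: a table column of the domain type.
CREATE TABLE demo (
    id       bigint PRIMARY KEY,
    my_audit my_type_checked
);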


thanks in advance

-- 
Domenico L.

per stupire mezz'ora basta un libro di storia,
io cercai di imparare la Treccani a memoria... [F.d.A.]


Re: Is there any good optimization solution to improve the query efficiency?

2023-06-05 Thread Lorusso Domenico
Hello,
In many cases, rewriting the query as a properly structured WITH statement
(CTE) can solve the issue.
If you need a join, always write it as an explicit JOIN:
where T_POV2.RSNO = T_CUST.RSNO
and T_POV2.KNO = T_CUST.KNO
and T_POV2.GSTSEQ = T_CUST.GSTSEQ)
this is an inner join.

I mean something like this
with t_pov2 as (
select T_CUST.RSNO ,
T_CUST.KNO ,
MIN(T_CUST.GSTSEQ) GSTSEQ
from TBL_CUST T_CUST ,
TBL_POV T_POV ,
TBL_RES T_RES
where T_CUST.STSFLG = 'T'
and T_CUST.DISPSEQ <> 
AND T_CUST.KFIX = '0'
and T_POV.CRSNO = T_RES.CRSNO
and T_RES.RSNO = T_CUST.RSNO
group by T_CUST.RSNO , T_CUST.KNO
), t_pov3 as (
select T_CUST.RSNO RSNO2 ,
T_CUST.KNO ,
T_CUST.AGE ,
T_CUST.GST
from TBL_CUST T_CUST
inner join t_pov2 on T_POV2.RSNO = T_CUST.RSNO
and T_POV2.KNO = T_CUST.KNO
and T_POV2.GSTSEQ = T_CUST.GSTSEQ
)

select *
from TBL_RES
left outer join t_pov3 on TBL_RES.RSNO = T_POV3.RSNO2
and TBL_RES.KNO = T_POV3.KNO
where TBL_RES.CID >= to_date('2022/07/01', 'YYYY/MM/DD')
and TBL_RES.CID <= to_date('2022/07/31', 'YYYY/MM/DD')
and TBL_RES.COD >= to_date('2022/07/01', 'YYYY/MM/DD')
and TBL_RES.COD <= to_date('2022/07/31', 'YYYY/MM/DD')

but if tbl_res contains fewer records, a good idea is to start from this table
and join the others to it


Il giorno lun 5 giu 2023 alle ore 08:57 gzh  ha scritto:

> Hi everyone,
> I'm running into some performance issues with my SQL query.
> The following SQL query is taking a long time to execute.
>
> Execution Plan:
> explain analyse
> select * from TBL_RES
> left outer join(select T_CUST.RSNO RSNO2 ,
> T_CUST.KNO ,
> T_CUST.AGE ,
> T_CUST.GST
> from TBL_CUST T_CUST ,
> (select T_CUST.RSNO ,
> T_CUST.KNO ,
> MIN(T_CUST.GSTSEQ) GSTSEQ
> from TBL_CUST T_CUST ,
> TBL_POV T_POV ,
> TBL_RES T_RES
> where T_CUST.STSFLG = 'T'
> and T_CUST.DISPSEQ <> 
> AND T_CUST.KFIX = '0'
> and T_POV.CRSNO = T_RES.CRSNO
> and T_RES.RSNO = T_CUST.RSNO
> group by T_CUST.RSNO , T_CUST.KNO) T_POV2
> where T_POV2.RSNO = T_CUST.RSNO
> and T_POV2.KNO = T_CUST.KNO
> and T_POV2.GSTSEQ = T_CUST.GSTSEQ) T_POV3 on TBL_RES.RSNO = T_POV3.RSNO2
> and TBL_RES.KNO = T_POV3.KNO
> where TBL_RES.CID >= to_date('2022/07/01', 'YYYY/MM/DD')
> and TBL_RES.CID <= to_date('2022/07/31', 'YYYY/MM/DD')
> and TBL_RES.COD >= to_date('2022/07/01', 'YYYY/MM/DD')
> and TBL_RES.COD <= to_date('2022/07/31', 'YYYY/MM/DD')
> - Execution Plan -
> Nested Loop Left Join  (cost=254388.44..452544.70 rows=473 width=3545)
> (actual time=3077.312..996048.714 rows=15123 loops=1)
>   Join Filter: ((TBL_RES.RSNO = T_CUST.RSNO) AND ((TBL_RES.KNO)::text =
> (T_CUST.KNO)::text))
>   Rows Removed by Join Filter: 4992268642
>   ->  Gather  (cost=1000.00..58424.35 rows=473 width=3489) (actual
> time=0.684..14.158 rows=15123 loops=1)
> Workers Planned: 2
> Workers Launched: 2
> ->  Parallel Seq Scan on TBL_RES  (cost=0.00..57377.05 rows=197
> width=3489) (actual time=0.096..279.504 rows=5041 loops=3)
>   Filter: ((CID >= to_date('2022/07/01'::text,
> '/MM/DD'::text)) AND (CID <= to_date('2022/07/31'::text,
> '/MM/DD'::text)) AND (COD >= to_date('2022/07/01'::text,
> '/MM/DD'::text)) AND (COD <= to_date('2022/07/31'::text,
> '/MM/DD'::text)))
>   Rows Removed by Filter: 161714
>   ->  Materialize  (cost=253388.44..394112.08 rows=1 width=56) (actual
> time=0.081..26.426 rows=330111 loops=15123)
> ->  Hash Join  (cost=253388.44..394112.07 rows=1 width=56) (actual
> time=1197.484..2954.084 rows=330111 loops=1)
>   Hash Cond: ((T_CUST.RSNO = T_CUST_1.RSNO) AND
> ((T_CUST.KNO)::text = (T_CUST_1.KNO)::text) AND (T_CUST.gstseq =
> (min(T_CUST_1.gstseq
>   ->  Seq Scan on TBL_CUST T_CUST  (cost=0.00..79431.15
> rows=2000315 width=61) (actual time=0.015..561.005 rows=2000752 loops=1)
>   ->  Hash  (cost=246230.90..246230.90 rows=262488 width=50)
> (actual time=1197.025..1209.957 rows=330111 loops=1)
> Buckets: 65536  Batches: 8  Memory Usage: 2773kB
> ->  Finalize GroupAggregate
> (cost=205244.84..243606.02 rows=262488 width=50) (actual
> time=788.552..1116.074 rows=330111 loops=1)
>   Group Key: T_CUST_1.RSNO, T_CUST_1.KNO
>   ->  Gather Merge  (cost=205244.84..238964.80
> rows=268846 width=50) (actual time=788.547..982.479 rows=330111 loops=1)
> Workers Planned: 2
> Workers Launched: 1
> ->  Partial GroupAggregate
> (cost=204244.81..206933.27 rows=134423 width=50) (actual
> time=784.032..900.979 rows=165056 loops=2)
>   Group Key: T_CUST_1.RSNO,
> T_CUST_1.KNO
>   ->  Sort  (cost=204244.81..204580.87
> rows=134423 width=23) (actual time=784.019..833.791 rows=165061 loops=2)
> Sort Key: T_CUST_1.RSNO,
> T_CUST_1.KNO
> Sort Method: external merge