Re: Index creation

2022-06-20 Thread Дмитрий Иванов
Yes, you are right. The index usage data I presented comes from recursive
queries that check the integrity of hierarchical structures from the bottom
up. Your explanation has clarified what is going on. Thank you.
My experiments with indexes were prompted by significant variance (1-180 ms)
in these operations, which appeared when I switched from version 12 to 14:
checking time increased by ~250% with the existing implementation, and by
~40% after I rewrote the functions to run as dynamic SQL. That rewrite was
motivated by the obvious correlation between the level of variance and the
initial data set obtained when the non-dynamic function was first called. I
don't think my communication skills will let me describe the problem
properly, but the information I received was useful. Thank you.
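For readers unfamiliar with the kind of bottom-up integrity check mentioned above, a minimal sketch follows. The table and column names (class, id, id_parent) and the depth guard are hypothetical illustrations, not the actual schema from this thread:

```sql
-- Hypothetical sketch of a bottom-up hierarchy check with a recursive CTE.
-- Names (class, id, id_parent) and the starting id are illustrative only.
WITH RECURSIVE ancestry AS (
    SELECT c.id, c.id_parent, 1 AS depth
    FROM class c
    WHERE c.id = 650              -- start from a leaf node
    UNION ALL
    SELECT p.id, p.id_parent, a.depth + 1
    FROM class p
    JOIN ancestry a ON p.id = a.id_parent
    WHERE a.depth < 100           -- guard against accidental cycles
)
SELECT * FROM ancestry;           -- a valid chain terminates at a root row
```

Each recursion step probes the parent by id, which is why such checks lean so heavily on the index discussed in this thread.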
--
Regards, Dmitry!


Mon, 20 Jun 2022 at 23:23, Jeff Janes:

> On Mon, Jun 20, 2022 at 1:17 AM Дмитрий Иванов 
> wrote:
>
>> Your statement seems obvious to me, but what I observe doesn't look like a
>> deliberate choice. It appears to be better to keep a lighter
>> general-purpose index than to create a targeted covering index for a
>> particular kind of operation.
>>
>
> If both indexes are expected to be hit only once in the query and return
> only one row, their expected costs will be the same.  In this case, the tie
> is broken arbitrarily, and that often means the most-recently created index
> will get chosen.
>
> As the expected number of leaf page accesses in a given query goes up, the
> smaller index will start to look less expensive.
>
> Cheers,
>
> Jeff
>
>>


Re: Index creation

2022-06-19 Thread Дмитрий Иванов
Your statement seems obvious to me, but what I observe doesn't look like a
deliberate choice. It appears to be better to keep a lighter general-purpose
index than to create a targeted covering index for a particular kind of
operation.

DROP INDEX IF EXISTS bpd.index_class_prop_id_prop_inherit;
CREATE INDEX IF NOT EXISTS index_class_prop_id_prop_inherit
ON bpd.class_prop USING btree
(id_prop_inherit ASC NULLS LAST, timestamp_class_inherit ASC NULLS LAST)
TABLESPACE pg_default;



DROP INDEX IF EXISTS bpd.index_class_prop_id_prop_inherit_covering;
CREATE INDEX IF NOT EXISTS index_class_prop_id_prop_inherit_covering
ON bpd.class_prop USING btree
(id_prop_inherit ASC NULLS LAST, timestamp_class_inherit ASC NULLS LAST)
INCLUDE(id, id_class, inheritance)
TABLESPACE pg_default;

Uchet=# SELECT relid, indexrelid, schemaname, relname, indexrelname,
Uchet-#        idx_scan, idx_tup_read, idx_tup_fetch
Uchet-# FROM bpd.cfg_v_stat_all_indexes
Uchet-# WHERE indexrelname LIKE 'index_class_prop_id_prop_inherit%';
 relid | indexrelid | schemaname |  relname   |               indexrelname                | idx_scan | idx_tup_read | idx_tup_fetch
-------+------------+------------+------------+-------------------------------------------+----------+--------------+---------------
 17572 |      40036 | bpd        | class_prop | index_class_prop_id_prop_inherit          |        0 |            0 |             0
 17572 |      40037 | bpd        | class_prop | index_class_prop_id_prop_inherit_covering |     7026 |         7026 |             0
(2 rows)


DROP INDEX IF EXISTS bpd.index_class_prop_id_prop_inherit_covering;

CREATE INDEX IF NOT EXISTS index_class_prop_id_prop_inherit_covering
ON bpd.class_prop USING btree
(id_prop_inherit ASC NULLS LAST, timestamp_class_inherit ASC NULLS LAST)
INCLUDE(id, id_class, inheritance)
TABLESPACE pg_default;

DROP INDEX IF EXISTS bpd.index_class_prop_id_prop_inherit;

CREATE INDEX IF NOT EXISTS index_class_prop_id_prop_inherit
ON bpd.class_prop USING btree
(id_prop_inherit ASC NULLS LAST, timestamp_class_inherit ASC NULLS LAST)
TABLESPACE pg_default;

Uchet=# SELECT relid, indexrelid, schemaname, relname, indexrelname,
Uchet-#        idx_scan, idx_tup_read, idx_tup_fetch
Uchet-# FROM bpd.cfg_v_stat_all_indexes
Uchet-# WHERE indexrelname LIKE 'index_class_prop_id_prop_inherit%';
 relid | indexrelid | schemaname |  relname   |               indexrelname                | idx_scan | idx_tup_read | idx_tup_fetch
-------+------------+------------+------------+-------------------------------------------+----------+--------------+---------------
 17572 |      40049 | bpd        | class_prop | index_class_prop_id_prop_inherit          |     6356 |         6356 |             0
 17572 |      40048 | bpd        | class_prop | index_class_prop_id_prop_inherit_covering |        0 |            0 |             0
(2 rows)
--
Regards, Dmitry!


Mon, 20 Jun 2022 at 00:08, David G. Johnston:

> On Sun, Jun 19, 2022 at 12:06 PM Дмитрий Иванов 
> wrote:
>
>> Good afternoon.
>> I have a query-planner question. Suppose there are two kinds of queries
>> using an indexed field: one touches only that field, while the other also
>> uses a number of fields added to the index via the INCLUDE clause. Does it
>> make sense to have two indexes, a lightweight one and one with INCLUDE? Or
>> will the planner pick the nearest suitable index without considering its
>> size?
>>
>>
> The system should avoid the larger sized index unless it will sufficiently
> benefit from the Index Only Scan that such a larger covering index is
> supposed to facilitate.
>
> David J.
>


Index creation

2022-06-19 Thread Дмитрий Иванов
Good afternoon.
I have a query-planner question. Suppose there are two kinds of queries
using an indexed field: one touches only that field, while the other also
uses a number of fields added to the index via the INCLUDE clause. Does it
make sense to have two indexes, a lightweight one and one with INCLUDE? Or
will the planner pick the nearest suitable index without considering its
size?
--
Regards, Dmitry!


Re: Microsoft Report Builder

2022-02-11 Thread Дмитрий Иванов
I use Valentina Studio for some tasks.
https://www.valentina-db.com/ru/valentina-studio-overview
--
Regards, Dmitry!


Sat, 12 Feb 2022 at 08:23, Kim Foltz:

> I am trying to configure Microsoft Report Builder to run against a
> mainframe PostgreSQL database. The ODBC connection tests as good and
> manually entered SQL Select statements properly return data. The problem is
> the Query Designer in the software doesn't support the normal graphical
> display of available tables and fields with PostgreSQL. All I find in
> Microsoft's documentation is the statement some data sources don't support
> graphical tools in Report Builder.
>
> More than likely there are better report generators than Microsoft's tool
> but our agency is very fond of Microsoft solutions. Anyone know of a way to
> get the graphical tools working in Report Builder? In the alternative, is
> there an alternative Windows report generator to try if Microsoft's
> software isn't repairable?
>


Re: Two academic questions

2022-02-01 Thread Дмитрий Иванов
Thank you, I learned everything I needed to know.
--
Regards, Dmitry!


Wed, 2 Feb 2022 at 11:36, David G. Johnston:

>
>
> On Tuesday, February 1, 2022, Дмитрий Иванов 
> wrote:
>
>>
>>>>
>>>> DEFAULT VALUE <> GENERATED ALWAYS AS STORED
>>>> the DEFAULT value is not recalculated when the reference column are
>>>> changed. is not an identical construction GENERATED ALWAYS AS STORED
>>>>
>>>> *reply:*
>>>> They are in a separate list “Methods”.
>>>> This approach was used for other databases *long before* generated
>>>> columns were implemented in PostgreSQL.
>>>> 
>>>>
>>>>
>>>  It has to do with the syntax of generated and that you can generate
>>> data in different ways.  Calling those ways “methods” seems reasonable.
>>>
>>> It's hard to argue with that. However, it is not quite clear to me, what
>> origin are we talking about? I have worked with MS products and currently
>> with PostgreSQL and have not encountered this interpretation.
>>
>
> “Origin”? The documentation, and my internalizing of it based on my
> personal experiences.
>
> Anyway, I did my best given the unclear (to me at least) and limited
> information you provided, and the fact I don’t really understand the
> question.
>
> David J.
>
>


Re: Two academic questions

2022-02-01 Thread Дмитрий Иванов
>
>
>>
>> DEFAULT VALUE <> GENERATED ALWAYS AS STORED
>> the DEFAULT value is not recalculated when the reference column are
>> changed. is not an identical construction GENERATED ALWAYS AS STORED
>>
>> *reply:*
>> They are in a separate list “Methods”.
>> This approach was used for other databases *long before* generated
>> columns were implemented in PostgreSQL.
>> 
>>
>>
>  It has to do with the syntax of generated and that you can generate data
> in different ways.  Calling those ways “methods” seems reasonable.
>
It's hard to argue with that. However, it is not quite clear to me what
origin we are talking about. I have worked with MS products and currently
with PostgreSQL, and I have not encountered this interpretation.

>
> The Schema-data thing is called the Boogeyman pattern because teachers use
> it to scare students and illustrate what not to do when designing a data
> model.  You can also find it abbreviated “EAV anti-pattern” where EAV
> stands for entity-attribute-value.
>

I think that's what I need, thank you.

> --
> Regards, Dmitry!
>
>


Re: Two academic questions

2022-02-01 Thread Дмитрий Иванов
Wed, 2 Feb 2022 at 09:19, David G. Johnston:

> On Tue, Feb 1, 2022 at 8:15 PM Дмитрий Иванов 
> wrote:
>
>> Why are COLUMN GENERATED ALWAYS AS called "methods" and singled out as a
>> separate field type?
>>
>
> This sounds like you are describing something written.  Can you provide a
> link to where that is?
>
> Is there a common name in the community for the approach in which the data
>> schema is presented as data?
>>
>
It is possible to give a link, but it would require authorization:
http://valentina-db.com/bt/view.php?id=8943
Here are the contents of the topic:

Actual code, including GENERATED ALWAYS AS columns:
CREATE TABLE bpd.schedules_calendar
(
    id bigint NOT NULL DEFAULT nextval('bpd.work_calendar_id_seq'::regclass),
    work_date date NOT NULL,
    work_year integer NOT NULL GENERATED ALWAYS AS (date_part('year'::text, work_date)) STORED,
    work_month integer NOT NULL GENERATED ALWAYS AS (date_part('month'::text, work_date)) STORED,
    work_day integer NOT NULL GENERATED ALWAYS AS (date_part('day'::text, work_date)) STORED,
    day_type bpd.day_type NOT NULL,
    name_holiday character varying(100) COLLATE pg_catalog."default" NOT NULL DEFAULT 'будний день'::character varying,
    week40_day interval NOT NULL,
    week40_month interval NOT NULL,
    week36_day interval NOT NULL,
    week36_month interval NOT NULL,
    week35_day interval NOT NULL,
    week35_month interval NOT NULL,
    week24_day interval NOT NULL,
    week24_month interval NOT NULL,
    CONSTRAINT work_calendar_pkey PRIMARY KEY (id)
)
The code suggested in the studio:
CREATE TABLE "bpd"."schedules_calendar" (
    "id" BigInt DEFAULT nextval('bpd.work_calendar_id_seq'::regclass) NOT NULL,
    "work_date" Date NOT NULL,
    "work_year" Integer DEFAULT date_part('year'::text, work_date) NOT NULL,
    "work_month" Integer DEFAULT date_part('month'::text, work_date) NOT NULL,
    "work_day" Integer DEFAULT date_part('day'::text, work_date) NOT NULL,
    "day_type" "bpd"."day_type" NOT NULL,
    "name_holiday" Character Varying( 100 ) DEFAULT 'будний день'::character varying NOT NULL,
    "week40_day" Interval NOT NULL,
    "week40_month" Interval NOT NULL,
    "week36_day" Interval NOT NULL,
    "week36_month" Interval NOT NULL,
    "week35_day" Interval NOT NULL,
    "week35_month" Interval NOT NULL,
    "week24_day" Interval NOT NULL,
    "week24_month" Interval NOT NULL,
    PRIMARY KEY ( "id" ) );

DEFAULT VALUE <> GENERATED ALWAYS AS ... STORED:
a DEFAULT value is not recalculated when the referenced column changes, so
it is not an identical construction to GENERATED ALWAYS AS ... STORED.

Reply:
They are in a separate list, “Methods”.
This approach was used for other databases long before generated columns
were implemented in PostgreSQL.



>
> Which community?  There are many ways in which a "data schema [can be]
> presented as data".  In PostgreSQL there is only a single source of truth
> for what the data schema is - the "System Catalogs" [1].  Those tables are
> made available to the user in the pg_catalog schema.
>
> David J.
>
>
I once came across a long acronym naming this approach, and concluded that
it is difficult to invent anything fundamentally new; everything already has
a formal name. But I can't find it anymore, and I need it to position my
solution.

Schema as schema:

Table Entity
id | property1 | property2 | property3

Schema as data:

Table Entity
id | name

Table Property Entity
id | id_entity | name | val

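The two layouts sketched above can be written out as DDL. This is a minimal illustration of the "schema as data" (EAV) idea only; all table and column names here are hypothetical, not from any schema in this thread:

```sql
-- Conventional layout: the schema itself encodes the attributes.
CREATE TABLE entity_plain (
    id        bigserial PRIMARY KEY,
    property1 text,
    property2 text,
    property3 text
);

-- EAV ("schema as data") layout: attributes are stored as rows,
-- so adding an attribute needs no DDL, only an INSERT.
CREATE TABLE entity (
    id   bigserial PRIMARY KEY,
    name text NOT NULL
);

CREATE TABLE entity_property (
    id        bigserial PRIMARY KEY,
    id_entity bigint NOT NULL REFERENCES entity(id),
    name      text   NOT NULL,
    val       text
);
```

The flexibility comes at a cost: reassembling one logical row requires a join or pivot per attribute, which is why this layout is often taught as an anti-pattern.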
--
Regards, Dmitry!


Two academic questions

2022-02-01 Thread Дмитрий Иванов
Good afternoon.
There are a couple of questions that I come back to from time to time:
Why are COLUMN GENERATED ALWAYS AS called "methods" and singled out as a
separate field type?
Is there a common name in the community for the approach in which the data
schema is presented as data?
--
Regards, Dmitry!


Re: [Extern] Re: postgres event trigger workaround

2022-01-15 Thread Дмитрий Иванов
The goal was to restrict access to the schema for the users who manage
roles, most likely a specific group of administrators. This can be done with
this solution. I have no problem with managing roles through plpgsql
functions. There has been no need to create databases until now. In my
solution, the schema is a data element, for flexibility.
--
Regards, Dmitry!


Sat, 15 Jan 2022 at 10:01, Julien Rouhaud:

> Hi,
>
> On Sat, Jan 15, 2022 at 08:36:21AM +0500, Дмитрий Иванов wrote:
> > In my solution, all users don't need direct access to the schema because
> > you have to use the functional API to access it. If you can manage users
> > with functions, you can close the schema in the same way.
> > Usually the function is executed with the permissions of the calling
> user,
> > which requires permissions for all affected entities. However, if you
> > specify the "SECURITY DEFINER" parameter at creation, the function will
> be
> > executed with the owner's permissions. The owner of the function has no
> > login permissions but has permissions on the affected entities. In this
> way
> > you will close the schema from the roles that have rights to the role
> > management functions.
>
> Sure you can solve most problems with that.  But you can't create a
> database
> (or a tablespace) from a function so this approach wouldn't cover all of
> OP's
> needs, as different approach would be needed for role and db creation.
>


Re: [Extern] Re: postgres event trigger workaround

2022-01-14 Thread Дмитрий Иванов
Hi,
In my solution, users don't need direct access to the schema because they
must use the functional API to reach it. If you can manage users with
functions, you can lock down the schema in the same way.
Normally a function executes with the permissions of the calling user, which
requires permissions on all affected entities. However, if you specify
SECURITY DEFINER at creation, the function executes with its owner's
permissions. The owner of the function has no login permission but does have
permissions on the affected entities. This way you hide the schema from the
roles that have rights to the role-management functions.
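A minimal sketch of this pattern follows. The schema, function, and role names (app, create_app_user, role_admin, user_managers) are invented for illustration, and the search_path setting is a standard hardening step for SECURITY DEFINER functions rather than something stated in this thread:

```sql
-- Hypothetical sketch: role management behind a SECURITY DEFINER function.
-- The owner is a non-login role that actually holds the privileges.
CREATE ROLE role_admin NOLOGIN CREATEROLE;

CREATE FUNCTION app.create_app_user(p_name text, p_password text)
RETURNS void
LANGUAGE plpgsql
SECURITY DEFINER                       -- runs with role_admin's rights
SET search_path = pg_catalog, pg_temp  -- guard against search_path hijacking
AS $$
BEGIN
    -- %I quotes the identifier, %L quotes the literal safely
    EXECUTE format('CREATE ROLE %I LOGIN PASSWORD %L', p_name, p_password);
END;
$$;

ALTER FUNCTION app.create_app_user(text, text) OWNER TO role_admin;
REVOKE EXECUTE ON FUNCTION app.create_app_user(text, text) FROM PUBLIC;
GRANT EXECUTE ON FUNCTION app.create_app_user(text, text) TO user_managers;
```

Callers in user_managers can then create roles without holding CREATEROLE themselves, and the schema behind the API stays closed to them.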
--
Regards, Dmitry!


Fri, 14 Jan 2022 at 15:24, Julien Rouhaud:

> Hi,
>
> On Fri, Jan 14, 2022 at 09:01:12AM +, Zwettler Markus (OIZ) wrote:
> >
> > We have the need to separate user (role) management from infrastructure
> (database) management.
> >
> > Granting CREATEROLE to any role also allows this role to create other
> roles having CREATEDB privileges and therefore also getting CREATEDB
> privileges.
> >
> > My use case would have been to grant CREATEROLE to any role while still
> restricting "create database".
>
> I see, that's indeed a problem.  You could probably enforce that using some
> custom module to enforce additional rules on top of CREATE ROLE
> processing, but
> it would have to be written in C.
>
>
>


Re: How to read an external pdf file from postgres?

2022-01-12 Thread Дмитрий Иванов
What are you going to do with the data?
If you want to analyze it in some way, I can't think of a better option than
a Python function. Or do you just want to transfer the files? There are
options here too, but in this case I also prefer Python.
--
Regards, Dmitry!


Wed, 12 Jan 2022 at 16:16, Amine Tengilimoglu:

>   Hi;
>
>  I want to read an external pdf file from postgres. pdf file will
> exist on the disk. postgres only know the disk full path as metadata. Is
> there any software or extension that can be used for this? Or do we have to
> develop software for it?  Or what is the best approach for this? I'd
> appreciate it if anyone with experience could make suggestions.
>
>  Thanks.
>


Re: The postgres server don't work

2022-01-09 Thread Дмитрий Иванов
Glad to hear it!
--
Regards, Dmitry!


Sun, 9 Jan 2022 at 18:42, Mohammed falih:

> It worked !
> thank you so much , all of you
> for your efforts and time
> best wishes to you
>
> On Sun, 9 Jan 2022, 3:51 pm Дмитрий Иванов,  wrote:
>
>> You need to open the folder containing the file and (you must be an
>> administrator)
>>  Open the file in an editor running as an administrator.
>> OR
>> Right click on the file "C:\Program
>> Files\PostgreSQL\12\scripts\runpsql.bat" -> Properties-> Security. Select
>> the "Users" group, click "Change" and give write access, click apply.
>> Starting with Windows 10, the "Program Files" folder is not writable for
>> regular users and administrators, not in administrator mode.
>> If all this is difficult for you, just copy the file to your desktop,
>> edit and run, it will be the same.
>> --
>> Regards, Dmitry!
>>
>>
>> Sun, 9 Jan 2022 at 15:16, Mohammed falih:
>>
>>> thank you for your efforts
>>> I added chcp 1252 line
>>> but when I try to save the changes . an error pop out says "you don't
>>> have the permission to open this file ,see the owner of the file or an
>>> administrator to obtain permission"
>>> even though I went to properties/security/mylaptopaccount and made it
>>> full control
>>>
>>>
>>>


Re: The postgres server don't work

2022-01-09 Thread Дмитрий Иванов
You need to open the folder containing the file (you must be an
administrator) and open the file in an editor running as administrator.
OR
Right-click the file "C:\Program
Files\PostgreSQL\12\scripts\runpsql.bat" -> Properties -> Security. Select
the "Users" group, click "Edit", grant write access, and click Apply.
Starting with Windows 10, the "Program Files" folder is not writable for
regular users, nor for administrators unless running in administrator mode.
If all this is difficult for you, just copy the file to your desktop, edit
it, and run it from there; the result will be the same.
--
Regards, Dmitry!


Sun, 9 Jan 2022 at 15:16, Mohammed falih:

> thank you for your efforts
> I added chcp 1252 line
> but when I try to save the changes . an error pop out says "you don't have
> the permission to open this file ,see the owner of the file or an
> administrator to obtain permission"
> even though I went to properties/security/mylaptopaccount and made it full
> control
>
>
>


Re: The postgres server don't work

2022-01-08 Thread Дмитрий Иванов
Such gross changes should not be made to the registry: you changed the
settings of the whole system for the needs of one console, which will affect
many services and console applications.
Instead, find the shortcut in the Start menu:
"C:\ProgramData\Microsoft\Windows\Start Menu\Programs\PostgreSQL 12\SQL
Shell (psql).lnk"
Right-click it, choose "Edit", add a code-page change to the batch file, and
save.
Here is an example of the standard script:

@echo off
chcp 1251
REM Copyright (c) 2012-2020, EnterpriseDB Corporation.  All rights reserved
REM PostgreSQL server psql runner script for Windows

SET server=localhost
SET /P server="Server [%server%]: "

SET database=postgres
SET /P database="Database [%database%]: "

SET port=5999
SET /P port="Port [%port%]: "

SET username=postgres
SET /P username="Username [%username%]: "

for /f "delims=" %%a in ('chcp ^|find /c "932"') do @ SET CLIENTENCODING_JP=%%a
if "%CLIENTENCODING_JP%"=="1" SET PGCLIENTENCODING=SJIS
if "%CLIENTENCODING_JP%"=="1" SET /P PGCLIENTENCODING="Client Encoding [%PGCLIENTENCODING%]: "

REM Run psql
"C:\Program Files\PostgreSQL\12\bin\psql.exe" -h %server% -U %username% -d %database% -p %port%

pause

--
Regards, Dmitry!


Sun, 9 Jan 2022 at 05:19, Mohammed falih:

> yes I think the service doesn't even start
>
> I have only the default protection that comes with the operating system
>
> wow the postgres started working!! but the problem came back that talks
> about console code " WARNING: Console code page (437) differs from
> Windows code page (1252)
>
>  8-bit characters might not work correctly. See psql reference
>  page "Notes for Windows users" for details.
>
>


Re: storing zipped SQLite inside PG ?

2021-12-23 Thread Дмитрий Иванов
Or, to extend this theme, you can build a PostgreSQL-based "SQLite file
player" with the PostgreSQL + Python (sqlite3) extension. This way you can
provide direct access to SQLite files without duplicating the data in
PostgreSQL cluster tables.
PS: It may seem that this would reduce performance. When I started my
project, I had some preconceptions about Python, but analyzing projects like
Patroni changed my mind.
--
Regards, Dmitry!


Wed, 22 Dec 2021 at 10:24, David G. Johnston:

> On Tue, Dec 21, 2021 at 10:06 PM David Gauthier 
> wrote:
>
>> I'll have to read more about sqlite_fdw. Thanks for that Steve !
>>
>> Each SQLite isn't that big (billions of records), more like 30K records
>> or so.  But there are lots and lots of these SQLite DBs which add up over
>> time to perhaps billions of records.
>>
>> This is for a big corp with an IT dept.  Maybe I can get them to upgrade
>> the DB itself.
>> Thank You too David !
>>
>>>
>>>
> So, more similar to the image storage question than I first thought, but
> still large enough where the specific usage patterns and needs end up being
> the deciding factor (keeping in mind you can pick multiple solutions - so
> that really old data, ideally on a partition, can be removed from the DB
> while still remaining accessible if just more slowly or laboriously).
>
> One possibility to consider - ditch the SQLite dependency and just store
> CSV (but maybe with a funky delimiter sequence).  You can then us
> "string_to_table(...)" on that delimiter to materialize a table out of the
> data right in a query.
>
> David J.
>
>


Re: Best Strategy for Large Number of Images

2021-12-15 Thread Дмитрий Иванов
PostgreSQL, thanks to extensions, allows a third non-obvious file strategy.
Since many developers when working with small images don't want to overload
their solutions with the complexities of interacting with file servers, and
I didn't want to store files in a database, I found and tried a third
option. I turned PostgreSQL into a file server that provides page bytes
exchange with the client using Bytea virtual tables. Roughly it looks like
this: client-(access library-asynchronous page
exchange)-PostgreSQL-Python-file system. I've set a limit of 2Gb, but
working with such large files with this strategy doesn't seem reasonable to
me. But the system for storing fiscal documents turned out pretty good
(<10Mb).
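The page-exchange idea above can be sketched as a single server-side function. This is an illustrative reduction of the described architecture, assuming the plpython3u extension is installed; the function name, parameters, and example path are all invented:

```sql
-- Illustrative only: serve one "page" (byte range) of an on-disk file
-- as bytea.  Requires: CREATE EXTENSION plpython3u;
-- All names here are hypothetical, not the author's actual API.
CREATE FUNCTION file_read_page(p_path text, p_offset bigint, p_len int)
RETURNS bytea
LANGUAGE plpython3u
AS $$
with open(p_path, 'rb') as f:     # open the file on the server's filesystem
    f.seek(p_offset)              # jump to the requested page
    return f.read(p_len)          # PL/Python returns bytes as bytea
$$;

-- A client library can then stream a file page by page, e.g.:
-- SELECT file_read_page('/srv/files/doc.pdf', 0, 65536);
```

A real implementation would add access checks and path validation on top, since a SECURITY DEFINER variant of such a function effectively exposes the server's filesystem.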
--
Regards, Dmitry!


Wed, 15 Dec 2021 at 23:18, Estevan Rech:

> I think about using it inside the database to facilitate the select and
> generation of reports...
>
>>


Re: Difference in execution plans pg12 vs pg14

2021-12-11 Thread Дмитрий Иванов
Thanks to Imre Samu's help, I found out that this was unwarranted
interference from JIT compilation. With JIT disabled, the short queries run
with stable timings. Before the problem started, I had deliberately
increased the amount of surrogate data to evaluate performance. Perhaps the
logic for enabling JIT compilation differs between Postgres versions. It
didn't show up as clearly on the long queries because they had been
rewritten without joining views, and they filter before aggregating and
joining. I want to apply coarser JIT compiler settings (I think disabling it
outright is fundamentally wrong) and rewrite all queries along the lines of
the long ones. Thanks.
--
Regards, Dmitry!


Sat, 11 Dec 2021 at 16:18, Peter J. Holzer:

> Is this repeatable or did it just occur once?
>
> 32 µs to retrieve a single row via index probably means that it was
> already cached in RAM
> 842796 µs to retrieve a single row via index doesn't even look realistic
> for a completely cold database on a slow rotating hard disk. If this
> happened only once I suspect that something else interfered (maybe
> another I/O intensive process, if this is on a VM maybe even on another
> guest). If it is repeatable, something very weird is going on.
>
> hp
>
> --
>_  | Peter J. Holzer| Story must make more sense than reality.
> |_|_) ||
> | |   | h...@hjp.at |-- Charles Stross, "Creative writing
> __/   | http://www.hjp.at/ |   challenge!"
>


Re: Difference in execution plans pg12 vs pg14

2021-12-11 Thread Дмитрий Иванов
OK, I will check.
Now I have even more questions. This behavior appeared recently; it was not
there at first. JIT compilation cuts the time of my long aggregation queries
by about half on average. How do I tune this correctly?

#jit_provider = 'llvmjit'          # JIT library to use
#jit_above_cost = 100000           # perform JIT compilation if available
                                   # and query more expensive than this;
                                   # -1 disables
#jit_inline_above_cost = 500000    # inline small functions if query is
                                   # more expensive than this; -1 disables
#jit_optimize_above_cost = 500000  # use expensive JIT optimizations if
                                   # query is more expensive than this;
                                   # -1 disables
jit = off                          # allow JIT compilation
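One way to "adjust this correctly", rather than setting jit = off globally, is to raise the cost thresholds so JIT only engages for genuinely expensive plans. The values below are arbitrary illustrations, not recommendations:

```sql
-- Sketch: keep JIT on but raise its cost thresholds (per session here;
-- the same settings can go in postgresql.conf).  Threshold values are
-- illustrative only.  Stock defaults: 100000 / 500000 / 500000.
SET jit = on;
SET jit_above_cost = 1000000;          -- only JIT very expensive queries
SET jit_inline_above_cost = 2000000;   -- inline even more selectively
SET jit_optimize_above_cost = 2000000; -- full optimization rarer still

-- Verify the effect per query with EXPLAIN (ANALYZE): when JIT fires,
-- the plan output includes a "JIT:" section with generation, inlining,
-- and optimization timings.
```

This keeps the roughly 2x benefit reported for the long aggregation queries while sparing the short index lookups the compilation overhead.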
--
Regards, Dmitry!


Sat, 11 Dec 2021 at 09:12, Imre Samu:

> Hi Dmitry,
>
> pg12:
> > Execution Time: 44.123 ms
>
> pg14:
> > JIT:
> >   Functions: 167
> >   Options: Inlining true, Optimization true, Expressions true, Deforming
> true
> >   Timing: Generation 9.468 ms, Inlining 55.237 ms, Optimization 507.548
> ms, Emission 347.932 ms, Total 920.185 ms
> > Execution Time: 963.258 ms
>
> please check the JIT settings ;
> and test with:   jit = off;
>
> regards,
>  Imre
>
>
> Дмитрий Иванов wrote (on Sat, 11 Dec 2021 at 2:01):
>
>> Afternoon. I was able to make the necessary changes to my base needed to
>> migrate win_pg12 to debian pg14.
>> But there is a new problem, which was not there at the initial stage so I
>> checked:
>>
>> win_pg12:
>> ->  Index Scan using index_class_tree_full on class c  (cost=0.28..2.50
>> rows=1 width=235) (actual time=0.030..0.032 rows=1 loops=1)
>>  Index Cond: (id = 650)
>> debian_pg14:
>> ->  Index Scan using index_class_tree_full on class c  (cost=0.28..2.50
>> rows=1 width=235) (actual time=842.795..842.796 rows=1 loops=1)
>>  Index Cond: (id = 650)
>>
>> I'm wondering where to dig next...
>> --
>> Best regards, Dmitry!
>>
>
Uchet=# EXPLAIN ANALYZE SELECT * FROM bpd.vclass WHERE "id" = 650;

                                                            QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------
 Nested Loop Left Join  (cost=26599.87..856768.45 rows=4348 width=311) (actual time=38.159..38.182 rows=1 loops=1)
   Join Filter: (c_2.id = c.id)
   ->  Nested Loop Left Join  (cost=2.82..96.06 rows=1 width=244) (actual time=3.612..3.621 rows=1 loops=1)
         Join Filter: (con.id = c.id_con)
         Rows Removed by Join Filter: 1
         ->  Nested Loop Left Join  (cost=2.82..94.68 rows=1 width=236) (actual time=3.606..3.614 rows=1 loops=1)
               Join Filter: (c_1.id = c.id)
               ->  Index Scan using index_class_tree_full on class c  (cost=0.28..2.50 rows=1 width=235) (actual time=0.014..0.015 rows=1 loops=1)
                     Index Cond: (id = 650)
               ->  GroupAggregate  (cost=2.54..92.15 rows=1 width=9) (actual time=3.590..3.595 rows=1 loops=1)
                     Group Key: c_1.id
                     ->  Nested Loop Left Join  (cost=2.54..91.82 rows=1 width=233) (actual time=0.270..3.576 rows=20 loops=1)
                           ->  Nested Loop Left Join  (cost=2.26..91.52 rows=1 width=233) (actual time=0.262..3.545 rows=20 loops=1)
                                 ->  Nested Loop Left Join  (cost=1.97..23.18 rows=1 width=225) (actual time=0.057..0.252 rows=20 loops=1)
                                       ->  Nested Loop Left Join  (cost=1.70..21.60 rows=1 width=213) (actual time=0.050..0.209 rows=20 loops=1)
                                             ->  Nested Loop Left Join  (cost=1.42..21.30 rows=1 width=208) (actual time=0.045..0.185 rows=20 loops=1)
                                                   ->  Nested Loop Left Join  (cost=1.13..18.89 rows=1 width=192) (actual time=0.035..0.134 rows=20 loops=1)
                                                         ->  Nested Loop Left Join  (cost=0.86..17.68 rows=1 width=131) (actual time=0.028..0.095 rows=20 loops=1)
                                                               ->  Nested Loop Left Join  (cost=0.57..15.18 rows=1 width=26) (actual time=0.019..0.037 rows=20 loops=1)
                                                                     Join Filter: (c_1.id = cp.id_class)
                                                                     ->  Index Scan using index_class_tree_full on class c_1  (cost=0.28..2.50 rows=1 width=9) (actual time=0.008..0.008 rows=1 loops=1)

Re: Difference in execution plans pg12 vs pg14

2021-12-10 Thread Дмитрий Иванов
Yes, I did.
Step1
sudo /usr/lib/postgresql/14/bin/pg_dump --file "/home/dismay/uchet/Uchet.backup" --host "server" --port "5999" --username "back" --no-password --verbose --format=c --quote-all-identifiers --blobs --disable-triggers --encoding="UTF8" "Uchet"
Step2
Manually DROP and CREATE the database from template0 (necessary because of
the locale difference: Win EDB 'Russian_Russia.1251' <> Debian 'ru_RU.UTF-8')
Step3
sudo /usr/lib/postgresql/14/bin/pg_restore --host "127.0.0.1" --port "5998" --username "back" --no-password --dbname "Uchet" --disable-triggers --format=c --verbose "/home/dismay/uchet/Uchet.backup" 2> "/home/dismay/uchet/uchet_all.log"
--
Regards, Dmitry!


Sat, 11 Dec 2021 at 06:59, Adrian Klaver:

> On 12/10/21 17:51, Дмитрий Иванов wrote:
> > Yes, I did.
> > I reset table statistics, did (VACUUM) ANALYZE, recreated index. Nothing
> > changes.
> > I've deleted the database many times, dozens of times. Maybe something
> > is broken?
>
> How did you do the upgrade?
>
> > --
> > Regards, Dmitry!
> >
> >
> > Sat, 11 Dec 2021 at 06:13, Adrian Klaver <adrian.kla...@aklaver.com>:
> >
> > On 12/10/21 17:00, Дмитрий Иванов wrote:
> >  > Afternoon. I was able to make the necessary changes to my base
> > needed to
> >  > migrate win_pg12 to debian pg14.
> >  > But there is a new problem, which was not there at the initial
> > stage so
> >  > I checked:
> >  >
> >  > win_pg12:
> >  > ->  Index Scan using index_class_tree_full on class c
> >   (cost=0.28..2.50
> >  > rows=1 width=235) (actual time=0.030..0.032 rows=1 loops=1)
> >  >   Index Cond: (id = 650)
> >  > debian_pg14:
> >  > ->  Index Scan using index_class_tree_full on class c
> >   (cost=0.28..2.50
> >  > rows=1 width=235) (actual time=842.795..842.796 rows=1 loops=1)
> >  >   Index Cond: (id = 650)
> >  >
> >  > I'm wondering where to dig next...
> >
> > Did you run ANALYZE(updates table stats) on the 14 instance after
> doing
> > the upgrade?
> >
> >  > --
> >  > Best regards, Dmitry!
> >
> >
> > --
> > Adrian Klaver
> > adrian.kla...@aklaver.com <mailto:adrian.kla...@aklaver.com>
> >
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: Difference in execution plans pg12 vs pg14

2021-12-10 Thread Дмитрий Иванов
Yes, I did.
I reset table statistics, did (VACUUM) ANALYZE, recreated index. Nothing
changes.
I've deleted the database many times, dozens of times. Maybe something is
broken?
--
Regards, Dmitry!


Sat, 11 Dec 2021 at 06:13, Adrian Klaver:

> On 12/10/21 17:00, Дмитрий Иванов wrote:
> > Afternoon. I was able to make the necessary changes to my base needed to
> > migrate win_pg12 to debian pg14.
> > But there is a new problem, which was not there at the initial stage so
> > I checked:
> >
> > win_pg12:
> > ->  Index Scan using index_class_tree_full on class c  (cost=0.28..2.50
> > rows=1 width=235) (actual time=0.030..0.032 rows=1 loops=1)
> >   Index Cond: (id = 650)
> > debian_pg14:
> > ->  Index Scan using index_class_tree_full on class c  (cost=0.28..2.50
> > rows=1 width=235) (actual time=842.795..842.796 rows=1 loops=1)
> >   Index Cond: (id = 650)
> >
> > I'm wondering where to dig next...
>
> Did you run ANALYZE(updates table stats) on the 14 instance after doing
> the upgrade?
>
> > --
> > Best regards, Dmitry!
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Difference in execution plans pg12 vs pg14

2021-12-10 Thread Дмитрий Иванов
Afternoon. I was able to make the changes to my database needed to migrate
from win_pg12 to debian pg14.
But there is a new problem that was not there at the initial stage, so I
checked:

win_pg12:
->  Index Scan using index_class_tree_full on class c  (cost=0.28..2.50
rows=1 width=235) (actual time=0.030..0.032 rows=1 loops=1)
 Index Cond: (id = 650)
debian_pg14:
->  Index Scan using index_class_tree_full on class c  (cost=0.28..2.50
rows=1 width=235) (actual time=842.795..842.796 rows=1 loops=1)
 Index Cond: (id = 650)

I'm wondering where to dig next...
--
Best regards, Dmitry!

 QUERY PLAN
-
 Nested Loop Left Join  (cost=26599.87..856768.44 rows=4348 width=311) (actual 
time=931.755..931.773 rows=1 loops=1)
   Join Filter: (c_2.id = c.id)
   ->  Nested Loop Left Join  (cost=2.82..96.05 rows=1 width=244) (actual 
time=844.751..844.757 rows=1 loops=1)
 Join Filter: (con.id = c.id_con)
 Rows Removed by Join Filter: 1
 ->  Nested Loop Left Join  (cost=2.82..94.66 rows=1 width=236) (actual 
time=844.743..844.749 rows=1 loops=1)
   Join Filter: (c_1.id = c.id)
   ->  Index Scan using index_class_tree_full on class c  
(cost=0.28..2.50 rows=1 width=235) (actual time=842.795..842.796 rows=1 loops=1)
 Index Cond: (id = 650)
   ->  GroupAggregate  (cost=2.54..92.14 rows=1 width=9) (actual 
time=1.943..1.948 rows=1 loops=1)
 Group Key: c_1.id
 ->  Nested Loop Left Join  (cost=2.54..91.81 rows=1 
width=233) (actual time=0.174..1.939 rows=20 loops=1)
   ->  Nested Loop Left Join  (cost=2.26..91.51 rows=1 
width=233) (actual time=0.171..1.917 rows=20 loops=1)
 ->  Nested Loop Left Join  (cost=1.97..23.17 
rows=1 width=225) (actual time=0.063..0.197 rows=20 loops=1)
   ->  Nested Loop Left Join  
(cost=1.70..21.59 rows=1 width=213) (actual time=0.059..0.173 rows=20 loops=1)
 ->  Nested Loop Left Join  
(cost=1.42..21.29 rows=1 width=208) (actual time=0.057..0.154 rows=20 loops=1)
   ->  Nested Loop Left Join  
(cost=1.13..18.88 rows=1 width=192) (actual time=0.053..0.121 rows=20 loops=1)
 ->  Nested Loop Left 
Join  (cost=0.86..17.67 rows=1 width=131) (actual time=0.048..0.096 rows=20 
loops=1)
   ->  Nested Loop 
Left Join  (cost=0.57..15.17 rows=1 width=26) (actual time=0.037..0.049 rows=20 
loops=1)
 Join 
Filter: (c_1.id = cp.id_class)
 ->  Index 
Scan using index_class_tree_full on class c_1  (cost=0.28..2.50 rows=1 width=9) 
(actual time=0.020..0.020 rows=1 loops=1)
   
Index Cond: (id = 650)
 ->  Index 
Scan using index_class_prop_class on class_prop cp  (cost=0.29..12.52 rows=12 
width=25) (actual time=0.014..0.021 rows=20 loops=1)
   
Index Cond: (id_class = 650)
   ->  Index Scan 
using class_prop_small_val_pkey on class_prop_user_small_val sv  
(cost=0.29..2.51 rows=1 width=113) (actual time=0.002..0.002 rows=0 loops=20)
 Index 
Cond: (id_class_prop = cp.id)
 ->  Index Scan using 
class_prop_big_val_pkey on class_prop_user_big_val bv  (cost=0.27..1.21 rows=1 
width=69) (actual time=0.001..0.001 rows=0 loops=20)
   Index Cond: 
(id_class_prop = cp.id)
   ->  Index Scan using 
class_prop_enum_val_pkey on class_prop_enum_val ev  (cost=0.29..2.41 rows=1 
width=24) (actual time=0.001..0.001 rows=0 loops=20)
 Index Cond: 
(id_class_prop = cp.id)
 ->  Index Scan using 
prop_enum_val_pkey on prop_enum_val pev  (cost=0.28..0.30 rows=1 width=21) 
(actual time=0.001..0.001 rows=0 loops=20)
   Index Cond: (id = 
ev.id_prop_enum_val)
   -> 

Re: CTE Materialization

2021-12-08 Thread Дмитрий Иванов
Thank you!
--
Regards, Dmitry!


Wed, Dec 8, 2021 at 22:58, Paul van der Linden :

> This one quite nicely explains it:
> https://stackoverflow.com/questions/14897816/how-can-i-prevent-postgres-from-inlining-a-subquery
>
> On Wed, Dec 8, 2021 at 3:14 AM David G. Johnston <
> david.g.johns...@gmail.com> wrote:
>
>> On Tue, Dec 7, 2021 at 6:40 PM Дмитрий Иванов 
>> wrote:
>>
>>> I beg your pardon.
>>> The problem is more or less clear to me, but the solution is not. What
>>> does the "hack is to add an "offset 0" to the query" suggest? Thank you.
>>>
>>>
>> A subquery with a LIMIT clause cannot have where clause expressions in
>> upper parts of the query tree pushed down it without changing the overall
>> query result - something the planner is not allowed to do.  For the hack,
>> since adding an actual LIMIT clause doesn't make sense you omit it, but
>> still add the related OFFSET clause so the planner still treats the
>> subquery as a LIMIT subquery.  And since you don't want to skip any rows
>> you specify 0 for the offset.
>>
>> David J.
>>
>>
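David's explanation in SQL form (hypothetical table and values, illustrating only the fence):

```sql
-- Without OFFSET 0 the planner may push the outer "id = 650" qual into
-- the subquery and inline it; with OFFSET 0 the planner treats the
-- subquery as a LIMIT node and evaluates it as written.
SELECT *
FROM (
    SELECT id, payload
    FROM big_table
    OFFSET 0   -- skips no rows; only the planner's treatment changes
) AS fenced
WHERE id = 650;
```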


Re: CTE Materialization

2021-12-07 Thread Дмитрий Иванов
I beg your pardon.
The problem is more or less clear to me, but the solution is not. What does
the "hack is to add an "offset 0" to the query" suggest? Thank you.
--
Regards, Dmitry!


Tue, Dec 7, 2021 at 10:20, Paul van der Linden :

> It did indeed work as expected.
> Took the query down from over 18 hours to 20 minutes, so a huge win!
>
> Paul
>
> On Thu, Dec 2, 2021 at 11:34 PM David G. Johnston <
> david.g.johns...@gmail.com> wrote:
>
>> On Thursday, December 2, 2021, Paul van der Linden <
>> paul.doskabou...@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> when switching to postgres 14 (from 11) I'm having some slow queries
>>> because of inlining of CTE's.
>>> I know I can get the same result as with PG11 when adding MATERIALIZED
>>> to the cte, but the same application also needs to be able to run on older
>>> postgres versions, so that is a no-go.
>>> Is there any other way that I can have materialized cte's in PG14 while
>>> still be compatible with older PG versions?
>>> Much appreciated,
>>>
>>
>> The usual anti-inlining hack is to add an “offset 0” to the query.
>> Haven’t tried it in 14 myself though.
>>
>> David J.
>>
>>
>
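For comparison, the version-dependent syntax this thread is working around: on PG12+ the MATERIALIZED keyword makes explicit what PG11 did implicitly, but it is a syntax error on older servers, while the OFFSET 0 form parses everywhere (hypothetical names; `expensive_fn` stands in for whatever makes the CTE worth fencing):

```sql
-- PG12+ only: explicit optimization fence.
WITH heavy AS MATERIALIZED (
    SELECT id, expensive_fn(payload) AS score FROM big_table
)
SELECT * FROM heavy WHERE id = 650;

-- Portable alternative: the OFFSET 0 hack inside a plain CTE.
WITH heavy AS (
    SELECT id, expensive_fn(payload) AS score FROM big_table OFFSET 0
)
SELECT * FROM heavy WHERE id = 650;
```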


Re: PQexecParams, placeholders and variable lists of params

2021-11-23 Thread Дмитрий Иванов
Hi
A function cannot have an undefined signature, but can accept an array of
arguments:
CREATE OR REPLACE FUNCTION bpd.object_del_by_id_array(
object_array bigint[])
RETURNS SETOF bpd.errarg_action
LANGUAGE 'plpgsql'
COST 100
VOLATILE SECURITY DEFINER PARALLEL SAFE
ROWS 1000
SET search_path=bpd
AS $BODY$
DECLARE
cfg_desc "bpd"."cfg_action"%ROWTYPE;
action_entity RECORD;

action_result RECORD;
result "bpd"."errarg_action"%ROWTYPE;
BEGIN
SELECT * INTO cfg_desc FROM "bpd"."cfg_action" WHERE id = 'delete';

FOR action_entity IN SELECT id, "name" FROM bpd.object WHERE id =
ANY(object_array)
LOOP
action_result = "bpd"."object_del"(action_entity.id);
result."err_id" = action_result.outresult;
result."errdesc" = action_result.outdesc;
result."entity_id" = 20;
result."entity_instance_id" = action_entity.id;
result."entity_instance_name" = action_entity.name;
result."action_id" = cfg_desc."actid";
result."action_desc" = cfg_desc."desc";
RETURN NEXT result;
END LOOP;
END;
$BODY$;
--
Regards, Dmitry!


Tue, Nov 23, 2021 at 16:37, :

> Hi,
>
> PQexecParams expects a query string with "$1", "$2"... placeholders,
> which refer to as many params in the param list. This keeps SQL
> injection at bay.
>
> Is there a way to express "variable length" lists? IOW, if I want to do
> a query like
>
>   "SELECT * FROM customers WHERE id IN ($1, $2) AND name like $3;"
>
> is there a way to do that without knowing beforehand how many values go
> into the IN list?
>
> It would be very welcome for you to rub my nose against the place in The
> Fine Manual where I could have found that :-)
>
> Thanks & cheers
>  - tomás
>
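The array technique in Dmitry's function also works directly at the SQL level, so PQexecParams can always be called with a fixed number of placeholders; the variable-length part travels inside one array parameter, shown here in its text form (table taken from tomás's own example):

```sql
-- One placeholder covers any number of ids; $1 arrives as the
-- array literal '{1,7,42}'.
PREPARE by_ids(bigint[], text) AS
    SELECT * FROM customers WHERE id = ANY($1) AND name LIKE $2;

EXECUTE by_ids('{1,7,42}', 'Ann%');
DEALLOCATE by_ids;
```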


Re: pg_dump insert column GENERATED

2021-11-22 Thread Дмитрий Иванов
Thank you, there is a clear logic to it.
--
Regards, Dmitry!


Mon, Nov 22, 2021 at 21:11, Adrian Klaver :

> On 11/22/21 03:32, Дмитрий Иванов wrote:
> > Got it.
> >
> >
> >  >You are going to need to provide more information about the above:
> >  >1) Define regular results.
> >  >2) The query used.
> >  >3) The outcome vs what you expected.
> >
> > I will continue to experiment. The question is which option is better
> > (this would reduce the number of options):
> > pg_dump from the source server and
> > pg_restore on the receiver server;
> > or
> > pg_dump from the receiver server and
> > pg_restore on the receiver server;
>
> pg_dump is backwards compatible not forwards.
>
> In the explanations below Postgres versioning(major/minor) is determined
> as:
>
> Pre-version 10:
>
> X.x.x
>
> 10+:
>
> X.x
>
> Where  X is major and x is minor.
>
> This means if are moving forwards in Postgres major version then:
>
> 1) Use pg_dump from newer version of Postgres to dump from older version
> of Postgres. In your case pg_dump(v14) dump Postgres server v12.
>
> 2) To restore use the version of pg_restore for the Postgres version you
> are restoring to. In your case pg_restore(v14).
>
> Staying on the same version:
>
> 1) Use the pg_dump/pg_restore for the version your are on.
>
> 2) If you are moving from one minor release to another then it would be
> better to use the latest minor release version to get any bug fixes.
>
> Going backwards from newer version to older version:
>
> This is not supported.
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: insert column monetary type ver 2

2021-11-22 Thread Дмитрий Иванов
About the locale, I wanted to add a couple of lines. I noticed a strange
behavior of the currency symbol.
The EDB installer, when specifying the Russian locale, sets the following
locale parameters:
(PostgreSQL 12.9, compiled by Visual C++ build 1914, 64-bit)
(Encoding UTF8)
lc_messages = 'Russian_Russia.1251' # locale for system error message
# strings
lc_monetary = 'Russian_Russia.1251' # locale for monetary formatting
lc_numeric = 'Russian_Russia.1251' # locale for number formatting
lc_time = 'Russian_Russia.1251' # locale for time formatting
In this case, the data of the financial type is not correctly displayed:
1 238,23 ?

If you set the locale:
lc_messages = 'ru_RU.UTF-8' # locale for system error message
# strings
lc_monetary = 'ru_RU.UTF-8' # locale for monetary formatting
lc_numeric = 'ru_RU.UTF-8' # locale for number formatting
lc_time = 'ru_RU.UTF-8' # locale for time formatting
The data of the financial type is displayed correctly:
1 238,23 ₽
--
Regards, Dmitry!


Mon, Nov 22, 2021 at 20:17, Tom Lane :

> Ron  writes:
> > On 11/21/21 9:35 PM, Rob Sargent wrote:
> >> Some of us like it as an easy formatter trick for reports
>
> > to_char() can do the same, no?
>
> to_char doesn't have adequate logic for locale-specific monetary
> formatting.
> You can get it to emit a locale-specific currency symbol, but it has no
> clue whether that should go before or after the value.  It knows nothing
> of other locale-specific details, such as possibly using parens in place
> of a minus sign.  Also, the POSIX API allows monetary decimal point and
> thousands separators to be different from the numeric ones that to_char
> knows about.  (I have no idea which locales use that, but I doubt they'd
> have put in that complication without need.)
>
> regards, tom lane
>
>
>


Re: pg_dump insert column GENERATED

2021-11-22 Thread Дмитрий Иванов
It seems to me that you are right all round (I gave up COPY because of
problems with the MONEY type):

sudo /usr/lib/postgresql/14/bin/pg_dump --file
"/home/dismay/uchet/object.sql" --host "server" --port "5999" --username
"back" --no-password --verbose --format=p --quote-all-identifiers
--encoding="UTF8" --table "bpd".object --dbname "Uchet"
--
-- PostgreSQL database dump
--

-- Dumped from database version 12.9
-- Dumped by pg_dump version 14.1 (Debian 14.1-1.pgdg110+1)

-- Started on 2021-11-22 20:21:59 +05

SET statement_timeout = 0;
SET lock_timeout = 0;
SET idle_in_transaction_session_timeout = 0;
SET client_encoding = 'UTF8';
SET standard_conforming_strings = on;
SELECT pg_catalog.set_config('search_path', '', false);
SET check_function_bodies = false;
SET xmloption = content;
SET client_min_messages = warning;
SET row_security = off;

SET default_tablespace = '';

SET default_table_access_method = "heap";

--
-- TOC entry 353 (class 1259 OID 94786)
-- Name: object; Type: TABLE; Schema: bpd; Owner: IvanovDU
--

CREATE TABLE "bpd"."object" (
"id" bigint DEFAULT
"nextval"('"bpd"."object_general_id_seq"'::"regclass") NOT NULL,
"id_class" bigint NOT NULL,
"id_position" bigint DEFAULT '-1'::integer NOT NULL,
"bquantity" numeric NOT NULL,
"id_position_root" bigint NOT NULL,
"id_conception" bigint NOT NULL,
"barcode_unit" bigint DEFAULT 0 NOT NULL,
"id_unit_conversion_rule" integer NOT NULL,
"timestamp" timestamp without time zone DEFAULT LOCALTIMESTAMP NOT NULL,
"on_freeze" boolean DEFAULT false NOT NULL,
"timestamp_class" timestamp without time zone DEFAULT LOCALTIMESTAMP(3)
NOT NULL,
"name" character varying(255) NOT NULL,
"id_class_root" bigint NOT NULL,
"id_group" bigint NOT NULL,
"id_group_root" bigint NOT NULL,
"id_object_carrier" bigint DEFAULT '-1'::integer NOT NULL,
"desc" character varying(2044) DEFAULT 'н/д'::character varying NOT
NULL,
"id_class_prop_object_carrier" bigint DEFAULT '-1'::integer NOT NULL,
"id_pos_temp_prop" bigint DEFAULT '-1'::integer NOT NULL,
"mc" numeric DEFAULT 0 NOT NULL,
"is_inside" boolean GENERATED ALWAYS AS ((("id_object_carrier" > 0) OR
("id_pos_temp_prop" >= 0))) STORED,
CONSTRAINT "check_self_integration" CHECK (("id" <>
"id_object_carrier"))
);
ALTER TABLE "bpd"."object" OWNER TO "IvanovDU";
--
-- TOC entry 5225 (class 0 OID 94786)
-- Dependencies: 353
-- Data for Name: object; Type: TABLE DATA; Schema: bpd; Owner: IvanovDU
--
COPY "bpd"."object" ("id", "id_class", "id_position", "bquantity",
"id_position_root", "id_conception", "barcode_unit",
"id_unit_conversion_rule", "timestamp", "on_freeze", "timestamp_class",
"name", "id_class_root", "id_group", "id_group_root", "id_object_carrier",
"desc", "id_class_prop_object_carrier", "id_pos_temp_prop", "mc") FROM
stdin;
51253 1015 461 1 461 84 202512530 14 2021-11-14 08:40:31.381 f
2021-02-19 11:01:28.402 NFC метка самоклеющаяся 1013 138 138 -1 -1 -1
1

*
sudo /usr/lib/postgresql/14/bin/pg_dump --file
"/home/dismay/uchet/object2.sql" --host "server" --port "5999" --username
"back" --no-password --verbose --format=p --quote-all-identifiers
--column-inserts --inserts --encoding="UTF8" --table "bpd".object --dbname
"Uchet"
--
-- PostgreSQL database dump
--

-- Dumped from database version 12.9
-- Dumped by pg_dump version 14.1 (Debian 14.1-1.pgdg110+1)

-- Started on 2021-11-22 20:24:31 +05

SET statement_timeout = 0;
SET lock_timeout = 0;
SET idle_in_transaction_session_timeout = 0;
SET client_encoding = 'UTF8';
SET standard_conforming_strings = on;
SELECT pg_catalog.set_config('search_path', '', false);
SET check_function_bodies = false;
SET xmloption = content;
SET client_min_messages = warning;
SET row_security = off;

SET default_tablespace = '';

SET default_table_access_method = "heap";

--
-- TOC entry 353 (class 1259 OID 94786)
-- Name: object; Type: TABLE; Schema: bpd; Owner: IvanovDU
--

CREATE TABLE "bpd"."object" (
"id" bigint DEFAULT
"nextval"('"bpd"."object_general_id_seq"'::"regclass") NOT NULL,
"id_class" bigint NOT NULL,
"id_position" bigint DEFAULT '-1'::integer NOT NULL,
"bquantity" numeric NOT NULL,
"id_position_root" bigint NOT NULL,
"id_conception" bigint NOT NULL,
"barcode_unit" bigint DEFAULT 0 NOT NULL,
"id_unit_conversion_rule" integer NOT NULL,
"timestamp" timestamp without time zone DEFAULT LOCALTIMESTAMP NOT NULL,
"on_freeze" boolean DEFAULT false NOT NULL,
"timestamp_class" timestamp without time zone DEFAULT LOCALTIMESTAMP(3)
NOT NULL,
"name" character varying(255) NOT NULL,
"id_class_root" bigint NOT NULL,
"id_group" bigint NOT NULL,
"id_group_root" bigint NOT NULL,
"id_object_carrier" bigint DEFAULT '-1'::integer NOT NULL,
"desc" 

Re: pg_dump insert column GENERATED

2021-11-22 Thread Дмитрий Иванов
Рад помочь!
--
С уважением, Дмитрий!


Mon, Nov 22, 2021 at 19:55, Tom Lane :

> Дмитрий Иванов
> writes:
> > Uchet=# select attname, attgenerated from pg_attribute
> > Uchet-# where attrelid = 'bpd.object'::regclass and attnum > 0;
> > attname| attgenerated
> > ---+--
> >  id|
> >  id_class  |
> >  id_position   |
> >  pg.dropped.4  |
> >  pg.dropped.5  |
> >  pg.dropped.6  |
> >  bquantity |
> >  pg.dropped.8  |
> >  pg.dropped.9  |
> >  id_position_root  |
> >  id_conception |
> >  barcode_unit  |
> >  id_unit_conversion_rule   |
> >  pg.dropped.14 |
> >  timestamp |
> >  on_freeze |
> >  timestamp_class   |
> >  name  |
> >  id_class_root |
> >  id_group  |
> >  id_group_root |
> >  id_object_carrier |
> >  desc  |
> >  pg.dropped.24 |
> >  pg.dropped.25 |
> >  pg.dropped.26 |
> >  id_class_prop_object_carrier  |
> >  id_pos_temp_prop  |
> >  pg.dropped.29 |
> >  mc|
> >  is_inside | s
>
> Hah ... that gave me the clue I needed.  If there are dropped
> column(s) before the GENERATED one, pg_dump gets it wrong ---
> but only in --inserts mode, not the default COPY mode, which
> no doubt explains why nobody noticed.  There is code in there
> to inject DEFAULT, but it must be indexing the flag array wrong.
>
> Will fix, thanks for the report!
>
> regards, tom lane
>
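Tom's diagnosis suggests a minimal reproduction: a dropped column ahead of the GENERATED one shifts the flag-array indexing that injects DEFAULT in --inserts mode (hypothetical table; COPY mode is unaffected):

```sql
CREATE TABLE t (
    a      int,
    victim int,
    gen    int GENERATED ALWAYS AS (a * 2) STORED
);
ALTER TABLE t DROP COLUMN victim;  -- leaves a pg.dropped.2 slot before gen
INSERT INTO t (a) VALUES (21);
-- On affected versions, pg_dump --inserts now emits the stored value of
-- gen instead of DEFAULT, and restoring that INSERT fails.
```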


Re: pg_dump insert column GENERATED

2021-11-22 Thread Дмитрий Иванов
Got it.

>
> >You are going to need to provide more information about the above:
> >1) Define regular results.
> >2) The query used.
> >3) The outcome vs what you expected.
>
I will continue to experiment. The question is which option is better
(this would reduce the number of options):
pg_dump from the source server and
pg_restore on the receiver server;
or
pg_dump from the receiver server and
pg_restore on the receiver server;


Re: pg_dump insert column GENERATED

2021-11-21 Thread Дмитрий Иванов
Ok, I'll take a look.
I recreated the is_inside column yesterday, but it didn't make any
difference.
Uchet=# select attname, attgenerated from pg_attribute
Uchet-# where attrelid = 'bpd.object'::regclass and attnum > 0;
attname| attgenerated
---+--
 id|
 id_class  |
 id_position   |
 pg.dropped.4  |
 pg.dropped.5  |
 pg.dropped.6  |
 bquantity |
 pg.dropped.8  |
 pg.dropped.9  |
 id_position_root  |
 id_conception |
 barcode_unit  |
 id_unit_conversion_rule   |
 pg.dropped.14 |
 timestamp |
 on_freeze |
 timestamp_class   |
 name  |
 id_class_root |
 id_group  |
 id_group_root |
 id_object_carrier |
 desc  |
 pg.dropped.24 |
 pg.dropped.25 |
 pg.dropped.26 |
 id_class_prop_object_carrier  |
 id_pos_temp_prop  |
 pg.dropped.29 |
 mc|
 is_inside | s

Sun, Nov 21, 2021 at 21:24, Tom Lane :

> Дмитрий Иванов
> writes:
> >> So did you do an ALTER TABLE ... GENERATED ALWAYS AS on bpd.object in
> >> the Postgres 12 version of the database?
>
> > I think you're right, I added later.
>
> Hmm, that's suggestive, but I tried making a table that way and still did
> not see any pg_dump misbehavior.
>
> It might be useful to see the results of this query on the v12 server:
>
> select attname, attgenerated from pg_attribute
> where attrelid = 'bpd.object'::regclass and attnum > 0;
>
> We should see attgenerated = 's' for the generated column, but
> maybe we don't?
>
> regards, tom lane
>


Re: insert column monetary type ver 2

2021-11-21 Thread Дмитрий Иванов
Thank you, this is the solution I was referring to.

Sun, Nov 21, 2021 at 22:46, vincent.vey...@libremen.org <
vincent.vey...@libremen.org>:

> On Sat, 20 Nov 2021 19:32:54 +0500
> Дмитрий Иванов  wrote:
>
> > I saw it. I’m not tied down hard on this type. It's just that it exists
> and
> > I decided to include it in my development. Thanks.
> >
>
>
> Consider storing integers; it's easy and fast, and all you need to do is
> display your numbers divided by 100, using to_char() :
>
> select to_char(11/100::numeric(10,2), '999G999D99');
>to_char
> -
> 1 000,01
>
>
> --
> https://marica.fr
> Software for managing legal disputes, contracts, and insurance claims
> Best regards, Vincent Veyron
>
>
>
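Vincent's integers-plus-to_char idea in table form (a sketch; names and the currency unit are illustrative, and the G/D separators follow lc_numeric):

```sql
CREATE TABLE price (
    id      bigserial PRIMARY KEY,
    kopecks bigint NOT NULL   -- exact integer storage, no money type
);
INSERT INTO price (kopecks) VALUES (123823);

-- Format only on output.
SELECT to_char(kopecks / 100.0, 'FM9G999G990D00') AS display
FROM price;   -- e.g. 1 238,23 under a ru_RU lc_numeric
```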


Re: pg_dump insert column GENERATED

2021-11-21 Thread Дмитрий Иванов
Yes and yes.
I don't get regular results in my experiments. I filled in some of the
tables this morning, but now I can't do it again.
> INSERT INTO bpd.plan_calendar (create in version 12)
> OK
I can't do it again.

>If I am following, the bpd.object table was originally created in
>Postgres 10, correct?
Yes

bpd.plan_calendar was created in the Postgres 12 instance of the
database with GENERATED ALWAYS AS, correct?
Yes

>Postgres 10 did not have GENERATED ALWAYS AS, that appeared in Postgres 12.
>So did you do an ALTER TABLE ... GENERATED ALWAYS AS on bpd.object in
>the Postgres 12 version of the database?
I think you're right, I added later.

>Means was the instance the standby in a replication setup that was then
>moved up(promoted) to the primary.

wal_level = replica # minimal, replica, or logical
Never been in a replication relationship.

Are the below from the bpd.sql file?
Yes
Why does pg_dump insert data into the generated columns?
I entered the data manually via export/import, excluding the problematic
field because its value will be calculated when inserted.

>
>


Re: insert column monetary type ver 2

2021-11-20 Thread Дмитрий Иванов
I would love to discuss this topic, but my English won't allow it.
I understand the type exists, but the best way to avoid mistakes when
working with it is not to work with it at all. Thank you.

Sun, Nov 21, 2021 at 09:02, Ron :

> On 11/20/21 11:01 AM, Tom Lane wrote:
> > Дмитрий Иванов
> writes:
> >> I can't figure out the problem.  Digit group group separator causes an
> >> insertion error, what should I do?
> >> lc_monetary = 'ru_RU.UTF-8'
> > On my RHEL8 (moderately recent glibc) platform, that locale's
> > mon_thousands_sep symbol is not a plain space but "\342\200\257":
> >
> > p *lconvert
> > $3 = {decimal_point = 0x23ef120 ".", thousands_sep = 0x23ef140 "",
> >grouping = 0x23ef160 "", int_curr_symbol = 0x23ef1a0 "RUB ",
> >currency_symbol = 0x23ef1c0 "\342\202\275",
> >mon_decimal_point = 0x23ef1e0 ",",
> >mon_thousands_sep = 0x23ef200 "\342\200\257",
> >mon_grouping = 0x23ef220 "\003\003", positive_sign = 0x23ef240 "",
> >negative_sign = 0x23ef260 "-", int_frac_digits = 2 '\002',
> >frac_digits = 2 '\002', p_cs_precedes = 0 '\000', p_sep_by_space = 1
> '\001',
> >n_cs_precedes = 0 '\000', n_sep_by_space = 1 '\001', p_sign_posn = 1
> '\001',
> >n_sign_posn = 1 '\001', int_p_cs_precedes = 0 '\000',
> >int_p_sep_by_space = 0 '\000', int_n_cs_precedes = 0 '\000',
> >int_n_sep_by_space = 0 '\000', int_p_sign_posn = 0 '\000',
> >int_n_sign_posn = 0 '\000'}
> >
> > A quick lookup later, that's U+202F or "narrow no-break space".
> >
> > cash_in is picky about this, and won't take plain ASCII space as
> > a substitute.  Not sure if it should.
>
> It probably should, based on the Robustness Principle: "be conservative in
> what you send, be liberal in what you accept".
>
> --
> Angular momentum makes the world go 'round.
>
>
>


Re: pg_dump insert column GENERATED

2021-11-20 Thread Дмитрий Иванов
WHEN 1 THEN 'ПН'::character varying
WHEN 2 THEN 'ВТ'::character varying
WHEN 3 THEN 'СР'::character varying
WHEN 4 THEN 'ЧТ'::character varying
WHEN 5 THEN 'ПТ'::character varying
WHEN 6 THEN 'СБ'::character varying
WHEN 7 THEN 'ВС'::character varying
ELSE 'ПН'::character varying
END) STORED,
work_week_day_name_full character varying COLLATE pg_catalog."default"
GENERATED ALWAYS AS (
CASE date_part('isodow'::text, work_date)
WHEN 1 THEN 'Понедельник'::character varying
WHEN 2 THEN 'Вторник'::character varying
WHEN 3 THEN 'Среда'::character varying
WHEN 4 THEN 'Четверг'::character varying
WHEN 5 THEN 'Пятница'::character varying
WHEN 6 THEN 'Суббота'::character varying
WHEN 7 THEN 'Воскресенье'::character varying
ELSE 'Понедельник'::character varying
END) STORED,
work_year_week integer GENERATED ALWAYS AS (date_part('week'::text,
work_date)) STORED,
week40_week numeric,
week39_week numeric,
week36_week numeric,
week35_week numeric,
week33_week numeric,
week30_week numeric,
week24_week numeric,
week20_week numeric,
week18_week numeric,
CONSTRAINT plan_calendar_pkey PRIMARY KEY (id),
CONSTRAINT unique_plan_calendar UNIQUE (work_date)
)

TABLESPACE pg_default;

Sun, Nov 21, 2021 at 06:38, Adrian Klaver :

> On 11/20/21 17:11, Дмитрий Иванов wrote:
> > Yes and yes.
> > I ended up using the pg_dump of the receiving server.
> >
> > sudo /usr/lib/postgresql/14/bin/pg_dump --file
> > "/home/dismay/uchet/Uchet.backup" --host "server" --port "5999"
> > --username "back" --no-password --verbose --format=c
> > --quote-all-identifiers --blobs  --column-inserts --inserts --create
> > --disable-triggers  --encoding="UTF8" "Uchet"
> >
> > sudo /usr/lib/postgresql/14/bin/pg_restore --host "127.0.0.1" --port
> > "5432" --username "back" --no-password --dbname "Uchet"
> > --disable-triggers --format=c --create --verbose
> > "/home/dismay/uchet/Uchet.backup"
> >
> > sudo /usr/lib/postgresql/14/bin/pg_restore --host "127.0.0.1" --port
> > "5432" --username "back" --no-password --dbname "Uchet"
> > --disable-triggers --table="bpd.object" --format=c --verbose
> > "/home/dismay/uchet/Uchet.backup"
> >
> > Receiving server:
> > PostgreSQL 14.1 (Debian 14.1-1.pgdg110+1) on x86_64-pc-linux-gnu,
> > compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
> >
> > Server source:
> > PostgreSQL 12.9, compiled by Visual C++ build 1914, 64-bit
> > EDB assembly installed from "Application Stack Builder"
> >
> > Sun, Nov 21, 2021 at 00:06, Adrian Klaver
> Hmm. I cannot replicate, though in my case both servers(12.9, 14.1) are
> one same Linux machine.
>
> What is the history of the database in the 12.0 instance?
>
> Was it upgraded from another instance?
>
> If so dump/restore or pg_upgrade?
>
> Is it a promoted replica?
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: pg_dump insert column GENERATED

2021-11-20 Thread Дмитрий Иванов
Yes and yes.
I ended up using the pg_dump of the receiving server.

sudo /usr/lib/postgresql/14/bin/pg_dump --file
"/home/dismay/uchet/Uchet.backup" --host "server" --port "5999" --username
"back" --no-password --verbose --format=c --quote-all-identifiers --blobs
 --column-inserts --inserts --create --disable-triggers  --encoding="UTF8"
"Uchet"

sudo /usr/lib/postgresql/14/bin/pg_restore --host "127.0.0.1" --port "5432"
--username "back" --no-password --dbname "Uchet" --disable-triggers
--format=c --create --verbose "/home/dismay/uchet/Uchet.backup"

sudo /usr/lib/postgresql/14/bin/pg_restore --host "127.0.0.1" --port "5432"
--username "back" --no-password --dbname "Uchet" --disable-triggers
--table="bpd.object" --format=c --verbose "/home/dismay/uchet/Uchet.backup"

Receiving server:
PostgreSQL 14.1 (Debian 14.1-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled
by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit

Server source:
PostgreSQL 12.9, compiled by Visual C++ build 1914, 64-bit
EDB assembly installed from "Application Stack Builder"

Sun, Nov 21, 2021 at 00:06, Adrian Klaver :

> On 11/20/21 10:33, Дмитрий Иванов wrote:
> > I don't know.
> > sudo /usr/lib/postgresql/14/bin/pg_dump --file
> > "/home/dismay/uchet/Uchet.backup" --host "server" --port "5999"
> > --username "back" --no-password --verbose --format=c
> > --quote-all-identifiers --blobs  --column-inserts --inserts --create
> > --disable-triggers  --encoding="UTF8" "Uchet"
> > I used the pg_dump version of the receiving server, but the pg_dump of
> > the source server initially gave almost the same result, a COPY error of
> > the empty table
> > PostgreSQL 12.9, compiled by Visual C++ build 1914, 64-bit
> > to
> > PostgreSQL 14.1 (Debian 14.1-1.pgdg110+1) on x86_64-pc-linux-gnu,
> > compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
>
> To be clear you used the Postgres 14 version of pg_dump to dump from a
> Postgres 12 version database, correct?
>
> What version of pg_restore did you use to restore to the Postgres 14
> database?
>
> Where did you install the Postgres 12.9 version package from?
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: pg_dump insert column GENERATED

2021-11-20 Thread Дмитрий Иванов
I don't know.
sudo /usr/lib/postgresql/14/bin/pg_dump --file
"/home/dismay/uchet/Uchet.backup" --host "server" --port "5999" --username
"back" --no-password --verbose --format=c --quote-all-identifiers --blobs
 --column-inserts --inserts --create --disable-triggers  --encoding="UTF8"
"Uchet"
I used the pg_dump from the receiving server, but the pg_dump from the
source server initially gave almost the same result: a COPY error on an
empty table, when going from
PostgreSQL 12.9, compiled by Visual C++ build 1914, 64-bit
to
PostgreSQL 14.1 (Debian 14.1-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled
by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit



Sat, Nov 20, 2021 at 22:00, Adrian Klaver :

> On 11/20/21 04:27, Дмитрий Иванов wrote:
> > Good afternoon.
> > Why does pg_dump generate an insertion script in the generated columns?
> > This causes insertion errors.
> > PostgreSQL 14.1 (Debian 14.1-1.pgdg110+1) on x86_64-pc-linux-gnu,
> > compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
> > is_inside - column should be excluded how to do it?
>
> Actually I think it should be DEFAULT.
>
> What is your pg_dump version?
>
> It shouldn't unless you are specifying --inserts.
>
> What is the full pg_dump command you are using?
>
> What are the errors?
>
> >
> > INSERT INTO bpd.object (id, id_class, id_position, bquantity,
> > id_position_root, id_conception, barcode_unit, id_unit_conversion_rule,
> > "timestamp", on_freeze, timestamp_class, name, id_class_root, id_group,
> > id_group_root, id_object_carrier, "desc", id_class_prop_object_carrier,
> > id_pos_temp_prop, is_inside, mc) VALUES (51253, 1015, 461, 1, 461, 84,
> > 202512530, 14, '2021-11-14 08:40:31.381', false, '2021-02-19
> > 11:01:28.402', 'NFC метка самоклеющаяся', 1013, 138, 138, -1, '', -1,
> > -1, false, 1);
> >
>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: pg_dump insert column GENERATED

2021-11-20 Thread Дмитрий Иванов
Ok, I see.
This is actually an interesting question. I don't understand which approach
to use. I am using pg_dump and pg_restore from the receiving server. It
might make sense to use the source server's pg_dump and the receiving
server's pg_restore.


Sat, Nov 20, 2021 at 22:33, Tom Lane :

> Adrian Klaver  writes:
> > On 11/20/21 04:27, Дмитрий Иванов wrote:
> >> Why does pg_dump generate an insertion script in the generated columns?
>
> > Actually I think it should be DEFAULT.
>
> It should be, and it is when I try this example.  I get output like
>
> --
> -- Data for Name: object; Type: TABLE DATA; Schema: bpd; Owner: postgres
> --
>
> INSERT INTO bpd.object (id, id_class, id_position, bquantity,
> id_position_root, id_conception, barcode_unit, id_unit_conversion_rule,
> "timestamp", on_freeze, timestamp_class, name, id_class_root, id_group,
> id_group_root, id_object_carrier, "desc", id_class_prop_object_carrier,
> id_pos_temp_prop, is_inside, mc) VALUES (51253, 1015, 461, 1, 461, 84,
> 202512530, 14, '2021-11-14 08:40:31.381', false, '2021-02-19
> 11:01:28.402', 'NFC метка самоклеющаяся', 1013, 138, 138, -1, '', -1, -1,
> DEFAULT, 1);
>
> I wonder what version of pg_dump is actually being used there.
>
> regards, tom lane
>
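The reason DEFAULT is the correct thing for pg_dump to emit here: a STORED generated column rejects any explicit value on insert (minimal illustration, hypothetical table):

```sql
CREATE TABLE demo (
    a       int,
    doubled int GENERATED ALWAYS AS (a * 2) STORED
);

INSERT INTO demo (a, doubled) VALUES (1, DEFAULT);  -- accepted
INSERT INTO demo (a, doubled) VALUES (1, 2);
-- ERROR:  cannot insert a non-DEFAULT value into column "doubled"
```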


Re: insert column monetary type ver 2

2021-11-20 Thread Дмитрий Иванов
I saw it. I'm not strongly tied to this type. It's just that it exists and
I decided to include it in my development. Thanks.

Sat, Nov 20, 2021 at 19:18, Josef Šimánek :

> Sat, Nov 20, 2021 at 13:45, Дмитрий Иванов
>  wrote:
> >
> > Good afternoon.
> > Sorry about the first example.
> > I can't figure out the problem. The digit group separator causes an
> insertion error, what should I do?
> > lc_monetary = 'ru_RU.UTF-8'
> > The digit group separator is a space.
> > Financial type format: 7,649.00 ₽
> > DELETE FROM ONLY bpd.class_prop_user_small_val
> > WHERE id_class_prop = 74502;
> > Initial version of the data:
> > INSERT INTO bpd.class_prop_user_small_val (id_class_prop,
> timestamp_class, val_int, val_boolean, val_varchar, val_real, val_numeric,
> val_date, val_time, val_interval, val_timestamp, val_money, val_double,
> max_val, round, id_class, id_data_type, inheritance, val_bigint, min_val,
> max_on, min_on, round_on) VALUES (74502, '2021-09-08 10: 05:12. 618', NULL,
> NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, '7 649.00 ₽', NULL, -1, 2,
> 6161, 6, false, 0, -1, false, false, true);
> > --
> > ERROR: invalid input syntax for type money: "7 649,00 ₽" LINE 6: ...,
> NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, '7 649,00 ...
> > Corrected
> > INSERT INTO bpd. class_prop_user_small_val (id_class_prop,
> timestamp_class, val_int, val_boolean, val_varchar, val_real, val_numeric,
> val_date, val_time, val_interval, val_timestamp, val_money, val_double,
> max_val, round, id_class, id_data_type, inheritance, val_bigint, min_val,
> max_on, min_on, round_on) VALUES (74502, '2021-09-08 10: 05:12. 618', NULL,
> NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, '7649.00 ₽', NULL, -1, 2,
> 6161, 6, false, 0, -1, false, false, true);
> > ---
> > INSERT 0 1
> >
> > Why does a financial type conversion with a legal regional digit group
> separator cause an error?
>
> I'm not sure what's the problem on your side. But definitely check
> this link https://wiki.postgresql.org/wiki/Don't_Do_This#Don.27t_use_money
> explaining why money is not recommended to be used. Maybe you're
> facing one of the known problems.
>


insert column monetary type ver 2

2021-11-20 Thread Дмитрий Иванов
Good afternoon.
Sorry about the first example.
I can't figure out the problem. The digit group separator causes an
insertion error; what should I do?
lc_monetary = 'ru_RU.UTF-8'
The digit group separator is a space.
Financial type format: 7,649.00 ₽
DELETE FROM ONLY bpd.class_prop_user_small_val
WHERE id_class_prop = 74502;
Initial version of the data:
INSERT INTO bpd.class_prop_user_small_val (id_class_prop, timestamp_class,
val_int, val_boolean, val_varchar, val_real, val_numeric, val_date,
val_time, val_interval, val_timestamp, val_money, val_double, max_val,
round, id_class, id_data_type, inheritance, val_bigint, min_val, max_on,
min_on, round_on) VALUES (74502, '2021-09-08 10:05:12.618', NULL, NULL,
NULL, NULL, NULL, NULL, NULL, NULL, NULL, '7 649.00 ₽', NULL, -1, 2, 6161,
6, false, 0, -1, false, false, true);
--
ERROR: invalid input syntax for type money: "7 649,00 ₽" LINE 6: ..., NULL,
NULL, NULL, NULL, NULL, NULL, NULL, NULL, '7 649,00 ...
Corrected
INSERT INTO bpd.class_prop_user_small_val (id_class_prop, timestamp_class,
val_int, val_boolean, val_varchar, val_real, val_numeric, val_date,
val_time, val_interval, val_timestamp, val_money, val_double, max_val,
round, id_class, id_data_type, inheritance, val_bigint, min_val, max_on,
min_on, round_on) VALUES (74502, '2021-09-08 10:05:12.618', NULL, NULL,
NULL, NULL, NULL, NULL, NULL, NULL, NULL, '7649.00 ₽', NULL, -1, 2, 6161,
6, false, 0, -1, false, false, true);
---
INSERT 0 1

Why does a financial type conversion with a legal regional digit group
separator cause an error?
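The failing and working inputs above can be reduced to a minimal sketch (assuming lc_monetary = 'ru_RU.UTF-8'; exactly which separators money accepts depends on the locale's monetary settings):

```sql
SHOW lc_monetary;                 -- 'ru_RU.UTF-8' in this setup
SELECT '7649.00'::money;          -- a plain number is accepted
-- SELECT '7 649,00 ₽'::money;   -- may fail: money input is strict about
--                               -- grouping and decimal characters
-- A locale-independent route is to go through numeric:
SELECT 7649.00::numeric::money;
```

As the linked wiki page notes, numeric is usually the safer column type; it sidesteps locale-dependent parsing entirely.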


pg_dump insert column GENERATED

2021-11-20 Thread Дмитрий Иванов
Good afternoon.
Why does pg_dump include generated columns in the INSERT statements it
produces? This causes insertion errors.
PostgreSQL 14.1 (Debian 14.1-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled
by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
is_inside is a generated column and should be excluded; how can I do that?

INSERT INTO bpd.object (id, id_class, id_position, bquantity,
id_position_root, id_conception, barcode_unit, id_unit_conversion_rule,
"timestamp", on_freeze, timestamp_class, name, id_class_root, id_group,
id_group_root, id_object_carrier, "desc", id_class_prop_object_carrier,
id_pos_temp_prop, is_inside, mc) VALUES (51253, 1015, 461, 1, 461, 84,
202512530, 14, '2021-11-14 08:40:31.381', false, '2021-02-19
11:01:28.402', 'NFC метка самоклеющаяся', 1013, 138, 138, -1, '', -1, -1,
false, 1);

Table:
CREATE TABLE IF NOT EXISTS bpd.object
(
    id bigint NOT NULL DEFAULT nextval('bpd.object_general_id_seq'::regclass),
    id_class bigint NOT NULL,
    id_position bigint NOT NULL DEFAULT '-1'::integer,
    bquantity numeric NOT NULL,
    id_position_root bigint NOT NULL,
    id_conception bigint NOT NULL,
    barcode_unit bigint NOT NULL DEFAULT 0,
    id_unit_conversion_rule integer NOT NULL,
    "timestamp" timestamp without time zone NOT NULL DEFAULT LOCALTIMESTAMP,
    on_freeze boolean NOT NULL DEFAULT false,
    timestamp_class timestamp without time zone NOT NULL DEFAULT LOCALTIMESTAMP(3),
    name character varying(255) COLLATE pg_catalog."default" NOT NULL,
    id_class_root bigint NOT NULL,
    id_group bigint NOT NULL,
    id_group_root bigint NOT NULL,
    id_object_carrier bigint NOT NULL DEFAULT '-1'::integer,
    "desc" character varying(2044) COLLATE pg_catalog."default" NOT NULL DEFAULT 'н/д'::character varying,
    id_class_prop_object_carrier bigint NOT NULL DEFAULT '-1'::integer,
    id_pos_temp_prop bigint NOT NULL DEFAULT '-1'::integer,
    is_inside boolean GENERATED ALWAYS AS (((id_object_carrier > 0) OR (id_pos_temp_prop > 0))) STORED,
    mc numeric NOT NULL DEFAULT 0,
    CONSTRAINT object_pkey PRIMARY KEY (id),
    CONSTRAINT unique_id_object_id_object_prop UNIQUE (id, id_class_prop_object_carrier),
    CONSTRAINT lnk_class_snapshot_object FOREIGN KEY (id_class, timestamp_class)
        REFERENCES bpd.class_snapshot (id, "timestamp") MATCH FULL
        ON UPDATE CASCADE
        ON DELETE CASCADE,
    CONSTRAINT lnk_conception_object FOREIGN KEY (id_conception)
        REFERENCES bpd.conception (id) MATCH FULL
        ON UPDATE CASCADE
        ON DELETE CASCADE,
    CONSTRAINT lnk_position_object FOREIGN KEY (id_position)
        REFERENCES bpd."position" (id) MATCH FULL
        ON UPDATE CASCADE
        ON DELETE CASCADE,
    CONSTRAINT check_self_integration CHECK (id <> id_object_carrier)
)
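The rule pg_dump has to respect can be reproduced with a minimal table (hypothetical names, PostgreSQL 12+):

```sql
CREATE TABLE t (
    a int,
    b int GENERATED ALWAYS AS (a * 2) STORED
);
INSERT INTO t (a, b) VALUES (1, DEFAULT);  -- accepted: DEFAULT is the only
                                           -- value allowed for column b
INSERT INTO t (a, b) VALUES (1, 2);        -- rejected: cannot insert a
                                           -- non-DEFAULT value into b
```

So a dump that lists is_inside in the column list must supply DEFAULT for it, which is what current pg_dump versions do.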


insert column monetary type

2021-11-20 Thread Дмитрий Иванов
Good afternoon.
I can't figure out the problem. The digit group separator causes an
insertion error; what should I do?
lc_monetary = 'ru_RU.UTF-8'
The digit group separator is a space.
Financial type format: 7,649.00 ₽
DELETE FROM ONLY bpd.class_prop_user_small_val
WHERE id_class_prop = 74502;
Initial version of the data:
INSERT INTO bpd.class_prop_user_small_val (id_class_prop, timestamp_class,
val_int, val_boolean, val_varchar, val_real, val_numeric, val_date,
val_time, val_interval, val_timestamp, val_money, val_double, max_val,
round, id_class, id_data_type, inheritance, val_bigint, min_val, max_on,
min_on, round_on) VALUES (74502, '2021-09-08 10:05:12.618', NULL, NULL,
NULL, NULL, NULL, NULL, NULL, NULL, NULL, '7,649.00 ₽', NULL, -1, 2, 6161,
6, false, 0, -1, false, false, true);
--
ERROR: invalid input syntax for type money: "7,649.00 ₽".
LINE 6: ..., NULL, NULL, NULL, NULL, NULL, NULL, NULL, '7,649.00 ...

Corrected
INSERT INTO bpd.class_prop_user_small_val (id_class_prop, timestamp_class,
val_int, val_boolean, val_varchar, val_real, val_numeric, val_date,
val_time, val_interval, val_timestamp, val_money, val_double, max_val,
round, id_class, id_data_type, inheritance, val_bigint, min_val, max_on,
min_on, round_on) VALUES (74502, '2021-09-08 10:05:12.618', NULL, NULL,
NULL, NULL, NULL, NULL, NULL, NULL, NULL, '7649.00 ₽', NULL, -1, 2, 6161,
6, false, 0, -1, false, false, true);
---
INSERT 0 1

Why does a financial type conversion with a legal regional digit group
separator cause an error?


Re: pg_restore depending on user functions

2021-11-15 Thread Дмитрий Иванов
Yes, it is.
I am still extracting the data as promised, but I think I see a pattern:
a "chicken or egg" problem.
To work with the Npgsql library, I created a cast. Casts are created after
the views in which I use them.
Here is the order:
279: CREATE TYPE bpd.cclass_prop
4646: CREATE VIEW bpd.vclass_prop
4784: CREATE FUNCTION bpd.int_cast_vclass_prop_to_cclass_prop(ivclass_prop
bpd.vclass_prop) RETURNS bpd.cclass_prop
4803: CREATE VIEW bpd.int_class_ext AS
SELECT cp.id_class AS id,
array_agg((cp.*)::bpd.cclass_prop ORDER BY cp.sort) AS property_list
FROM bpd.vclass_prop cp
GROUP BY cp.id_class;
89428: CREATE CAST (bpd.vclass_prop AS bpd.cclass_prop) WITH FUNCTION
bpd.int_cast_vclass_prop_to_cclass_prop(bpd.vclass_prop);
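The ordering problem can be sketched with hypothetical names: the later view body uses the cast, but pg_dump emits CREATE CAST after all views.

```sql
CREATE TYPE ctype AS (id integer);
CREATE VIEW v AS SELECT 1 AS id;
CREATE FUNCTION v_to_c(iv v) RETURNS ctype
    LANGUAGE sql AS 'SELECT ROW(iv.id)::ctype';
-- This view needs the cast below to exist already, so it fails on the
-- first restore pass:
CREATE VIEW v_ext AS SELECT (t.*)::ctype AS c FROM v t;
-- ...but pg_dump emits the cast last:
CREATE CAST (v AS ctype) WITH FUNCTION v_to_c(v);
```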

lost views:
"int_class_ext"
"int_doc_category_ext"
"int_doc_file_ext"
"int_doc_link_ext"
"int_object_ext"
"vclass_ext"
"vdocument_ext"
"vobject_general_ext"

пн, 15 нояб. 2021 г. в 20:49, Tom Lane :

> =?UTF-8?B?0JTQvNC40YLRgNC40Lkg0JjQstCw0L3QvtCy?= 
> writes:
> > Is it ok to attach a 5MB data schema or is it not possible? Copy one by
> one
> > to a letter?
>
> Is it smaller if you omit the data (-s switch)?  Shouldn't be relevant
> here.
>
> regards, tom lane
>


Re: pg_restore depending on user functions

2021-11-15 Thread Дмитрий Иванов
Yes, it is.
I'll follow up with the lost functions, views, table definitions, and
dependent objects. It will take some time.
Thanks for the help.

пн, 15 нояб. 2021 г. в 20:49, Tom Lane :

> =?UTF-8?B?0JTQvNC40YLRgNC40Lkg0JjQstCw0L3QvtCy?= 
> writes:
> > Is it ok to attach a 5MB data schema or is it not possible? Copy one by
> one
> > to a letter?
>
> Is it smaller if you omit the data (-s switch)?  Shouldn't be relevant
> here.
>
> regards, tom lane
>


Re: pg_restore depending on user functions

2021-11-15 Thread Дмитрий Иванов
Yes, it is.
I did that (--schema-only), removing the unnecessary definitions.
So there is no need to attach the files?

пн, 15 нояб. 2021 г. в 20:49, Tom Lane :

> =?UTF-8?B?0JTQvNC40YLRgNC40Lkg0JjQstCw0L3QvtCy?= 
> writes:
> > Is it ok to attach a 5MB data schema or is it not possible? Copy one by
> one
> > to a letter?
>
> Is it smaller if you omit the data (-s switch)?  Shouldn't be relevant
> here.
>
> regards, tom lane
>


Re: pg_restore depending on user functions

2021-11-15 Thread Дмитрий Иванов
Thanks for the feedback!
Is it OK to attach a 5 MB schema dump, or is that not allowed? Should I
paste the pieces into the message one by one?
It would be nice if, in the end, I could rebuild the database without
obstacles.
Thank you in advance.
Restore Windows 10 PostgreSQL 12.9, compiled by Visual C++ build 1914,
64-bit -> Debian 11 PostgreSQL 14.1 (Debian 14.1-1.pgdg110+1) on
x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110,
64-bit

sudo /usr/lib/postgresql/14/bin/pg_dump --file
"/home/dismay/uchet/Uchet.backup" --host "server" --port "5999" --username
"back" --no-password --verbose --format=c --quote-all-identifiers --blobs
 --column-inserts --inserts --create --disable-triggers  --encoding="UTF8"
"Uchet"
No errors.

Dropped the database, then recreated it:

CREATE DATABASE "Uchet"
WITH
OWNER = funcowner
TEMPLATE = template0
ENCODING = 'UTF8'
LC_COLLATE = 'ru_RU.UTF-8'
LC_CTYPE = 'ru_RU.UTF-8'
CONNECTION LIMIT = -1;
SET check_function_bodies = false;

sudo /usr/lib/postgresql/14/bin/pg_restore --host "127.0.0.1" --port "5432"
--username "back" --no-password --dbname "Uchet" --disable-triggers
--format=c --create --verbose "/home/dismay/uchet/Uchet.backup"
Errors:
"bpd"."class_prop_user_small_val": money syntax error
"bpd"."class_prop_user_small_val_snapshot": money syntax error
"bpd"."object": error inserting into column "is_inside"
errors ignored: 26603

Visible differences:
                                                    source   receiver
function count                                         711        649
view count                                             125        117
row count, bpd.object                                25769          0
row count, bpd.class_prop_user_small_val             28295      28182
row count, bpd.class_prop_user_small_val_snapshot    33550      33491

lost views:
"int_class_ext"
"int_doc_category_ext"
"int_doc_file_ext"
"int_doc_link_ext"
"int_object_ext"
"vclass_ext"
"vdocument_ext"
"vobject_general_ext"

lost function:
"class_act_base_ext_allowed_by_id_group"
"class_act_ext_allowed_rl1_by_id_position"
"class_act_ext_by_id"
"class_act_ext_by_id_conception_msk_name"
"class_act_ext_by_id_global_prop"
"class_act_ext_by_id_group"
"class_act_ext_by_id_group_msk_name"
"class_act_ext_by_id_parent"
"class_act_ext_by_id_parent_msk_name"
"class_act_ext_by_id_parent_strict_name"
"class_act_ext_by_msk_global_prop"
"class_act_ext_by_msk_global_prop_from_class"
"class_act_ext_by_msk_global_prop_from_group"
"class_act_real_ext_allowed_by_id_group"
"class_act_real_ext_by_id_group"
"class_snapshot_base_ext_by_id_position"
"class_snapshot_ext_by_id"
"class_snapshot_ext_by_id_class"
"class_snapshot_ext_by_id_parent_snapshot"
"class_snapshot_ext_on_object_by_id_parent_snapshot_parent_pos"
"document_ext_by_id"
"document_ext_by_id_category"
"document_ext_by_id_class_prop"
"document_ext_by_id_conception"
"document_ext_by_id_group"
"document_ext_by_id_object"
"document_ext_by_id_object_prop"
"document_ext_by_id_parent"
"document_ext_by_id_pos_temp"
"document_ext_by_id_pos_temp_prop"
"document_ext_by_id_position"
"document_ext_by_id_position_prop"
"document_ext_by_id_user"
"document_ext_by_msk_name_from_category"
"document_ext_by_msk_name_from_conception"
"int_class_ext_prop_by_id_class_array"
"int_object_ext_prop_by_id_object_array"
"object_carrier_ext_by_object_class_full"
"object_ext_by_id"
"object_ext_by_id_class_act"
"object_ext_by_id_class_full"
"object_ext_by_id_class_id_pos"
"object_ext_by_id_class_root"
"object_ext_by_id_class_snapshot"
"object_ext_by_id_class_snapshot_id_pos"
"object_ext_by_id_group"
"object_ext_by_id_group_root"
"object_ext_by_id_object_carrier"
"object_ext_by_id_position"
"object_ext_by_id_position_full"
"object_ext_by_id_prop_data_type"
"object_ext_by_id_prop_enum"
"object_ext_by_id_prop_enum_val"
"object_ext_by_id_unit_conversion_rule"
"object_ext_by_link_object"
"object_ext_by_msk_global_prop"
"object_ext_by_msk_global_prop_from_pos"
"object_ext_by_name"
"object_ext_by_name_id_pos"
"object_ext_carrier_by_msk_global_prop"
"object_object_prop_by_id_object_carrier_ext"
"object_object_prop_by_id_position_carrier_ext"

пн, 15 нояб. 2021 г. в 05:06, Tom Lane :

> =?UTF-8?B?0JTQvNC40YLRgNC40Lkg0JjQstCw0L3QvtCy?= 
> writes:
> > вс, 14 нояб. 2021 г. в 23:46, Tom Lane :
> >> This is not "details", this is an evidence-free assertion.  Please show
> >> a concrete example of problematic functions.
>
> > Ok, I'll do it.
> > [ example ]
>
> This is pretty far from being a self-contained example; I had to guess
> at the definitions of several types that you didn't provide.  Having
> done so, though, the set of functions and types seem to dump and restore
> just fine.  So I still don't see any problem here.
>
> Possibly worth noting is that pg_dump is careful to include this in
> its output script:
>
> SET check_function_bodies = false;
>
> Without that, I can believe that you'd have some trouble restoring
> these functions, since pg_dump doesn't know anything about the
> cross-references appearing in the function bodies.  But with that,
> there don't appear to be any troublesome cross-references here.
>
> regards, tom lane
>
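The effect of that setting can be sketched with two hypothetical functions: with check_function_bodies off, a body that references a not-yet-existing function is accepted and only resolved at call time.

```sql
SET check_function_bodies = false;
-- f_b() does not exist yet, but creation succeeds because the body is
-- not validated:
CREATE FUNCTION f_a() RETURNS int LANGUAGE sql AS 'SELECT f_b()';
RESET check_function_bodies;
CREATE FUNCTION f_b() RETURNS int LANGUAGE sql AS 'SELECT 1';
SELECT f_a();  -- now resolves and returns 1
```

Note this applies to bodies given as string literals; SQL-standard bodies (BEGIN ATOMIC, PostgreSQL 14+) are always parsed against existing objects.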


Re: pg_restore depending on user functions

2021-11-14 Thread Дмитрий Иванов
Ok, I'll do it.
> Сreating function A before function B results in a compilation error.
That was an incorrect assumption on my part. There are no compilation
errors here; the function simply was not restored on the first pass.

вс, 14 нояб. 2021 г. в 23:46, Tom Lane :

> =?UTF-8?B?0JTQvNC40YLRgNC40Lkg0JjQstCw0L3QvtCy?= 
> writes:
> > вс, 14 нояб. 2021 г. в 22:31, Tom Lane :
> >> Usually this is caused by being careless about search_path assumptions
> >> in your functions ... but with no details, it's impossible to say
> >> anything with certainty.
>
> > No, in this case it is not:
> > Function A using function B.
> > Сreating function A before function B results in a compilation error.
> > Function B has no dependencies and is generated without errors. The
> second
> > run of the circuit creates function A.
> > If I could specify a function dependency, it would change the order of
> > recovery
>
> This is not "details", this is an evidence-free assertion.  Please show
> a concrete example of problematic functions.
>
> >> ...  What minor release are you using?
>
> >  PostgreSQL 14.1 (Debian 14.1-1.pgdg110+1) on x86_64-pc-linux-gnu,
> compiled
> > by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
> > pg_restote, pg_dump from this build
>
> Ok, so you're up to date all right.  But again, you didn't say what
> concrete problem you were having with a dump/restore of an identity
> column.  It works fine for me.
>
> regards, tom lane
>


Re: pg_restore depending on user functions

2021-11-14 Thread Дмитрий Иванов
Ok, I'll do it.

вс, 14 нояб. 2021 г. в 23:46, Tom Lane :

> =?UTF-8?B?0JTQvNC40YLRgNC40Lkg0JjQstCw0L3QvtCy?= 
> writes:
> > вс, 14 нояб. 2021 г. в 22:31, Tom Lane :
> >> Usually this is caused by being careless about search_path assumptions
> >> in your functions ... but with no details, it's impossible to say
> >> anything with certainty.
>
> > No, in this case it is not:
> > Function A using function B.
> > Сreating function A before function B results in a compilation error.
> > Function B has no dependencies and is generated without errors. The
> second
> > run of the circuit creates function A.
> > If I could specify a function dependency, it would change the order of
> > recovery
>
> This is not "details", this is an evidence-free assertion.  Please show
> a concrete example of problematic functions.
>

Function A (not restored on the first pass):
CREATE OR REPLACE FUNCTION bpd.class_act_ext_by_id_parent(
iid_parent bigint)
RETURNS SETOF bpd.vclass_ext
LANGUAGE 'plpgsql'
COST 100
STABLE SECURITY DEFINER PARALLEL SAFE
ROWS 1000

SET search_path=bpd
AS $BODY$
DECLARE
class_array BIGINT[]; -- array of objects
BEGIN
class_array = (SELECT array_agg(c.id) FROM ONLY bpd.class c WHERE
c.id_parent = iid_parent);

RETURN QUERY
SELECT
c.id,
c."timestamp",
true AS has_active,
c.timestamp_child_change,
c.id_con,
c.id_group,
c.id_group_root,
c.id_parent,
c.timestamp_parent,
c.id_root,
c.timestamp_root,
c.level,
(c.level = 0) AS is_root,
((c.tableoid)::regclass)::character varying AS tablename,
c.name,
c.name_format,
c.quantity_show,
c."desc",
c."on",
c.on_extensible,
c.on_abstraction,
c.id_unit,
c.id_unit_conversion_rule,
c.barcode_manufacturer,
c.barcode_local,
(EXISTS ( SELECT 1
   FROM ONLY bpd.class cc
  WHERE (cc.id_parent = c.id))) AS include_child_class,
(EXISTS ( SELECT 1
   FROM bpd.object co
  WHERE ((co.id_class = c.id) AND (co.timestamp_class =
c."timestamp")))) AS include_child_object,
(EXISTS ( SELECT 1
   FROM ONLY bpd.class cc
  WHERE ((cc.id_parent = c.id) AND (NOT cc.on_abstraction)))) AS
include_child_real_class,
(EXISTS ( SELECT 1
   FROM ONLY bpd.class cc
  WHERE ((cc.id_parent = c.id) AND cc.on_abstraction))) AS
include_child_abstract_class,
( SELECT count(1) AS count
   FROM ONLY bpd.class cc
  WHERE (cc.id_parent = c.id)) AS child_count,
(con.group_recycle = c.id_group_root) AS in_recycle,
c.on_freeze,
cp_list.property_list,
c_ready.ready,
c_path.path
   FROM ((((ONLY bpd.class c
 LEFT JOIN bpd.conception con ON ((con.id = c.id_con)))
 LEFT JOIN "bpd"."int_class_ext_prop_by_id_class_array"(class_array)
cp_list ON ((cp_list.id = c.id)))
 LEFT JOIN "bpd"."int_class_ready_by_id_class_array"(class_array)
c_ready ON ((c_ready.id = c.id)))
 LEFT JOIN "bpd"."int_class_path_by_id_class_array"(class_array) c_path
ON ((c_path.id = c.id)))
WHERE c.id = ANY(class_array)
ORDER BY "name";
END;
$BODY$;

Function B:
CREATE OR REPLACE FUNCTION bpd.int_class_ext_prop_by_id_class_array(
class_array bigint[])
RETURNS SETOF bpd.int_class_ext
LANGUAGE 'sql'
COST 100
STABLE PARALLEL SAFE
ROWS 1000
AS $BODY$
SELECT cp.id_class AS id,
array_agg((cp.*)::bpd.cclass_prop ORDER BY cp.sort) AS property_list
FROM bpd.vclass_prop cp
WHERE (cp.id_class = ANY(class_array))
GROUP BY cp.id_class;
$BODY$;

CREATE OR REPLACE FUNCTION bpd.int_class_ready_by_id_class_array(
class_array bigint[])
RETURNS SETOF bpd.int_class_ready
LANGUAGE 'sql'
COST 100
STABLE PARALLEL SAFE
ROWS 1000

AS $BODY$
SELECT
c.id,
CASE c.on_abstraction
WHEN false THEN
CASE bpd.int_class_format_check(c.id)
WHEN true THEN
CASE (count(cp.id) > 0)
WHEN true THEN bool_and(
CASE cp.id_prop_type
WHEN 1 THEN (
CASE cp.id_data_type
WHEN 1 THEN (sv.val_varchar IS NOT NULL)
WHEN 2 THEN (sv.val_int IS NOT NULL)
WHEN 3 THEN (sv.val_numeric IS NOT NULL)
WHEN 4 THEN (sv.val_real IS NOT NULL)
WHEN 5 THEN (sv.val_double IS NOT NULL)
WHEN 6 THEN (sv.val_money IS NOT NULL)
WHEN 7 THEN (bv.val_text IS NOT NULL)
WHEN 8 THEN (bv.val_bytea IS NOT NULL)
WHEN 9 THEN (sv.val_boolean IS NOT NULL)
WHEN 10 THEN (sv.val_date IS NOT NULL)
WHEN 11 THEN (sv.val_time IS NOT NULL)
WHEN 12 THEN (sv.val_interval IS NOT NULL)
WHEN 13 THEN 

Re: pg_restore depending on user functions

2021-11-14 Thread Дмитрий Иванов
Thank you for your feedback.

вс, 14 нояб. 2021 г. в 22:31, Tom Lane :

> =?UTF-8?B?0JTQvNC40YLRgNC40Lkg0JjQstCw0L3QvtCy?= 
> writes:
> > 1. I can determine the dependencies of the custom functions when creating
> > the pg_dump? I did not find anything suitable for SUPPORT, probably this
> is
> > not the case?
> > To recreate all of the functionality, I had to run schema recovery twice.
> > (--schema-only)
>
> Usually this is caused by being careless about search_path assumptions
> in your functions ... but with no details, it's impossible to say
> anything with certainty.
>
No, in this case it is not:
Function A uses function B.
Creating function A before function B results in a compilation error.
Function B has no dependencies and is created without errors. The second
run of the schema restore creates function A.
If I could specify a function dependency, it would change the restore
order.



> > 2. Why is pg_restore trying to set the value of fields of type GENERATED
> > ALWAYS?
>
> Otherwise it couldn't guarantee that the rows were restored with the same
> values of the identity column that they had before.  I recall that we
> had some bugs with GENERATED ALWAYS in early v10 versions, but when I try
> it now, dump/restore of a table with an identity column seems to work
> as expected.  What minor release are you using?
>
 PostgreSQL 14.1 (Debian 14.1-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled
by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
pg_restore and pg_dump are from this build.
sudo /usr/lib/postgresql/14/bin/pg_dump --file
"/home/dismay/uchet/Uchet.backup" --host "server" --port "5999" --username
"back" --no-password --verbose --format=c --quote-all-identifiers --blobs
 --column-inserts --inserts --clean --create --if-exists --disable-triggers
 --encoding="UTF8" "Uchet"
sudo /usr/lib/postgresql/14/bin/pg_restore --host "127.0.0.1" --port "5432"
--username "back" --no-password --dbname "Uchet" --disable-triggers
--schema-only --format=c --verbose "/home/dismay/uchet/Uchet.backup"
sudo /usr/lib/postgresql/14/bin/pg_restore --host "127.0.0.1" --port "5432"
--username "back" --no-password --dbname "Uchet" --disable-triggers
--format=c --verbose "/home/dismay/uchet/Uchet.backup"

>
> regards, tom lane
>
PS:  Sorry for my english...


pg_restore depending on user functions

2021-11-14 Thread Дмитрий Иванов
Good day!
I am migrating a cluster from PostgreSQL 12 on Windows 10 (EDB) to
PostgreSQL 14 on Debian 11.
Overall I succeeded, but I have a number of questions:
1. Can I declare the dependencies of custom functions when creating the
pg_dump? I did not find anything suitable (SUPPORT?); probably that is not
what it is for.
To recreate all of the functionality, I had to run the schema restore
twice (--schema-only).
2. Why does pg_restore try to set the value of GENERATED ALWAYS columns?
COPY mode does not work, nor does INSERT. I had to do the COPY after
manually removing the GENERATED ALWAYS columns.
Can this behavior be influenced?


Re: E.1.2. Migration to Version 14

2021-11-10 Thread Дмитрий Иванов
OK, thank you!
This still does not make clear whether I need to rework everything that
uses the array functions (Table 9.52) or only what is listed in the
release notes.
I am currently on version 12. Apart from staying up to date, I don't see
what I will gain from upgrading to version 14. I ran into performance
degradation when using the array aggregation functions, and improved
performance by using functions that take arrays as arguments.
CTEs give the same result, but are difficult to maintain.
If the declarative query structure were built with filtering before
aggregation, this trick would not be needed, but that is not the case in
version 12.
Based on this topic:
Re: Max array size
<https://www.postgresql.org/message-id/4799.1214857781%40sss.pgh.pa.us>
I created functions that take an array for later aggregation of
subordinate tuples.
This gave an exponential dependence of performance on selectivity.
I want to understand what I will get from version 14; at the moment the
benefits look illusory.
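The workaround described here, passing an array of ids so that filtering happens before aggregation, can be sketched with hypothetical tables:

```sql
-- Hypothetical parent/child schema: aggregate child rows only for the
-- requested parents, so the WHERE filter runs before array_agg().
CREATE FUNCTION child_props(parent_ids bigint[])
RETURNS TABLE (parent_id bigint, props text[])
LANGUAGE sql STABLE AS $$
    SELECT c.parent_id, array_agg(c.prop ORDER BY c.sort)
    FROM child c
    WHERE c.parent_id = ANY (parent_ids)  -- filter first
    GROUP BY c.parent_id;                 -- then aggregate
$$;
```

A plain LEFT JOIN against this function's result then replaces aggregating the whole child table and filtering afterwards.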




ср, 10 нояб. 2021 г. в 21:11, Bruce Momjian :

> On Wed, Nov 10, 2021 at 12:30:37PM +0500, Дмитрий Иванов wrote:
> > Hello!
> > E.1.2 Migration to Version 14
> >  array_append(), array_prepend(), array_cat(), array_position(),
> > array_positions(), array_remove(), array_replace(), and width_bucket()
> >
> > Is this a complete list of functions? Or you need to remove all
> functions:
> > Table 9.52. Array Functions
> >
> > Do the functions need to be deleted and restored, or can they be
> recreated?
> > I have a lot of them and they are called from other functions:
> >
> > Uchet=# SELECT COUNT(pg_proc.proname) AS "COUNT"
> > Uchet-#FROM pg_proc
> > Uchet-#   WHERE pg_proc.prosrc ~~ '%array%'::text AND
> pg_proc.pronamespace =
> > 16443::oid;
> >  COUNT
> > ---
> > 89
> > (1 row)
> > Trying to estimate labor costs and migration opportunities.
> > Thanks!
>
> Uh, I think they can just be recreated on the new server.  I think we
> suggested delete then create so that the functions would not be called
> accidentally on the new server before being recreated.
>
> --
>   Bruce Momjian  https://momjian.us
>   EDB  https://enterprisedb.com
>
>   If only the physical world exists, free will is an illusion.
>
>


E.1.2. Migration to Version 14

2021-11-09 Thread Дмитрий Иванов
Hello!
E.1.2 Migration to Version 14

array_append(), array_prepend(), array_cat(), array_position(),
array_positions(), array_remove(), array_replace(), and width_bucket()


Is this the complete list of functions, or do I need to remove all of the
functions in Table 9.52 (Array Functions)?


Do the functions need to be deleted and restored, or can they be recreated?
I have a lot of them and they are called from other functions:

Uchet=# SELECT COUNT(pg_proc.proname) AS "COUNT"
Uchet-#FROM pg_proc
Uchet-#   WHERE pg_proc.prosrc ~~ '%array%'::text AND pg_proc.pronamespace
= 16443::oid;
 COUNT
---
89
(1 row)
Trying to estimate labor costs and migration opportunities.
Thanks!


Re: Python3 for PostgreSQL 14

2021-10-23 Thread Дмитрий Иванов
Yes, it does.
>If you used the EDB installer for the Windows install of Postgres then
>you need to use their Language Pack to match up with what the plpython
>expects.
It was obvious, but it didn't work. This is not a problem at the moment.

>This is the Debian project packages, not something the Postgres project
>controls.
>The project does control this:
>https://www.postgresql.org/download/linux/debian/
You may be right, but it didn't help me find the package
postgresql-plpython3-14.
Yes everything works, thanks!

вс, 24 окт. 2021 г. в 01:40, Adrian Klaver :

> On 10/23/21 13:00, Дмитрий Иванов wrote:
> > Да.
> > Python extension issues occurred on Windows Server 2012 and Windows 10
> > Pro. Experimentally, when installing version 3.7.4, everything worked.
>
> If you used the EDB installer for the Windows install of Postgres then
> you need to use their Language Pack to match up with what the plpython
> expects.
>
> > This is my first time deploying Debian, and since I didn't want to
> > deploy multiple versions, I asked a question. I have installed Python
> > 3.10 x64. when faced with a lack of information on installing an
> > extension pack, it would be a good idea to highlight this in the help.
> >
> https://debian.pkgs.org/11/postgresql-amd64/postgresql-plpython3-14_14.0-1.pgdg110+1_amd64.deb.html
> > <
> https://debian.pkgs.org/11/postgresql-amd64/postgresql-plpython3-14_14.0-1.pgdg110+1_amd64.deb.html
> >:
>
> This is the Debian project packages, not something the Postgres project
> controls.
>
> The project does control this:
>
> https://www.postgresql.org/download/linux/debian/
>
> Which has the section
>
> "Included in distribution"
>
>
> that points out you may need to install additional packages.
>
> This is also pointed out here:
>
> https://www.postgresql.org/docs/current/plpython.html
>
> "
> Note
>
> Users of source packages must specially enable the build of PL/Python
> during the installation process. (Refer to the installation instructions
> for more information.) Users of binary packages might find PL/Python in
> a separate subpackage.
> "
>
>
> > After that, I have successfully installed the extension and executed the
> > audit function
>
> Does this mean everything is working now?
>
> >
> > сб, 23 окт. 2021 г. в 09:04, Adrian Klaver  > <mailto:adrian.kla...@aklaver.com>>:
> >
> > On 10/22/21 19:58, Дмитрий Иванов wrote:
> >  > Good afternoon. Tell me what version of Python3 PostgreSQL 14 is
> >  > targeting. I had problems with pairing version 12, working on
> 3.7.4
> >
> > 1) Where did you install Postgres from?
> >
> > 2) Define problems.
> >
> > 3) To be clear you are trying to use plpython3u, correct?
> >
> >
> > --
> > Adrian Klaver
> > adrian.kla...@aklaver.com <mailto:adrian.kla...@aklaver.com>
> >
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: Python3 for PostgreSQL 14

2021-10-23 Thread Дмитрий Иванов
Yes.
Python extension issues occurred on Windows Server 2012 and Windows 10 Pro.
Experimentally, when installing version 3.7.4, everything worked. This is
my first time deploying Debian, and since I didn't want to deploy multiple
versions, I asked. I have installed Python 3.10 x64. Since I ran into a
lack of information about installing the extension package, it would be
good to highlight this in the documentation.
https://debian.pkgs.org/11/postgresql-amd64/postgresql-plpython3-14_14.0-1.pgdg110+1_amd64.deb.html
:

# sudo apt-get install postgresql-plpython3-14

After that, I have successfully installed the extension and executed the
audit function

сб, 23 окт. 2021 г. в 09:04, Adrian Klaver :

> On 10/22/21 19:58, Дмитрий Иванов wrote:
> > Good afternoon. Tell me what version of Python3 PostgreSQL 14 is
> > targeting. I had problems with pairing version 12, working on 3.7.4
>
> 1) Where did you install Postgres from?
>
> 2) Define problems.
>
> 3) To be clear you are trying to use plpython3u, correct?
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Python3 for PostgreSQL 14

2021-10-22 Thread Дмитрий Иванов
Good afternoon. Which version of Python 3 does PostgreSQL 14 target? I had
problems pairing it with version 12, which worked on 3.7.4.


Re: OSX: migrating Postgres db from one Mac to another

2021-06-24 Thread Дмитрий Иванов
# -*- coding: utf-8 -*-

##
#
# pgAdmin 4 - PostgreSQL Tools
#
# Copyright (C) 2013 - 2021, The pgAdmin Development Team
# This software is released under the PostgreSQL Licence
#
# config.py - Core application configuration settings
#
##
...
##
# Master password is used to encrypt/decrypt saved server passwords
# Applicable for desktop mode only
##
MASTER_PASSWORD_REQUIRED = True -> False



чт, 24 июн. 2021 г. в 18:44, Carlo Poso :

> Hello,
> my Mac unfortunately crashed. I have a Time Machine backup.
> Using the Migration Assistant on a new Mac allowed me to get all my
> documents back, but unfortunately Postgres is not able to start.
>
> So I was trying to manually set-up my Postgres db.
>
> Here are the steps:
>
>- install a fresh copy of Postgres (v11)
>- replace the /Library/PostgreSQL/11/data folder with the one I dig
>out from my dead Mac, where I assume my data is stored
>- check/fix owner of folders (must be 'postgres', I assume)
>- start pgAdmin
>- enter the PW for postgres user
>- click on "PostgreSQL 11" server to show databases
>
> This is where I have problems. I am prompted with the same pop-up asking
> me to enter the password for the user 'postgres'. I type it and I get the
> following:
> [image: Screen Shot 2021-06-24 at 15.25.03.png]
>
>
> *Carlo*
>