How to split normal and overtime hours

2022-02-13 Thread Andrus

Hi!

The hours table contains working hours for jobs:

    create table hours (
        jobid integer primary key,   -- job done, unique per person
        personid char(10) not null,  -- person who did the job
        hours numeric(5,2) not null  -- hours worked on the job
    );

Hours beyond 120 are overtime hours.

How to split regular and overtime hours into different columns, using a
running total by job id, partitioned by person id?


For example, if John did 3 jobs 1, 2, 3 with 90, 50 and 40 hours (180 hours
in total) respectively, the result table should be:


    personid  jobid  normal_hours  overtime_hours
    john          1            90               0
    john          2            30              20
    john          3             0              40

The sum of the normal_hours column should not be greater than 120 per person.

The sum of the normal_hours and overtime_hours columns must equal the sum of
the hours column in the hours table for every person.


Note that since the hours running total becomes greater than 120 in job 2,
job 2's hours should appear in both columns.


Maybe window functions can be used.

Andrus.


Re: How to split normal and overtime hours

2022-02-14 Thread Andrus

Hi!

Thank you. In this result, the regular and overtime columns contain running
totals.


How to fix this so that those columns contain just the hours for each job?

The sum of the regular column should not be greater than 120 per person.

The sum of the regular and overtime columns must equal the sum of the hours
column in the hours table for every person.


Andrus.

On 13.02.2022 14:46, Torsten Förtsch wrote:

something like

SELECT *
     , least(sum(hours) OVER w, 120) AS regular
     , greatest(sum(hours) OVER w - 120, 0) AS overtime
  FROM hours
WINDOW w AS (PARTITION BY person ORDER BY job_id);

 job_id | person | hours | regular | overtime
--------+--------+-------+---------+----------
      2 | bill   |    10 |      10 |        0
      5 | bill   |    40 |      50 |        0
      8 | bill   |    10 |      60 |        0
     10 | bill   |    70 |     120 |       10
     11 | bill   |    30 |     120 |       40
     13 | bill   |    40 |     120 |       80
     15 | bill   |    10 |     120 |       90
      4 | hugo   |    70 |      70 |        0
      7 | hugo   |   130 |     120 |       80
      1 | john   |    10 |      10 |        0
      3 | john   |    50 |      60 |        0
      6 | john   |    30 |      90 |        0
      9 | john   |    50 |     120 |       20
     12 | john   |    30 |     120 |       50
     14 | john   |    50 |     120 |      100




Re: How to split normal and overtime hours

2022-02-14 Thread Andrus

Hi!

It worked.

Thank you very much.

Andrus.

On 13.02.2022 16:46, Torsten Förtsch wrote:

WITH x AS (
   SELECT *
        , sum(hours) OVER w AS s
     FROM hours
   WINDOW w AS (PARTITION BY person ORDER BY job_id)
)
SELECT *
     , greatest(least(s, 120) - coalesce(lag(s, 1) OVER w, 0), 0) AS regular
     , hours - greatest(least(s, 120) - coalesce(lag(s, 1) OVER w, 0), 0) AS overtime
  FROM x
WINDOW w AS (PARTITION BY person ORDER BY job_id);
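As a sanity check outside the database, the split can be reproduced in Python. This is an editorial sketch, not part of the thread; it simply mirrors the least/greatest/lag arithmetic of the query above:

```python
# Split each job's hours into (regular, overtime), assuming a 120-hour
# regular cap per person. Mirrors the SQL above:
#   regular  = greatest(least(s, 120) - lag(s), 0)
#   overtime = hours - regular
def split_hours(jobs, cap=120):
    result = []
    running = 0  # running total s; its previous value plays the role of lag(s)
    for hours in jobs:
        prev = running
        running += hours
        regular = max(min(running, cap) - prev, 0)
        result.append((regular, hours - regular))
    return result

print(split_hours([90, 50, 40]))  # John's jobs -> [(90, 0), (30, 20), (0, 40)]
```

This reproduces the expected rows for John from the original post: the regular column sums to 120, and regular plus overtime equals the raw hours per job.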




How to get updated order data

2022-04-08 Thread Andrus

Hi!

Orders are in the table

    create table "order" (
      dokumnr int primary key,
      packno char(10)
    );
    insert into "order" (dokumnr) values (123);

One user sets pack number using

    update "order" set packno='Pack1' where dokumnr=123

3 seconds later, another user retrieves the pack number using

    select packno from "order" where dokumnr=123

However, the other user gets a null value, not Pack1 as expected. Some time
later, the correct value Pack1 is returned.


How to get updated data from the other user immediately?
3 seconds is a long time; it is expected that the select should retrieve the
updated data.


There are lots of transactions running concurrently. Maybe the update command
is not yet written to the database when the second user retrieves it.


How to flush the orders table so that current results are returned for the
second user's select?


Using

PostgreSQL 13.2, compiled by Visual C++ build 1914, 64-bit

and psqlODBC driver.

Andrus.


Determine if range list contains specified integer

2022-05-27 Thread Andrus

Hi!


The product type table contains product types. Some ids may be missing:

    create table artliik (liiginrlki char(3) primary key);
    insert into artliik values('1');
    insert into artliik values('3');
    insert into artliik values('4');
    ...
    insert into artliik values('999');

The property table contains a comma-separated list of types.

    create table strings ( id char(100) primary key, kirjeldLku char(200) );
    insert into strings values ('item1', '1,4-5' );
    insert into strings values ('item2', '1,2,3,6-9,23-44,45' );

A type can be specified as a single integer, e.g. 1,2,3, or as a range like
6-9 or 23-44.

A list can contain both.


How to get all properties for a given type?
Query

    select id
    from artliik
    join strings on ','||trim(strings.kirjeldLku)||',' like '%,'||trim(artliik.liiginrlki)||',%'


returns data for single-integer lists only.
How to change the join so that type ranges in the list, like 6-9, are also matched?
E.g. if the list contains 6-9, types 6, 7, 8 and 9 should be included in the report.
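For illustration only (the actual fix would be done in SQL, e.g. by splitting the list and comparing range bounds), the membership test being asked for can be sketched in Python; the function name is made up:

```python
# Return True when a comma-separated type list such as '1,2,3,6-9,23-44,45'
# contains the given integer; entries like '6-9' are inclusive ranges.
def list_contains(spec, value):
    for part in spec.split(','):
        part = part.strip()
        if '-' in part:
            lo, hi = part.split('-', 1)
            if int(lo) <= value <= int(hi):
                return True
        elif part and int(part) == value:
            return True
    return False

print(list_contains('1,2,3,6-9,23-44,45', 7))  # 7 falls inside 6-9 -> True
```

The same two cases (bare integer, dashed range) are what a SQL rewrite would have to distinguish after unnesting the list.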

Postgres 13 is used.

Also posted at

https://stackoverflow.com/questions/72404218/how-determine-if-range-list-contains-specified-integer

Andrus.


How to transfer databases form one server to other

2020-01-26 Thread Andrus

Hi!

The VPS server has old Debian 6 Squeeze with Postgres 9.1.
It has 24 databases.

Every night backup copies are created using pg_dump to the /root/backups
directory for every database.

This directory has 24 .backup files with a total size of 37 GB.

I installed a new VPS server with Debian 10 and Postgres 12.

How to transfer those databases to the new server?

Both servers have ssh and a root user, postgres port 5432 open, a 100 Mbit
internet connection and fixed IP addresses. At night they are not used by
users and can be stopped during the move.


Should I download the .backup files and use pg_restore, or use a pipe to
restore the whole cluster?


Andrus. 






Re: How to transfer databases form one server to other

2020-01-27 Thread Andrus

Hi!

Before you do any of this I would check the Release Notes for the first 
release of each major release. Prior to version 10 that would be X.X.x 
where X is a major release. For 10+ that is X.x.  I would also test the 
upgrade before doing it on your production setup.


I want to do a test transfer first, check that the applications work, and
after that do the final transfer.


Best practice if you are going the dump/restore route is to use the pg_dump 
binary from the new server(12) to dump the old server(9.1)


Postgres version 12 pg_dump probably cannot be installed on the old server
(Debian Squeeze 9).
Running pg_dump on the new server would probably take much more time, since
data is read in uncompressed form and dumping is a time-consuming process.
(The internet connection between the servers is fast: SSH copy speed was 800
Mbit (not 100 Mbit as I wrote); it took 5 minutes to copy 37 GB.)


There are also some hundreds of PostgreSQL login and group roles on the old
server, also used for access rights in the databases.

Those need to be transferred as well.

My plan is:

1. Use pg_dump 9.1 on the old server to create 24 .backup files in custom
format.
2. Use the pgAdmin "backup globals" command to dump the role definitions on
the old server to a text file.
3. Manually edit the role definitions to delete the role postgres, since it
exists on the new server.
4. Run the edited role definitions script using pgAdmin on the new server to
create the roles.

5. Use Midnight Commander to copy the 24 .backup files from the old server to
the new one.
6. Use Postgres 12 pg_restore with job count 4 to restore those 24 databases
to the new server sequentially.


To repeat the transfer after testing:

1. Delete the restored databases.
2. Delete the imported roles on the new server.
3. Repeat steps 1-6 of the plan.


Questions:

1. pgAdmin only allows deleting roles one by one.
Deleting hundreds of roles is a huge amount of work.
How to invoke command like

DELETE ALL ROLES EXCEPT postgres

?
Is there some command, script or pgAdmin GUI for this?
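There is no such built-in command; one workaround is to generate DROP ROLE statements from the pg_roles catalog. A hypothetical Python sketch of the generation step (the role names would come from SELECT rolname FROM pg_roles, and the generated script would still be run through psql or pgAdmin):

```python
# Build DROP ROLE statements for every role except the ones to keep.
def drop_role_statements(roles, keep=('postgres',)):
    return ['DROP ROLE IF EXISTS "%s";' % name
            for name in roles if name not in keep]

for stmt in drop_role_statements(['postgres', 'sales', 'warehouse']):
    print(stmt)
```

Roles that still own objects or hold grants will refuse to drop, so the restored databases would need to be removed first.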

2. Is it OK to restore from the 9.1 backups, or should I create backups using
pg_dump from Postgres 12?

I have done some minor testing and haven't found issues.

3. How to create a shell script which reads all files from the /root/backups
directory on the old server?

(I'm new to Linux; this is not a PostgreSQL-related question.)
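A sketch of such a script, written in Python rather than shell. This is hypothetical: it assumes the custom-format dumps sit in /root/backups and that restoring with --create into the maintenance database postgres is wanted, as in step 6 of the plan:

```python
import subprocess
from pathlib import Path

BACKUP_DIR = Path('/root/backups')  # location from the post

def restore_command(backup_file, jobs=4):
    # pg_restore --create connects to the maintenance db 'postgres' and
    # creates the database named inside the dump file.
    return ['pg_restore', '--clean', '--create', '--if-exists',
            '--dbname=postgres', '--jobs=%d' % jobs, str(backup_file)]

if __name__ == '__main__':
    if BACKUP_DIR.is_dir():
        for dump in sorted(BACKUP_DIR.glob('*.backup')):
            subprocess.run(restore_command(dump), check=True)
```

Connection settings (PGHOST, PGUSER, PGPASSWORD) would be supplied through the environment, as in the scripts later in this digest.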

4. Are there some settings which can be used to speed up the restore process?
Will turning fsync off during the restore speed it up?
The new server has 11 GB RAM. No other applications are running during the
database transfer.

The shared_buffers=1GB setting is currently used in postgresql.conf.

5. Can this plan be improved?

Andrus.







Re: How to transfer databases form one server to other

2020-01-27 Thread Andrus

Hi!


3. Manually edit role definitions to delete role postgres since it exists
in new server.

No need, it will throw a harmless error message and continue on.


To my knowledge, pgAdmin executes the script in a single transaction and rolls
it back on error.
Should psql be used, or is there some option in pgAdmin?


To repeat transfer after testing:

1. Delete restored databases.
2. Delete imported roles in new server

That will probably not end well. I'm guessing there are objects that have a
dependency on the roles.


If the imported databases are dropped first, there will hopefully be no
dependencies.

Andrus. 






Re: How to transfer databases form one server to other

2020-01-27 Thread Andrus

Hi!


Postgres version 12 pg_dump probably cannot installed in old server
(Debian Squeeze 9).



I have it installed on my Debian 9 laptop, from the PostgreSQL apt repo:



 https://apt.postgresql.org


The old server uses Debian Squeeze, which is version 6.
I mistakenly typed Debian Squeeze 9, I'm sorry.

Andrus.





How to restore to empty database

2020-01-30 Thread Andrus

Hi!

I want to restore to a new empty database using pg_restore.
pg_restore should create the new empty database and restore into it.
If the database exists, pg_restore should clean all data from it, or drop it
and create a new empty database before the restore.


According to the pg_restore documentation, the switches --clean --create
--if-exists should do this.


I tried

   PG_COLOR=auto
   PGHOST=localhost
   PGPASSWORD=mypass
   PGUSER=postgres
   export PGHOST  PGPASSWORD PG_COLOR PGUSER
   pg_restore --clean --create --if-exists --dbname=mydb --jobs=4 --verbose "mydb.backup"


but got the error

pg_restore: connecting to database for restore
pg_restore: error: connection to database "mydb" failed: FATAL:  database "mydb" does not exist


I also tried without --dbname=mydb but then got the error

pg_restore: error: one of -d/--dbname and -f/--file must be specified

How to restore to a database which does not exist, or drop the existing
database before the restore if it exists?

Should I invoke

drop database if exists

and

create database

commands before calling pg_restore, or can pg_restore do it itself?

Using Postgres 12 on Debian 10

Andrus. 






Re: How to restore to empty database

2020-01-30 Thread Andrus

Hi!


You need to connect to a database that exists with --dbname, for
instance --dbname=postgres. Postgres will then use that connection to
create the new database, in your case mydb.


Thank you, this seems to work.

There are 24 databases in total; the .backup files total 37 GB, approximately
60% of it from bytea columns (pdf documents, images).
Using a VPS server, 4 cores, 11 GB RAM, used only for postgres.
Which is the fastest way to restore all of them to empty databases? Should I
run all commands in sequence like

pg_restore --clean --create --if-exists --verbose --dbname=postgres  --jobs=4
"database1.backup"
pg_restore --clean --create --if-exists --verbose --dbname=postgres  --jobs=4
"database2.backup"
...
pg_restore --clean --create --if-exists --verbose --dbname=postgres  --jobs=4
"database24.backup"

or run them all in parallel without --jobs=4, like

pg_restore --clean --create --if-exists --verbose --dbname=postgres
"database1.backup" &
pg_restore --clean --create --if-exists --verbose --dbname=postgres
"database2.backup" &
...
pg_restore --clean --create --if-exists --verbose --dbname=postgres
"database24.backup" &


or some balance between those?
Is there some postgres or Debian setting which can be used during the restore
to speed it up?
I use shared_buffers=1GB; other settings are from the Debian installation.
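One possible middle ground (an editorial sketch, not tested against a real cluster) is to cap how many restores run at once instead of choosing between fully sequential and fully parallel; runner here is a stand-in for whatever launches pg_restore:

```python
from concurrent.futures import ThreadPoolExecutor

def restore_all(backups, runner, max_parallel=2):
    # Run at most max_parallel restores concurrently; each restore may
    # still pass its own --jobs, so total load stays bounded.
    with ThreadPoolExecutor(max_workers=max_parallel) as pool:
        return list(pool.map(runner, backups))
```

With runner = lambda f: subprocess.run(build_cmd(f), check=True) (build_cmd being a hypothetical helper that assembles the pg_restore command line), this restores two dumps at a time.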

Andrus. 






Re: How to restore to empty database

2020-01-30 Thread Andrus

Hi!

These days 37 GB is relatively small, so you may be getting into the realm of
premature optimization. Do the 24 databases represent an entire cluster you
are trying to transfer?


Basically yes.
The cluster also contains a small test database which does not actually need
to be transferred, but I can delete it manually after the transfer.
Also postgres, template0 and template1 are not used directly by applications
and probably do not need to be transferred.



If so have you looked at pg_dumpall?:
https://www.postgresql.org/docs/12/app-pg-dumpall.html
It is a text based backup, but it will include all the databases and the 
globals.


The source cluster is on old Debian 6 Squeeze running Postgres 9.1.
Should I create a pipe using pg_dumpall and restore everything from the old
server through the pipe, instead of pg_dump/pg_restore?


Andrus.





How to restore roles without changing postgres password

2020-02-11 Thread Andrus

Hi!

How to create a backup script which restores all roles and role memberships
from another server without changing the postgres user's password?


I tried shell script

PGHOST=example.com
PGUSER=postgres
PGPASSWORD=mypass
export PGHOST  PGPASSWORD  PGUSER
pg_dumpall --roles-only --file=globals.sql
psql -f globals.sql postgres

but this changes the postgres user's password as well.
How to restore roles so that the postgres user's password is not changed on
restore?


The script runs on Debian 10 with Postgres 12.
The server from which it reads the users runs on Debian Squeeze with Postgres 9.1.

Andrus 






Re: How to restore roles without changing postgres password

2020-02-11 Thread Andrus
Hi!

Thank you.

>pg_dumpall creates an SQL file which is just a simple text file

>you can then edit the sql, removing the postgres user from the file.
>This can be automated in a script that searches the generated sql file for the 
>postgres user, replacing it with a blank/empty line, or adds -- to the beginning 
>of the line, which comments it out.

This script creates a cluster copy every night, so this should be done 
automatically.
I have little experience with Linux. 
Can you provide an example of how it should be done using sed or another tool?
There is also a second user named dbadmin whose password cannot be changed either.

It would be best if the CREATE ROLE and ALTER ROLE clauses for the postgres and 
dbadmin users were removed from the file.

Or, if this is not reasonable, the same passwords or different role names can be 
used in both clusters.
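The editing can be automated without sed; a hypothetical Python filter, assuming (as pg_dumpall output does) that each CREATE ROLE/ALTER ROLE statement starts on its own line:

```python
# Drop CREATE ROLE / ALTER ROLE statements for protected roles from a
# pg_dumpall --roles-only script, leaving everything else untouched.
def strip_roles(sql_text, protected=('postgres', 'dbadmin')):
    kept = []
    for line in sql_text.splitlines():
        if any(line.startswith(('CREATE ROLE %s;' % name,
                                'ALTER ROLE %s ' % name))
               for name in protected):
            continue  # skip statements for protected roles
        kept.append(line)
    return '\n'.join(kept)
```

The trailing space in the ALTER ROLE prefix keeps a role such as postgres2 from being matched by accident.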

Also, I don't understand why GRANTED BY clauses appear in the file. This looks 
like noise.
The GRANT documentation
https://www.postgresql.org/docs/current/sql-grant.html

does not contain a GRANTED BY clause. It looks like pg_dumpall generates an 
undocumented clause.

Andrus.


Re: How to restore roles without changing postgres password

2020-02-12 Thread Andrus
Hi!

>Not a bad idea, would want to extend this to all the roles on the server, not 
>just postgres.

>I've edited the global dump many times, removing/editing tablespaces, 
>commenting out old users, etc.

Maybe it is easier to create a plpgsql procedure which returns the desired 
script as text.
Or one that retrieves the globals from the other cluster using dblink and 
applies the changes to the new cluster.

This can be called instead of pg_dumpall and can be edited for custom needs.
Editing a plpgsql script is easier for postgres users than creating a sed 
script to delete commands from an sql file.

Andrus.

How to fix 0xC0000005 exception in Postgres 9.0

2020-02-20 Thread Andrus

Hi!

Every time a user tries to log on from the same virtual computer where
PostgreSQL resides, Postgres terminates with a 0xC0000005 exception. The log
is below.


About 65 users are accessing this server over VPN. They can use the server 
normally.

Logging in from localhost using pgAdmin 4 with user postgres also works.

Only an attempt to log on from localhost with user dbadmin from a psqlODBC 
client causes this exception.


It has worked normally for many years but now suddenly stopped working for 
localhost.


How to fix this ?

Server:

PostgreSQL 9.0.4, compiled by Visual C++ build 1500, 64-bit

OS:

Windows server 2008 R2
Version 6.1 Build 7601

Andrus.

Postgres log:

2020-02-20 15:44:51 EET   LOG:  server process (PID 3788) was terminated by exception 0xC0000005
2020-02-20 15:44:51 EET   HINT:  See C include file "ntstatus.h" for a 
description of the hexadecimal value.
2020-02-20 15:44:51 EET   LOG:  terminating any other active server 
processes
2020-02-20 15:44:51 EET andrus mydatabase WARNING:  terminating connection 
because of crash of another server process
2020-02-20 15:44:51 EET andrus mydatabase DETAIL:  The postmaster has 
commanded this server process to roll back the current transaction and exit, 
because another server process exited abnormally and possibly corrupted 
shared memory.
2020-02-20 15:44:51 EET andrus mydatabase HINT:  In a moment you should be 
able to reconnect to the database and repeat your command.
2020-02-20 15:44:51 EET kati mydatabase WARNING:  terminating connection 
because of crash of another server process
2020-02-20 15:44:51 EET kati mydatabase DETAIL:  The postmaster has 
commanded this server process to roll back the current transaction and exit, 
because another server process exited abnormally and possibly corrupted 
shared memory.
2020-02-20 15:44:51 EET kati mydatabase HINT:  In a moment you should be 
able to reconnect to the database and repeat your command.
2020-02-20 15:44:51 EET julia mydatabase WARNING:  terminating connection 
because of crash of another server process
2020-02-20 15:44:51 EET julia mydatabase DETAIL:  The postmaster has 
commanded this server process to roll back the current transaction and exit, 
because another server process exited abnormally and possibly corrupted 
shared memory.
2020-02-20 15:44:51 EET julia mydatabase HINT:  In a moment you should be 
able to reconnect to the database and repeat your command.
2020-02-20 15:44:51 EET reinpuu mydatabase WARNING:  terminating connection 
because of crash of another server process
2020-02-20 15:44:51 EET reinpuu mydatabase DETAIL:  The postmaster has 
commanded this server process to roll back the current transaction and exit, 
because another server process exited abnormally and possibly corrupted 
shared memory.
2020-02-20 15:44:51 EET reinpuu mydatabase HINT:  In a moment you should be 
able to reconnect to the database and repeat your command.
2020-02-20 15:44:51 EET kokka mydatabase WARNING:  terminating connection 
because of crash of another server process
2020-02-20 15:44:51 EET kokka mydatabase DETAIL:  The postmaster has 
commanded this server process to roll back the current transaction and exit, 
because another server process exited abnormally and possibly corrupted 
shared memory.
2020-02-20 15:44:51 EET kokka mydatabase HINT:  In a moment you should be 
able to reconnect to the database and repeat your command.
2020-02-20 15:44:51 EET myll mydatabase WARNING:  terminating connection 
because of crash of another server process
2020-02-20 15:44:51 EET myll mydatabase DETAIL:  The postmaster has 
commanded this server process to roll back the current transaction and exit, 
because another server process exited abnormally and possibly corrupted 
shared memory.
2020-02-20 15:44:51 EET myll mydatabase HINT:  In a moment you should be 
able to reconnect to the database and repeat your command.
2020-02-20 15:44:51 EET   WARNING:  terminating connection because of crash 
of another server process
2020-02-20 15:44:51 EET   DETAIL:  The postmaster has commanded this server 
process to roll back the current transaction and exit, because another 
server process exited abnormally and possibly corrupted shared memory.
2020-02-20 15:44:51 EET   HINT:  In a moment you should be able to reconnect 
to the database and repeat your command.
2020-02-20 15:44:51 EET dbadmin mydatabase FATAL:  the database system is in 
recovery mode
2020-02-20 15:44:51 EET   LOG:  all server processes terminated; 
reinitializing
2020-02-20 15:45:01 EET   FATAL:  pre-existing shared memory block is still 
in use
2020-02-20 15:45:01 EET   HINT:  Check if there are any old server processes 
still running, and terminate them. 






Re: How to fix 0xC0000005 exception in Postgres 9.0

2020-02-20 Thread Andrus

Hi!


Upgrade to a version of Postgres that is not 5 years past EOL?


Not possible at the moment because it requires a lot of testing not to break 
existing applications.

Planned for the future.

Andrus.





Re: How to fix 0xC0000005 exception in Postgres 9.0

2020-02-20 Thread Andrus

Hi!


Given this is localhost connection,  start looking at firewall or AV.


The Windows firewall is turned off. It does not affect local connections on 
the same computer.

I turned the Windows antivirus off but the problem persists.

Andrus.





Re: How to fix 0xC0000005 exception in Postgres 9.0

2020-02-21 Thread Andrus

Hi!


Realized I should have been clearer. By pre-libpq I meant this:
https://odbc.postgresql.org/docs/release.html
psqlODBC 09.05.0100 Release
Changes:
Use libpq for all communication with the server.
Previously, libpq was only used for authentication. Using it for all 
communication lets us remove a lot of duplicated code. libpq is now 
required for building or using psqlODBC.


I upgraded the psqlODBC driver to version 12.1 but the problem persists.
After the server is manually restarted, the application works.

I added log_statement = 'all'. The log before the crash is:

2020-02-21 18:46:40 EET mydbadmin mydb LOG:  statement: SELECT 
drop_table('temptulemus')
2020-02-21 18:46:40 EET mydbadmin mydb LOG:  statement: create temp table 
temptulemus as SELECT * FROM andmetp  ;select * from temptulemus limit 0
2020-02-21 18:46:40 EET mydbadmin mydb LOG:  statement: select n.nspname, 
c.relname, a.attname, a.atttypid, t.typname, a.attnum, a.attlen, 
a.atttypmod, a.attnotnull, c.relhasrules, c.relkind, c.oid, 
pg_get_expr(d.adbin, d.adrelid), case t.typtype when 'd' then t.typbasetype 
else 0 end, t.typtypmod, c.relhasoids, '', c.relhassubclass from 
(((pg_catalog.pg_class c inner join pg_catalog.pg_namespace n on n.oid = 
c.relnamespace and c.oid = 28203181) inner join pg_catalog.pg_attribute a on 
(not a.attisdropped) and a.attnum > 0 and a.attrelid = c.oid) inner join 
pg_catalog.pg_type t on t.oid = a.atttypid) left outer join pg_attrdef d on 
a.atthasdef and d.adrelid = a.attrelid and d.adnum = a.attnum order by 
n.nspname, c.relname, attnum
2020-02-21 18:46:40 EET mydbadmin mydb LOG:  statement: select COUNT(*)::int 
from temptulemus
2020-02-21 18:46:40 EET mydbadmin mydb LOG:  statement: ;SELECT * FROM 
temptulemus offset 0 limit 900
2020-02-21 18:46:40 EET   LOG:  server process (PID 6000) was terminated by exception 0xC0000005
2020-02-21 18:46:40 EET   HINT:  See C include file "ntstatus.h" for a 
description of the hexadecimal value.
2020-02-21 18:46:40 EET   LOG:  terminating any other active server 
processes


So the command which causes the crash is

SELECT * FROM temptulemus offset 0 limit 900

As shown in the lines before, this is data from the andmetp table. This table 
contains a text type column. This column may contain data like 
Re: How to fix 0xC0000005 exception in Postgres 9.0

2020-02-21 Thread Andrus

Hi!

I forgot to say: publish the Linux logs, they may have more details about what 
is going on.


The Linux server has an older application client which replicates all data 
from the andmetp table using select * from andmetp.

I tried several times. Linux log contains

2020-02-21 16:18:50 EET mydbadmin mydb STATEMENT:  SELECT * FROM andmetp
2020-02-21 16:18:50 EET mydbadmin mydb FATAL:  connection to client lost
2020-02-21 16:18:50 EET mydbadmin mydb STATEMENT:  SELECT * FROM andmetp
2020-02-21 17:19:08 EET mydbadmin mydb LOG:  could not send data to client: 
Connection reset by peer

2020-02-21 17:19:08 EET mydbadmin mydb STATEMENT:  SELECT * FROM andmetp
2020-02-21 17:19:08 EET mydbadmin mydb FATAL:  connection to client lost
2020-02-21 17:19:08 EET mydbadmin mydb STATEMENT:  SELECT * FROM andmetp

It looks like only the connection was terminated. "dmesg -T" does not show a 
postgres crash. Maybe the postgres main process is not killed on Linux.



is this happening from any client or just a specific client running ODBC?


This happens on the Windows 2008 server when logged into the same server from 
an RDP client, and on my development Windows 10 workstation which logs into 
the Linux server over VPN.



are the clients running AV if so are the AV versions the same?


On the Windows 2008 server the antivirus was turned off, as I wrote. On my 
development workstation I use the Microsoft antivirus that comes with Windows 10.


Given this is killing a Linux server, it sounds like ODBC is sending back 
garbage data to the server, crashing it.


I can try to publish the odbc log if this helps.

The application replicates some tables at startup to local disk.
For tables with 5000 or more rows the odbc connection hangs if there is big 
TCP packet loss.

So we changed the replication command

select * from mytable

to the commands

select drop_table('temptulemus');
create temp table temptulemus as select * from mytable;

select * from temptulemus offset 0 limit 900;
select * from temptulemus offset 900 limit 900;
select * from temptulemus offset 1800 limit 900;
...

etc.

In this case data was also retrieved on poor connections.
Maybe there is some better solution for this.
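One caveat worth noting here (editorial, not from the thread): OFFSET/LIMIT pages are only stable when the query has a deterministic ORDER BY. A small sketch of generating the paged statements with an assumed unique_id ordering column:

```python
# Generate the OFFSET/LIMIT statements the application would issue.
# 'unique_id' is a hypothetical ordering column; without an ORDER BY,
# consecutive pages are not guaranteed to be consistent with each other.
def page_queries(table, total_rows, page_size=900):
    return ['select * from %s order by unique_id offset %d limit %d'
            % (table, offset, page_size)
            for offset in range(0, total_rows, page_size)]

for q in page_queries('temptulemus', 2000):
    print(q)
```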

There are several settings in ODBC to change how text columns are processed; 
play around with those settings and see if that helps.

https://odbc.postgresql.org/docs/config.html


The application needs certain settings. Maybe it is better to try to upgrade 
to Postgres 12.2 first.


As you have it narrowed down to a table, try querying only a few records at a 
time to see if you can identify the specific record(s) that may be the issue:
SELECT * FROM temptulemus where temptulemus.unique_id > 1 offset 0 limit 100
and try querying the columns that do not contain the suspect data that could 
be causing this.


The andmetp table contains 584 records on the Linux server. I tried a script 
to read data from this table, each time one more row:


for i=1 to 600
   ? i
   StartTextMerge()
   TEXT TEXTMERGE NOSHOW
   select * from andmetp limit <<i>>
   ENDTEXT
   IF !TExec()
      RETURN .f.
   ENDIF
endfor

This worked without error.

Andrus. 






Re: How to fix 0xC0000005 exception in Postgres 9.0

2020-02-21 Thread Andrus

Hi!


Yes publish the ODBC logs


I sent the psqlODBC log from the Windows server to you.

I added and removed the Debug=1 parameter from the odbc connection string, and 
ran the application as administrator.

Other errors

2020-02-21 21:27:30 EET  ERROR:  invalid memory alloc request size 4294967293
2020-02-21 21:27:30 EET STATEMENT:  ;SELECT * FROM temptulemus offset 0 limit 900


and

2020-02-21 21:25:37 EET ERROR:  could not open relation with OID 538386464
2020-02-21 21:25:37 EET STATEMENT:  ;SELECT * FROM temptulemus offset 0 limit 900

also occurred.


Is this FoxPro code?

Yes.
Starting at psqlODBC 09.05.0100, when it uses libpq for everything, FoxPro 
does not show error message details anymore.
I posted it at

https://stackoverflow.com/questions/54978713/how-to-get-psqlodbc-error-details-in-visual-foxpro

Maybe somebody has some idea how to get postgres error message details using 
the new psqlODBC drivers.
I looked into the odbc documentation and haven't found how error message 
details are returned.


Andrus. 






Re: How to fix 0xC0000005 exception in Postgres 9.0

2020-02-21 Thread Andrus

Hi!


In psql what does \d tempestuous show?
What relation is andmetp to tempestuous?


I'm sorry, that was a typo. The correct commands issued from the application are

select drop_table('temptulemus');
create temp table  temptulemus as select * from andmetp;
select * from temptulemus  offset 0 limit 900;

I thought you said you had the problem with the same table on the Linux and 
Windows servers?


Yes.


From the above, what does the drop_table() function do in:
select drop_table('temptulemus');


drop_table is defined as

CREATE OR REPLACE FUNCTION drop_table(TEXT)
 RETURNS VOID STRICT LANGUAGE plpgsql AS $$
   BEGIN
   EXECUTE 'DROP TABLE ' || $1;
   EXCEPTION WHEN UNDEFINED_TABLE THEN
   RETURN;
   END;
   $$;


Andrus. 






Re: How to fix 0xC0000005 exception in Postgres 9.0

2020-02-21 Thread Andrus

Hi!


Alright so what does:
\d temptulemus


Did not find any relation named "temptulemus".


and/or
\d andmetp


                Table "public.andmetp"
   Column   |     Type      | Collation | Nullable | Default
------------+---------------+-----------+----------+---------
andmetp| character(25) |   | not null |
klass  | character(1)  |   |  |
andmeklass | character(10) |   |  |
otsbaas| character(8)  |   |  |
kiirpref   | character(80) |   |  |
kiirnimi   | character(10) |   |  |
inlahte| character(10) |   |  |
vaateindks | character(10) |   |  |
tyhjakeeld | ebool |   |  |
whenting   | character(50) |   |  |
kiirindks  | character(10) |   |  |
validting  | text  |   |  |
valivaljad | character(10) |   |  |
valivali2  | character(10) |   |  |
userlang   | ebool |   |  |
valifilter | character(80) |   |  |
paring | character(60) |   |  |
allrows| ebool |   |  |
html   | text  |   |  |
specialwhe | text  |   |  |
pakuvalik  | text  |   |  |
klikkprots | text  |   |  |
valivali3  | character(10) |   |  |
Indexes:
   "andmetp_pkey" PRIMARY KEY, btree (andmetp)
Referenced by:
   TABLE "desktop" CONSTRAINT "desktop_alamklass_fkey" FOREIGN KEY 
(alamklass)

REFERENCES andmetp(andmetp) ON UPDATE CASCADE DEFERRABLE
Triggers:
   andmetp_trig BEFORE INSERT OR DELETE OR UPDATE ON andmetp FOR EACH 
STATEMENT

EXECUTE PROCEDURE setlastchange()

Andrus.





Re: How to fix 0xC0000005 exception in Postgres 9.0

2020-02-21 Thread Andrus

Hi!


To me the relevant part of the log is below. Not sure what it means though:
[3604-0.187] execute.c[Exec_with_parameters_resolved]444:
stmt_with_params = ';SELECT * FROM temptulemus offset 0 limit 900'
[3604-0.187] execute.c[Exec_with_parameters_resolved]449: about to begin
SC_execute
[3604-0.187] statement.c[SC_execute]2037:   it's NOT a select
statement: stmt=005FE040


Maybe the issue occurs when the ODBC client sends the driver a command with a 
semicolon as the first character before SELECT, the returned data contains 
certain characters, and the server or client has a certain configuration.
Maybe I will change the application not to send the semicolon before the SELECT 
statement.


Andrus. 






How to get error message details from libpq based psqlODBC driver (regression)

2020-02-22 Thread Andrus

Hi!

I'm looking for a way to fix a psqlODBC driver regression.

Starting with psqlODBC 09.05.0100, when the psqlODBC driver began using libpq for all
operations (earlier versions used libpq only for authentication), the ODBC client
no longer shows error message details.

For example, users got only generic error message like

   Connectivity error: ERROR: insert or update on table "mytable" violates
   foreign key constraint "mytable_myfield_fkey"

Error message details like

   Key (myfield)=(mykeyvalue) is not present in table "mymastertable".;

are no more returned.

How to get the detailed error message, or at least the bad key value "mykeyvalue"
which is returned in the error message details?

Some ideas:

1. The first versions of the libpq-based ODBC drivers returned the same error 
message, "no resource found", for all errors. Hiroshi fixed it in later versions. 
Maybe the psqlODBC code can be fixed to restore the pre-libpq behaviour.


2. Maybe analyzing ODBC logs from drivers before and after 09.05.0100 would 
provide a solution. I can provide ODBC log files.


3. psqlODBC allows passing parameters to libpq from the connection string. Maybe 
some libpq parameter can fix this.


4. Maybe some Postgres query, Postgres extension, or direct ODBC or libpq 
call can be used to return the last error message details, like the Windows API 
GetLastError() or the Unix global errno.


5. Maybe it is possible to create a method which returns error message 
details from the Postgres log file.


Postgres 12.2 and the latest psqlODBC driver, 12.01, are used.
psqlODBC is called from Visual FoxPro.

Andrus. 






Re: How to fix 0xC0000005 exception in Postgres 9.0

2020-02-22 Thread Andrus

Hi!


I have no idea.


I changed the application not to send ; before SELECT.
This solves the issue on both servers.

So using ; as the first character before SELECT in an ODBC command like

;SELECT * from mytable

causes a 0xC0000005 exception in Postgres 9.0, and an ODBC client hang with a
"connection reset by peer" message in the log file in Postgres 9.6.
The help in this list, and especially the ODBC log analysis, was great; thanks.


For psqlODBC issues I would suggest asking here:
https://www.postgresql.org/list/pgsql-odbc/
Chances are better that there will be someone there who could answer your
questions.


I posted the error message details issue a long time ago to the pgsql-odbc list 
but haven't gotten a solution.

I posted it as separate message here and in

https://stackoverflow.com/questions/60357505/how-to-fix-psqlodbc-driver-regression-to-get-error-message-details

Andrus. 






Re: How to get error message details from libpq based psqlODBC driver (regression)

2020-02-23 Thread Andrus

Hi!


What does the log_error_verbosity setting in postgresql.conf show?


It is not set. postgresql.conf contains its default value from the installation:

#log_error_verbosity = default          # terse, default, or verbose messages

I changed it to

log_error_verbosity = verbose

but problem persists.
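For reference, a configuration edit only takes effect after the server re-reads the file; one way to apply and verify it from psql:

```sql
SELECT pg_reload_conf();     -- re-read postgresql.conf without restarting
SHOW log_error_verbosity;    -- should report 'verbose' after the reload
```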

postgres log file contains

2020-02-23 09:02:27.646 GMT [11252] ERROR:  23503: insert or update on table 
"rid" violates foreign key constraint "rid_yhik_fkey"
2020-02-23 09:02:27.646 GMT [11252] DETAIL:  Key (yhik)=(xx) is not 
present in table "mootyhik".
2020-02-23 09:02:27.646 GMT [11252] LOCATION:  ri_ReportViolation, 
d:\pginstaller_12.auto\postgres.windows-x64\src\backend\utils\adt\ri_triggers.c:2474
2020-02-23 09:02:27.646 GMT [11252] STATEMENT:  insert into rid (dokumnr, 
yhik) values (2065, 'xx')



but application shows only

ERROR: insert or update on table "rid" violates foreign key constraint 
"rid_yhik_fkey"


Andrus. 






Re: How to get error message details from libpq based psqlODBC driver (regression)

2020-02-23 Thread Andrus

Hi!


I don't have an answer for you. There maybe someone else on this list that
could help, though I think your best bet would be to ask the question again
on the pgsql-odbc list.


I posted it in pgsql-odbc list.

The psqlODBC source code file connection.c, line 866,

contains:

errprimary = PQresultErrorField(pgres, PG_DIAG_MESSAGE_PRIMARY);

which probably gets only the primary error message.
To get the error message detail, PG_DIAG_MESSAGE_DETAIL should be used, according 
to https://www.postgresql.org/docs/current/libpq-exec.html.

Unfortunately, PG_DIAG_MESSAGE_DETAIL is not used in the psqlODBC source code.

Andrus. 






Postgres 12 backup in 32 bit windows client

2020-03-28 Thread Andrus

Hi!

Postgres 12 is installed on a Debian 10 server.
The client computer is running 32-bit Windows.

The pg_dump.exe which can be downloaded as part of the Postgres 12 Windows 
installation is a 64-bit version only.

How to make a backup from Postgres 12 on a 32-bit Microsoft Windows computer?
psqlODBC is used for data access.
Maybe there is a 32-bit version of pg_dump.exe.

Andrus. 






Re: Postgres 12 backup in 32 bit windows client

2020-03-28 Thread Andrus

Hi!


What are you going to do with this backup?


If the server disk crashes, it can be used to restore the data.

Do you have shell access to the Debian machine in order to do the backup 
there?


I have, but my application user on 32-bit Windows does not.
I want to allow the 32-bit Windows user to make a backup over port 5432 directly 
to her computer's C: drive.


If a 32-bit Postgres 12 pg_dump is not available, maybe it is possible to create 
a stored procedure which invokes pg_dump on the server and returns the .backup 
file as a bytea value to the client:

select run('pg_dump mydatabase')

Andrus.








could not determine encoding for locale "et_EE.UTF-8": codeset is "CPUTF-8" in pg_restore

2020-03-28 Thread Andrus

Hi!

A Postgres 12 database dump was created on Debian 10 using pg_dump.

Trying to restore it on Windows 10 using

pg_restore --clean --create --if-exists --dbname=postgres --jobs=8 --no-password 
mydb.backup


produces a strange message:

pg_restore: WARNING:  could not determine encoding for locale "et_EE.UTF-8": 
codeset is "CPUTF-8"

How to fix this?

The Debian and Windows computers have the same settings:

The latest Postgres 12 is used.
The OS and database locales are Estonian.
The database encoding is UTF-8.

Andrus. 






Re: could not determine encoding for locale "et_EE.UTF-8": codeset is "CPUTF-8" in pg_restore

2020-03-28 Thread Andrus

Hi!

In the Debian Postgres instance in psql what does \l show for the 
databases?


#psql namm postgres
psql (12.2 (Debian 12.2-2.pgdg100+1))
Type "help" for help.

namm=# \l namm
                                 List of databases
 Name |   Owner    | Encoding |   Collate   |    Ctype    |     Access privileges
------+------------+----------+-------------+-------------+---------------------------
 namm | namm_owner | UTF8     | et_EE.UTF-8 | et_EE.UTF-8 | =Tc/namm_owner           +
      |            |          |             |             | namm_owner=CTc/namm_owner
(1 row)


In the Windows 10 command prompt what does systeminfo show?


Host Name: SERVER2
OS Name:   Microsoft Windows 10 Pro
OS Version:10.0.18363 N/A Build 18363
OS Manufacturer:   Microsoft Corporation
OS Configuration:  Standalone Workstation
OS Build Type: Multiprocessor Free
Registered Owner:  Windows User
Registered Organization:
Product ID:00330-70008-16217-AAOEM
Original Install Date: 05.09.2019, 9:16:41
System Boot Time:  28.03.2020, 11:05:23
System Manufacturer:   Gigabyte Technology Co., Ltd.
System Model:  Q270M-D3H
System Type:   x64-based PC
Processor(s):  1 Processor(s) Installed.
  [01]: Intel64 Family 6 Model 158 Stepping 9 
GenuineIntel ~3601 Mhz

BIOS Version:  American Megatrends Inc. F1, 09.01.2017
Windows Directory: C:\WINDOWS
System Directory:  C:\WINDOWS\system32
Boot Device:   \Device\HarddiskVolume3
System Locale: et;Eesti
Input Locale:  et;Eesti
Time Zone: (UTC+02:00) Helsingi, Kiiev, Riia, Sofia, 
Tallinn, Vilnius

Total Physical Memory: 16 286 MB
Available Physical Memory: 12 032 MB
Virtual Memory: Max Size:  18 718 MB
Virtual Memory: Available: 14 867 MB
Virtual Memory: In Use:3 851 MB
Page File Location(s): C:\pagefile.sys
Domain:WORKGROUP
Logon Server:  \\SERVER2
Hotfix(s): 18 Hotfix(s) Installed.
  [01]: KB4534132
  [02]: KB4497165
  [03]: KB4498523
  [04]: KB4503308
  [05]: KB4515383
  [06]: KB4515530
  [07]: KB4516115
  [08]: KB4517245
  [09]: KB4520390
  [10]: KB4521863
  [11]: KB4524244
  [12]: KB4524569
  [13]: KB4528759
  [14]: KB4532441
  [15]: KB4537759
  [16]: KB4538674
  [17]: KB4541338
  [18]: KB4551762
Network Card(s):   2 NIC(s) Installed.
  [01]: TAP-Windows Adapter V9
Connection Name: Ethernet 4
Status:  Media disconnected
  [02]: Intel(R) Ethernet Connection (2) I219-LM
Connection Name: Ethernet 3
DHCP Enabled:Yes
DHCP Server: 192.168.91.1
IP address(es)
[01]: 192.168.91.154
[02]: fe80::94d:b1c:3945:bc8a
[03]: 
2001:7d0:4c83:4c80:257f:b077:e1f7:21e1

[04]: 2001:7d0:4c83:4c80:94d:b1c:3945:bc8a
Hyper-V Requirements:  VM Monitor Mode Extensions: Yes
  Virtualization Enabled In Firmware: Yes
  Second Level Address Translation: Yes
  Data Execution Prevention Available: Yes

Andrus.





Re: could not determine encoding for locale "et_EE.UTF-8": codeset is "CPUTF-8" in pg_restore

2020-03-29 Thread Andrus

Hi!


Generally speaking, locale names from Unix systems won't work at all on
Windows.  You need to create the database manually with whatever seems
to be the closest Windows locale match, and then restore its contents
without using --create.


This is an unattended script running every night from a .bat file.

How to replace the pg_restore --create option with psql and/or createdb calls, 
specifying the proper locale for them?

Currently everything has the "Estonian_Estonia.1257" locale on Windows.
Which locale name should be specified on Windows instead?
Or maybe creating a new template with the proper encoding, or changing the 
template0 encoding, would help?
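One possible shape for the nightly script, reusing names from earlier in the thread (the database and owner names are assumptions): create the database manually with the Windows locale, then restore into it without --create, e.g. pg_restore --clean --if-exists --dbname=mydb --jobs=8 --no-password mydb.backup. The SQL half could look like:

```sql
-- Run via: psql -d postgres -f create_mydb.sql
DROP DATABASE IF EXISTS mydb;
CREATE DATABASE mydb
    WITH OWNER      = mydb_owner
         TEMPLATE   = template0
         ENCODING   = 'UTF8'
         LC_COLLATE = 'Estonian_Estonia.1257'
         LC_CTYPE   = 'Estonian_Estonia.1257';
```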


Andrus. 






Re: could not determine encoding for locale "et_EE.UTF-8": codeset is "CPUTF-8" in pg_restore

2020-03-29 Thread Andrus

Hi!


System Locale: et;Eesti
Input Locale:  et;Eesti
Hmm, I was expecting to see et_EE though I will admit to not truly 
understanding how Windows does locales.
I should have asked earlier, in the Postgres instance on Windows what does 
\l show for template0?


"D:\Program Files\PostgreSQL\12\bin\psql"  postgres postgres

psql (12.2)
WARNING: Console code page (775) differs from Windows code page (1257)
8-bit characters might not work correctly. See psql reference
page "Notes for Windows users" for details.
Type "help" for help.

postgres=# \l template0
                                             List of databases
   Name    |  Owner   | Encoding |        Collate        |         Ctype         |   Access privileges
-----------+----------+----------+-----------------------+-----------------------+-----------------------
 template0 | postgres | UTF8     | Estonian_Estonia.1257 | Estonian_Estonia.1257 | =c/postgres          +
           |          |          |                       |                       | postgres=CTc/postgres
(1 row)

Andrus. 






Hot standby from Debian to Windows

2020-03-29 Thread Andrus

Hi!

Postgres 12 server is running on Debian 10 and has number of databases.

How to mirror changes of those databases to Windows 10 workstation which 
runs also Postgres 12.


Changes in server databases in Debian should sent to Postgres 12 database in 
Windows over internet.


If Debian server goes down, users can change server address to Windows 
computer as temporary workaround and continue working.


Andrus. 
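Since binary replication does not work across OSes, one cross-OS option is the built-in logical replication; a minimal per-database sketch (host, database, and role names are assumptions, and the table definitions must already exist on the subscriber):

```sql
-- On the Debian primary (requires wal_level = logical):
CREATE PUBLICATION all_tables FOR ALL TABLES;

-- On the Windows standby:
CREATE SUBSCRIPTION win_standby
    CONNECTION 'host=debian.example.com dbname=mydb user=repluser password=secret'
    PUBLICATION all_tables;
```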






Re: could not determine encoding for locale "et_EE.UTF-8": codeset is "CPUTF-8" in pg_restore

2020-03-29 Thread Andrus

Hi!


There seems to a difference of opinion of what Baltic Code Page to use:
https://en.wikipedia.org/wiki/Code_page_775
https://en.wikipedia.org/wiki/Windows-1257
The post below shows a users method of dealing with this for another CP:
https://www.postgresql.org/message-id/549275CC.4010607%40gmail.com


The console code page 775 message appears because psql is a console application 
running from the command line.

It has no relation to the pg_dump/pg_restore issue, since the console code page 
is not used in this case.

There is an Estonian locale everywhere.
Maybe this warning is harmless, since the Linux code page is ignored and the 
default collation is used.
All table definitions in the restored database contain references to the default 
collation:


CREATE TABLE firma1.acquirpo
(
   kassanr numeric(3,0) NOT NULL,
   policyid character(2) COLLATE pg_catalog."default" NOT NULL,
   trantype character(6) COLLATE pg_catalog."default",
   tacdefault character(10) COLLATE pg_catalog."default",
   tacdenial character(10) COLLATE pg_catalog."default",
   taconline character(10) COLLATE pg_catalog."default",
   floorlimit numeric(12,0),
   randselthr numeric(12,0),
   minrandper numeric(2,0),
   maxrandper numeric(2,0),
   CONSTRAINT acquirpo_pkey PRIMARY KEY (kassanr, policyid)
)

TABLESPACE pg_default;

The same warning appears twice. This command executed by pg_restore probably 
causes this (harmless?) warning:


CREATE DATABASE mydb
   WITH
   OWNER = mydb_owner
   ENCODING = 'UTF8'
   LC_COLLATE = 'et_EE.UTF-8'
   LC_CTYPE = 'et_EE.UTF-8'
   TABLESPACE = pg_default
   CONNECTION LIMIT = -1;

If a new database is created manually in the Windows cluster, the following 
command is generated:



CREATE DATABASE mydbmanually
   WITH
   OWNER = mydbmanually_owner
   ENCODING = 'UTF8'
   LC_COLLATE = 'Estonian_Estonia.1257'
   LC_CTYPE = 'Estonian_Estonia.1257'
   TABLESPACE = pg_default
   CONNECTION LIMIT = -1;

Andrus.





Re: could not determine encoding for locale "et_EE.UTF-8": codeset is "CPUTF-8" in pg_restore

2020-03-29 Thread Andrus

Hi!


Per my previous post, you might try adding something like:
cmd.exe /c chcp 1257
to the top of the batch file. This idea came from here:
https://stackoverflow.com/questions/20794035/postgresql-warning-console-code-page-437-differs-from-windows-code-page-125
If I am following the error correctly then the issue is that the Postgres 
console programs are using CP775 and that is not something for which there 
is an automatic conversion:

https://www.postgresql.org/docs/12/multibyte.html#id-1.6.10.5.7
There is a conversion for 1257 clients so having your console run as 1257 
should solve the problem. Someone with more current experience on Windows 
will need to comment on whether that is the viable or best solution.


Both servers have UTF-8 encoding.

The non-Unicode code page 775 refers only to command-line applications like 
psql.

The Postgres service, pg_dump, and pg_restore do not use console code pages in 
any way; they operate using only the UTF-8 character set, since both databases 
are in UTF-8.

I think the console code page warning message is not related to this issue.

Andrus.





Re: Hot standby from Debian to Windows

2020-03-29 Thread Andrus

Hi!

Since you are moving between different OSes you will need to use some form 
of logical replication as binary replication will not work.


I can use Hyper-V or something else to run Debian alongside Windows.

This hopefully also allows bypassing the Windows 20-connection limit so that 
more than 20 users can connect.


Given  that you are Postgres 12 you could use the builtin logical 
replication:

https://www.postgresql.org/docs/12/logical-replication.html


I see these possibilities:

1. Use Hyper-V to run a Debian container and Postgres binary replication
2. Use Hyper-V to run a Debian container and Postgres logical replication
3. Use Windows and Postgres logical replication
4. Use Hyper-V to run a Debian container and a backup utility for whole-disk 
block-level backup


How that needs to managed is going to need more information. As a start:
1) Downtime allowed?


Yes. If the main server stops, I can tell users to enter the backup server's 
address instead of the main server's.



2) All databases to be replicated or just some?


I can create a separate cluster so that all databases and users will be 
replicated.
There are 30 databases with a total size of 70 GB. The size will increase slowly 
every day as new data is entered.
There are some test and demo databases whose replication is not really 
required, but those can also be replicated if this is simpler.


3) Permissible lag between servers?


Currently backups are created every night and restored on a new server.
Backing up 67 GB of data takes 1 hour, transfer 1 hour, restore and analyze on 
the new server 4 hours; 6 hours in total. So the current lag is 6..24 hours.

The goal is to decrease this lag.

4) How are you going to deal with the down server and how do you plan on 
bringing it up again?


The VPS hosting company will bring it up again. I will then manually synchronize 
the two clusters while users continue to enter data; this is not time critical.


Andrus. 






Re: could not determine encoding for locale "et_EE.UTF-8": codeset is "CPUTF-8" in pg_restore

2020-03-29 Thread Andrus

Hi!

The same warning appears twice. This command executed by pg_restore 
probably causes this (harmless?) warning:

What warning?


pg_restore: WARNING:  could not determine encoding for locale "et_EE.UTF-8":
codeset is "CPUTF-8"

I cranked up a Windows 7 instance and tried to migrate a Postgres 11 
database from Ubuntu and it failed on the CREATE DATABASE step because of 
this line in the dump file:
CREATE DATABASE redmine WITH TEMPLATE = template0 ENCODING = 'UTF8' 
LC_COLLATE = 'en_US.UTF-8' LC_CTYPE = 'en_US.UTF-8';


I ran this statement on Windows 10 with Postgres 12 successfully. The result 
was:


WARNING:  could not determine encoding for locale "en_US.UTF-8": codeset is 
"CPUTF-8"
WARNING:  could not determine encoding for locale "en_US.UTF-8": codeset is 
"CPUTF-8"

CREATE DATABASE

Query returned successfully in 1 secs 75 msec.

The redmine database was created. I don't understand why it failed in your test.


When I manually changed it in the plain text version of the dump file to:
CREATE DATABASE redmine WITH TEMPLATE = template0 ENCODING = 'UTF8' 
LC_COLLATE = 'English_United States.1252' LC_CTYPE = 'English_United 
States.1252';


I verified that the data was restored using pg_restore without manually changing 
anything.


Andrus.





Re: Hot standby from Debian to Windows

2020-03-29 Thread Andrus

Hi!

Would it not be easier to just set up another Debian server, run binary 
replication


Breakage occurs rarely, 0-2 times per year.
I want to try it first.


and put them behind something like pgpool?


The backup server will use a single core and minimal RAM. It may be needed 0-2 
times per year.

pgpool should switch it into production use only if the main server does 
not respond.

I haven't found this feature in the pgpool documentation (maybe I missed it).

Andrus. 






Re: Hot standby from Debian to Windows

2020-03-30 Thread Andrus

Hi!

Got it. Just thought it would be easier not to have to deal with cross OS 
issues.

Here is one example:
https://www.pgpool.net/docs/latest/en/html/example-watchdog.html


Hopefully the Windows Hyper-V virtual network adapter will not enforce the 
Windows 20-connection limit.
In that case, using Debian + Hyper-V + binary replication allows more than 20 
users to connect and may be the best solution.

Both have a 64-bit OS. Will binary replication work in this case?

Andrus. 






Re: could not determine encoding for locale "et_EE.UTF-8": codeset is "CPUTF-8" in pg_restore

2020-03-30 Thread Andrus

Hi!


Not sure but:
1) I was on Windows 7
2) Using Postgres 11
3) My Windows skills have atrophied, especially with the Windows command 
line.
So was this the same for the database you originally posted about: it actually 
restored, it just threw warnings?


Looks like it restored. I haven't checked the restored data.

If so I misunderstood the situation and thought the database was not 
loading.


I tried

CREATE DATABASE redmine
   WITH
   ENCODING = 'UTF8'
   LC_COLLATE = 'foo'
   LC_CTYPE = 'bar' template template0

on Linux and on Windows using Postgres 12.2.
On Linux it throws an error:

ERROR:  invalid locale name: "foo"

On Windows it creates the database and throws only a warning.
Without the TEMPLATE template0 clause it throws an error on Windows as well.

In Linux

CREATE DATABASE redmine WITH TEMPLATE = template0 ENCODING = 'UTF8'
LC_COLLATE = 'English_United States.1252' LC_CTYPE = 'English_United
States.1252';


also throws error

ERROR:  invalid locale name: "English_United States.1252"

So it looks like pg_dump/pg_restore with --create works only from Linux to 
Windows and does not work from Windows to Linux.

I expect that it should also work from Windows to Linux.

Andrus.





Using compression on TCP transfer

2020-03-31 Thread Andrus

Hi!

Databases contain a lot of char(n) fields containing ASCII characters.
Most fields contain fewer characters than the field width.
Changing them to varchar is not easy.

The database, on Debian with Postgres 12.2, is accessed over the internet using 
psqlODBC with TLS v1.3.
Mostly SELECT commands are sent and results are retrieved.
Clients have 10-20 Mbit download speeds and 5-20 Mbit upload speeds.

Will data compression increase speed?
If yes, how to implement it?
Andrus. 






Re: Using compression on TCP transfer

2020-03-31 Thread Andrus
Hi,

>See the section about sslcompression in 
>https://www.postgresql.org/docs/9.2/libpq-connect.html. It should be your 
>answer.

I added 

sslcompression=1

to the psqlODBC connection string, but the log file shows that the connection is 
still uncompressed:

LOG:  connection authorized: user=me database=mydb SSL enabled 
(protocol=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384, bits=256, compression=off)

Maybe, according to the TLS v1.3 standard, compression is no longer allowed, or 
psqlODBC does not pass it to libpq.

How to compress?

Andrus.

Re: Using compression on TCP transfer

2020-03-31 Thread Andrus
Hi!
>Updated doc reference:
>https://www.postgresql.org/docs/12/libpq-connect.html 
>They mention that compression is insecure and disabled by default. Taking this 
>into account, compression will require that both ODBC and PostgreSQL are set 
>up with compression enabled. I could not figure out quickly whether this 
>requires also recompiling the code...

I added 

Pqopt={sslcompression=1};

to the psqlODBC connection string, but the log file shows that it is still not 
compressed.

I'm using Debian 10 and Postgres 12 installed from the Postgres repository.
How can I check whether it supports compression or not?

Andrus.

Re: Using compression on TCP transfer

2020-03-31 Thread Andrus

Hi!


It is a simple ALTER TABLE.


The client is a Visual FoxPro application. It sometimes sends data with trailing 
spaces and sometimes not.
With varchar fields, values will appear in the database sometimes with trailing 
spaces and sometimes without.
This requires a major application redesign, which is much more expensive than 
continuing to use char fields.



You'd have to use an OpenSSL library with compression support enabled.


Should I change the OpenSSL installed from the standard repository on the Debian 
server, or can it be changed only for PostgreSQL?

How?


But that will improve speed only if your workload is network bound,
not CPU bound (in which case performance will suffer).


The server has a lot of cores. top shows that CPU usage is small.

Browsers and web servers use compression widely. Apache and IIS enable 
static content compression by default.

Compression should be built into Postgres.

Andrus. 






Fixing set-returning functions are not allowed in UPDATE after upgrade to Postgres 12

2020-04-02 Thread Andrus

Hi!

After upgrading to Postgres 12   statement

update temprid set
   ContactFirstName = unnest(xpath(
       '/E-Document/Document/DocumentParties/BuyerParty/ContactData/ContactFirstName/text()', x))::text,
   yhik = unnest(xpath(
       '/E-Document/Document/DocumentItem/ItemEntry/BaseUnit/text()', x))::text
from t

throws error

set-returning functions are not allowed in UPDATE

How to fix this ?


Maybe there is a simple change which makes this statement work?
The result can be any value of the xpath expression in case xpath returns 
multiple values.

In Postgres 9.1 it worked.

Andrus. 






Re: Fixing set-returning functions are not allowed in UPDATE after upgrade to Postgres 12

2020-04-02 Thread Andrus

Hi!


Simply replace
  SET col = unnest(array_value)
with

  SET col = array_value[1]


I tried

update temprid set
   ContactFirstName = xpath(
       '/E-Document/Document/DocumentParties/BuyerParty/ContactData/ContactFirstName/text()', x)[1]::text

but got the error

   ERROR:  syntax error at or near "["
   LINE 3: .../BuyerParty/ContactData/ContactFirstName/text()',x)[1]::text

How to fix this?

Posted also in 


https://stackoverflow.com/questions/60993975/fixing-set-returning-functions-are-not-allowed-in-update

Andrus.
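For the record, subscripting applies to an array-valued expression only when the expression itself is parenthesized; a sketch of the original UPDATE with that fix:

```sql
-- Wrapping each xpath() call in parentheses lets [1] be applied directly
update temprid set
    ContactFirstName = (xpath(
        '/E-Document/Document/DocumentParties/BuyerParty/ContactData/ContactFirstName/text()',
        x))[1]::text,
    yhik = (xpath(
        '/E-Document/Document/DocumentItem/ItemEntry/BaseUnit/text()',
        x))[1]::text
from t;
```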




Why there is 30000 rows is sample

2020-04-04 Thread Andrus

Hi!

vacuumdb output:

vacuumdb: vacuuming database "mydb"
INFO:  analyzing "public.mytable"
INFO:  "mytable": scanned 2709 of 2709 pages, containing 10834 live rows and 0 dead rows; 10834 rows in sample, 10834 estimated 
total rows


For tables with more than 30000 rows, it shows that there are 30000 rows in the 
sample.

postgresql.conf does not set  default_statistics_target value.
It contains

#default_statistics_target = 100        # range 1-10000

So I expected that there should be 100 rows in the sample.
Why does Postgres use 30000, or the number of rows in the table for smaller 
tables?

Is 30000 some magical value, and how to control it?

Using Postgres 12 in Debian.

Andrus.
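For context, ANALYZE samples 300 times the statistics target, so the default target of 100 gives the 30000-row sample seen above. The target can also be raised per column (the table and column names below are hypothetical):

```sql
-- Raises the sample for this column's statistics to 300 × 500 = 150000 rows
ALTER TABLE mytable ALTER COLUMN mycolumn SET STATISTICS 500;
ANALYZE mytable;
```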





Re: Using compression on TCP transfer

2020-04-04 Thread Andrus

Hi!

In case of varchar, field values will appear in the database sometimes with 
trailing spaces and sometimes without.
This requires a major application redesign, which is much more expensive than 
continuing to use char fields.

A simple BEFORE INSERT OR UPDATE trigger would take care of that.


Changing char to varchar will break commands where trailing space is used in 
comparison.

For example query

create table test ( test char(10) );
insert into test values ('test');
select * from test where test ='test '; -- note trailing space

does not return data anymore if your recommendation is used:

create table test ( test varchar );
insert into test values ('test');
select * from test where test ='test '; -- note trailing space

In production, 'test ' is a query parameter coming from the application with 
possible trailing space(s).

Adding a trigger does not fix this.
How to fix this without rewriting a huge number of SQL commands?

Andrus.




Re: Using compression on TCP transfer

2020-04-05 Thread Andrus
Hi!
Thank you.

>If you decide that it's without, you could apply a TRIM in a trigger on each 
>INSERT and UPDATE. Then, you could replace the table by a view of the same 
>name >and implement the TRIM on SELECT there. This way, you don't have to 
>touch anything in the application.

Can you provide sample code showing how to create the view, or another method, 
so that my select statement returns data?

Currently the select in the code

create table test ( test varchar );
insert into test values ('test');
select * from test where test ='test '; -- note trailing space


does not return data.

Andrus.
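A sketch of the suggested view approach, as an untested editor's illustration of the idea from the thread; note it does not by itself fix the WHERE comparison, since the parameter still carries the trailing space:

```sql
-- Rename the real table and put a trimming view in its place
ALTER TABLE test RENAME TO test_base;

CREATE VIEW test AS
    SELECT rtrim(test) AS test
    FROM test_base;

-- SELECTs through the view now see trimmed values, but a WHERE clause
-- comparing against 'test ' (trailing space) still fails unless the
-- parameter is trimmed somewhere as well, e.g.:
--   SELECT * FROM test WHERE test = rtrim('test ');
```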

Server with hot standby slave wont start after vacuum

2020-04-05 Thread Andrus

Hi!

Streaming asynchronous binary replication is used with hot standby slave.

To recover disk space

vacuumdb --all --full --skip-locked

is executed every night on the master.

During this, vacuumdb stops with an error:

vacuumdb: error: vacuuming of table "myschema.mytable" in database "mydb" failed: PANIC:  could not write to file 
"pg_wal/xlogtemp.24729": No space left on device

server closed the connection unexpectedly
   This probably means the server terminated abnormally
   before or while processing the request.

And the master won't start anymore:

LOG:  server process (PID 24729) was terminated by signal 6: Aborted
DETAIL:  Failed process was running: VACUUM (SKIP_LOCKED, FULL) firma39.rid;
LOG:  terminating any other active server processes
WARNING:  terminating connection because of crash of another server process
DETAIL:  The postmaster has commanded this server process to roll back the current transaction and exit, because another server 
process exited abnormally and possibly corrupted shared memory.

HINT:  In a moment you should be able to reconnect to the database and repeat 
your command.
...
LOG:  all server processes terminated; reinitializing
LOG:  database system was interrupted; last known up at 2020-04-06 01:14:10 EEST
LOG:  database system was not properly shut down; automatic recovery in progress
LOG:  redo starts at 2A0/C414BA68
FATAL:  could not extend file "global/58294678": wrote only 4096 of 8192 bytes 
at block 1728
HINT:  Check free disk space.
CONTEXT:  WAL redo at 2A0/D661D4B0 for XLOG/FPI:
LOG:  startup process (PID 24732) exited with exit code 1
LOG:  aborting startup due to startup process failure
LOG:  database system is shut down

pg_wal contains 2005 files with total size 32 GB and there is no free disk 
space.

hot standby server is connected over 20 Mbit internet.

Maybe vacuum full causes the creation of a huge number of files in pg_wal which 
cannot be transferred fast enough over a 20 Mbit internet connection.

How to fix this so that the master continues to work?
Maybe it is possible to disable the creation of WAL files by vacuum.

Postgres 12 in Debian is used.

Andrus. 






How to prevent master server crash if hot standby stops

2020-04-06 Thread Andrus

Hi!

Streaming asynchronous binary replication is used with hot standby slave.

If the slave stops responding, the master server will accumulate files in the 
pg_wal directory.
If the disk becomes full, the master server crashes as well.

How to avoid this?

If the disk is nearly full, the master should stop creating additional files 
(and maybe stop or disable the replication slot).
Postgres 12 in Debian 10 is used.

Andrus.




Re: How to prevent master server crash if hot standby stops

2020-04-06 Thread Andrus

Hi!

Thank you.


That's why you use monitoring.
Never use replication slots without monitoring replication (or at least the
disk space on the primary).


How to implement this automatically, with no human interaction required?

A "superuser_reserved_connections" setting exists.
How about implementing a "data_reserved_space" setting?

How to create a procedure on the server, or maybe a cron script, which stops 
replication if the disk becomes nearly full?

How to force a replication slot on the master to stop if it is in use?

Andrus.
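One shape for the cron-script idea, as an editor's sketch (the 1 GB threshold is an assumption): a periodic psql job that drops any inactive slot retaining too much WAL:

```sql
-- Intended to run periodically, e.g. from cron via:
--   psql -d postgres -f drop_stale_slots.sql
-- Drops inactive slots holding back more than 1 GB of WAL.
SELECT pg_drop_replication_slot(slot_name)
  FROM pg_replication_slots
 WHERE NOT active
   AND pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn) > 1073741824;
```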




Re: How to prevent master server crash if hot standby stops

2020-04-06 Thread Andrus

Hi!

Thank you.


When you use replication slots, it is very important to put in place a
monitoring solution to check if too much WAL is retained, and note
that there is nothing able to do that natively in core Postgres.
There are however multiple ways to solve this problem, like a
background worker (for the slot monitoring as well as optionally
killing and/or dropping), a simple cron job or even check_postgres.


Where to find some sample how to implement this ?

I read in the docs that a slot cannot be dropped if it is in use.
How to stop replication in this case?

Andrus.
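A slot that is in use can be freed by first terminating the walsender process that holds it. A sketch, with 'standby1' as a placeholder slot name:

    -- Kill the walsender holding the slot, then drop the slot.
    SELECT pg_terminate_backend(active_pid)
    FROM pg_replication_slots
    WHERE slot_name = 'standby1' AND active;

    SELECT pg_drop_replication_slot('standby1');

Note that the standby will reconnect and re-acquire the slot if it is still running, so there can be a race between the terminate and the drop.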




Re: How to prevent master server crash if hot standby stops

2020-04-06 Thread Andrus

Hi!


If you prefer replication to fail silently, don't use replication
slots.  Use "wal_keep_segments" instead.


I decided to give 1 GB to WAL. So I added

wal_keep_segments=60

After some time Postgres created 80 files with a total size of 1.3 GB.


How to fix this so that no more than 1 GB of disk space is used?
How to get information on how many WAL files have not yet been processed by the slave?
How to delete processed WAL files so that 1 GB of disk space can be used for some 
other purposes?

/var/lib/postgresql/12/main/pg_wal# ls
000102A20072  000102A20083  000102A20094  000102A200A5  000102A200B6
000102A20073  000102A20084  000102A20095  000102A200A6  000102A200B7
000102A20074  000102A20085  000102A20096  000102A200A7  000102A200B8
000102A20075  000102A20086  000102A20097  000102A200A8  000102A200B9
000102A20076  000102A20087  000102A20098  000102A200A9  000102A200BA
000102A20077  000102A20088  000102A20099  000102A200AA  000102A200BB
000102A20078  000102A20089  000102A2009A  000102A200AB  000102A200BC
000102A20079  000102A2008A  000102A2009B  000102A200AC  000102A200BD
000102A2007A  000102A2008B  000102A2009C  000102A200AD  000102A200BE
000102A2007B  000102A2008C  000102A2009D  000102A200AE  000102A200BF
000102A2007C  000102A2008D  000102A2009E  000102A200AF  000102A200C0
000102A2007D  000102A2008E  000102A2009F  000102A200B0  000102A200C1
000102A2007E  000102A2008F  000102A200A0  000102A200B1  archive_status
000102A2007F  000102A20090  000102A200A1  000102A200B2
000102A20080  000102A20091  000102A200A2  000102A200B3
000102A20081  000102A20092  000102A200A3  000102A200B4
000102A20082  000102A20093  000102A200A4  000102A200B5


Andrus.
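On the primary, how far the standby lags (and thus which WAL has already been consumed) can be read from pg_stat_replication. A sketch of such a check:

    -- One row per connected standby: how much WAL it has yet to replay.
    SELECT application_name,
           pg_size_pretty(
               pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn)
           ) AS replay_lag
    FROM pg_stat_replication;

Segments older than what every standby has replayed (and older than the last checkpoint needs) are recycled automatically; they are never meant to be deleted by hand.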





Re: How to prevent master server crash if hot standby stops

2020-04-07 Thread Andrus

Hi!


About your third question, you *never* manually mess with the files in pg_wal.
The server does that.


Is it OK to stop the server, delete all files in the pg_wal directory and restart the 
server?

Or should the default value be put back, waiting until the server frees 1 GB of disk space?

Andrus.
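Rather than touching pg_wal by hand, the safer route is to lower the setting and let the server recycle the segments itself. A sketch (the new value of 8 is just an example):

    -- In postgresql.conf, reduce wal_keep_segments (e.g. 60 -> 8),
    -- then reload the configuration and force a checkpoint.
    -- Surplus segments are removed/recycled over the next checkpoints.
    SELECT pg_reload_conf();
    CHECKPOINT;

The space is not reclaimed instantly; old segments disappear as subsequent checkpoints complete.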




Performance degradation if query returns no rows and column expression is used after upgrading to 12

2020-04-07 Thread Andrus

Hi!

Query returns no rows but its execution time in Postgres 12 depends on the 
column expression.

Query with  column expression

coalesce( (select sum(taitmata) from rid join dok using (dokumnr)
where toode=toode.toode and doktyyp='T' and not dok.taidetud and dok.kinnitatud 
and
taitmata is not null),0)

takes 666 ms:

explain analyze select
coalesce( (select sum(taitmata) from rid join dok using (dokumnr)
where toode=toode.toode and doktyyp='T' and not dok.taidetud and dok.kinnitatud 
and
taitmata is not null),0)
from toode
   where toode.ribakood='testmiin'::text
  or toode.nimetus ilike '%'||'testmiin'||'%' escape '!'
  or toode.toode ilike '%'||'testmiin'||'%' escape '!'
  or toode.markused ilike '%'||'testmiin'||'%' escape '!'
  or to_tsvector('english',toode.nimetus) @@ plainto_tsquery('testmiin')
or to_tsvector('english',toode.engnimetus) @@  
plainto_tsquery('testmiin')

"Gather  (cost=1000.00..505930.82 rows=153 width=32) (actual time=661.419..661.476 
rows=0 loops=1)"
"  Workers Planned: 1"
"  Workers Launched: 1"
"  ->  Parallel Seq Scan on toode  (cost=0.00..10015.31 rows=90 width=21) (actual 
time=574.922..574.922 rows=0 loops=2)"
"Filter: (((ribakood)::text = 'testmiin'::text) OR (nimetus ~~* '%testmiin%'::text) OR (toode ~~* '%testmiin%'::text) OR 
(markused ~~* '%testmiin%'::text) OR (to_tsvector('english'::regconfig, (nimetus)::text) @@ plainto_tsquery('testmiin'::text)) OR 
(to_tsvector('english'::regconfig, (engnimetus)::text) @@ plainto_tsquery('testmiin'::text)))"

"Rows Removed by Filter: 7202"
"  SubPlan 1"
"->  Aggregate  (cost=3234.63..3234.64 rows=1 width=32) (never executed)"
"  ->  Nested Loop  (cost=11.26..3234.52 rows=43 width=3) (never 
executed)"
"->  Bitmap Heap Scan on rid  (cost=10.84..1191.72 rows=270 width=7) 
(never executed)"
"  Recheck Cond: (toode = toode.toode)"
"  Filter: (taitmata IS NOT NULL)"
"  ->  Bitmap Index Scan on rid_toode_pattern_idx  
(cost=0.00..10.77 rows=312 width=0) (never executed)"
"Index Cond: (toode = toode.toode)"
"->  Index Scan using dok_pkey on dok  (cost=0.42..7.57 rows=1 
width=4) (never executed)"
"  Index Cond: (dokumnr = rid.dokumnr)"
"  Filter: ((NOT taidetud) AND kinnitatud AND (doktyyp = 
'T'::bpchar))"
"Planning Time: 2.102 ms"
"JIT:"
"  Functions: 24"
"  Options: Inlining true, Optimization true, Expressions true, Deforming true"
"  Timing: Generation 5.592 ms, Inlining 95.077 ms, Optimization 463.742 ms, 
Emission 277.062 ms, Total 841.473 ms"
"Execution Time: 666.007 ms"


The same query with a simple column expression

1

runs 3.6 times faster:


explain analyze select 1
from toode
   where toode.ribakood='testmiin'::text
  or toode.nimetus ilike '%'||'testmiin'||'%' escape '!'
  or toode.toode ilike '%'||'testmiin'||'%' escape '!'
  or toode.markused ilike '%'||'testmiin'||'%' escape '!'
  or to_tsvector('english',toode.nimetus) @@ plainto_tsquery('testmiin')
or to_tsvector('english',toode.engnimetus) @@
plainto_tsquery('testmiin')

"Gather  (cost=1000.00..11030.61 rows=153 width=4) (actual time=182.414..185.648 
rows=0 loops=1)"
"  Workers Planned: 1"
"  Workers Launched: 1"
"  ->  Parallel Seq Scan on toode  (cost=0.00..10015.31 rows=90 width=4) (actual 
time=155.338..155.339 rows=0 loops=2)"
"    Filter: (((ribakood)::text = 'testmiin'::text) OR (nimetus ~~* '%testmiin%'::text) OR (toode ~~* '%testmiin%'::text) OR 
(markused ~~* '%testmiin%'::text) OR (to_tsvector('english'::regconfig, (nimetus)::text) @@ plainto_tsquery('testmiin'::text)) OR 
(to_tsvector('english'::regconfig, (engnimetus)::text) @@ plainto_tsquery('testmiin'::text)))"

"Rows Removed by Filter: 7202"
"Planning Time: 1.729 ms"
"Execution Time: 185.674 ms"

If there are more column expressions, the performance difference is bigger.
The rid table used in the column expression contains 1.8 million rows.
The performance degradation probably occurred when upgrading from Postgres 9.1 to 
Postgres 12.

Since no data is returned, query performance should be the same.
How to fix it?

Andrus. 





Re: Performance degradation if query returns no rows and column expression is used after upgrading to 12

2020-04-07 Thread Andrus
Hi!
>on your query the JIT is too slow. It is strange how much. So the best way is
>probably to disable JIT

>set jit to off;
>or same field in postgresql.conf

Thank you.

set jit to off

makes the select fast.
I have encountered this issue only in this query, in one database.

There is a variation of this query running with different data in a different 
database on the same Debian 10 server. It works fast.
Should I disable jit only for this query or in postgresql.conf permanently?

Andrus.
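Disabling JIT for just one query (instead of cluster-wide in postgresql.conf) can be done with a transaction-scoped setting. A sketch:

    -- SET LOCAL only affects the current transaction,
    -- so other queries keep JIT enabled.
    BEGIN;
    SET LOCAL jit = off;
    -- ... run the slow query here ...
    COMMIT;

An alternative is to raise jit_above_cost so that only genuinely expensive plans are JIT-compiled, which avoids per-query changes.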

Re: Performance degradation if query returns no rows and column expression is used after upgrading to 12

2020-04-07 Thread Andrus
Hi!

>It is really strange why it is so slow. Can you prepare a test case? Looks like 
>a bug (maybe not Postgres's bug)

The test case is below.
With jit on it takes 3.3 sec and with jit off 1.5 sec.

Andrus.

create temp table toode ( toode char(20), ribakood char(20), 
 nimetus char(50), markused char(50), engnimetus char(50) ) on commit drop;
insert into toode (toode) select generate_series(1,14400);
CREATE INDEX ON toode USING gin
(to_tsvector('english'::regconfig, nimetus::text));
CREATE UNIQUE INDEX ON toode (ribakood)
WHERE ribakood IS NOT NULL AND btrim(ribakood::text) <> ''::text;
CREATE INDEX ON toode (toode);
CREATE UNIQUE INDEX ON toode (upper(toode::text));
create temp table dok ( dokumnr serial  primary key ) on commit drop;
insert into dok  select generate_series(1,14400);

create temp table rid ( dokumnr int, taitmata numeric, toode char(20)  ) on 
commit drop;
insert into rid  select generate_series(1,144);

CREATE INDEX rid_dokumnr_idx ON rid (dokumnr);
-- jit on: 3.3 sec  jit off: 1.5 sec
set jit to off;
select
  (select sum(taitmata) from rid join dok using (dokumnr) where toode=toode.toode),
  (select sum(taitmata) from rid join dok using (dokumnr) where toode=toode.toode),
  (select sum(taitmata) from rid join dok using (dokumnr) where toode=toode.toode),
  (select sum(taitmata) from rid join dok using (dokumnr) where toode=toode.toode),
  (select sum(taitmata) from rid join dok using (dokumnr) where toode=toode.toode),
  (select sum(taitmata) from rid join dok using (dokumnr) where toode=toode.toode),
  (select sum(taitmata) from rid join dok using (dokumnr) where toode=toode.toode),
  (select sum(taitmata) from rid join dok using (dokumnr) where toode=toode.toode),
  (select sum(taitmata) from rid join dok using (dokumnr) where toode=toode.toode),
  (select sum(taitmata) from rid join dok using (dokumnr) where toode=toode.toode),
  (select sum(taitmata) from rid join dok using (dokumnr) where toode=toode.toode),
  (select sum(taitmata) from rid join dok using (dokumnr) where toode=toode.toode),
  (select sum(taitmata) from rid join dok using (dokumnr) where toode=toode.toode),
  (select sum(taitmata) from rid join dok using (dokumnr) where toode=toode.toode),
  (select sum(taitmata) from rid join dok using (dokumnr) where toode=toode.toode),
  (select sum(taitmata) from rid join dok using (dokumnr) where toode=toode.toode),
  (select sum(taitmata) from rid join dok using (dokumnr) where toode=toode.toode),
  (select sum(taitmata) from rid join dok using (dokumnr) where toode=toode.toode)
from toode
where toode.ribakood='testmiin'::text
   or toode.nimetus ilike '%'||'testmiin'||'%' escape '!'
   or toode.toode ilike '%'||'testmiin'||'%' escape '!'
   or toode.markused ilike '%'||'testmiin'||'%' escape '!'
   or to_tsvector('english',toode.nimetus) @@ plainto_tsquery('testmiin')
   or to_tsvector('english',toode.engnimetus) @@ plainto_tsquery('testmiin');

Re: Performance degradation if query returns no rows and column expression is used after upgrading to 12

2020-04-08 Thread Andrus
Hi


>this query is a little bit strange - it has a pretty big cost, and because it returns 
>nothing, it's pretty fast relative to the cost.
>there are 18 subqueries, but jit_above_cost is related just to one query. This 
>is probably the worst case for JIT.
>This query is pretty slow and expensive (and then the cost of JIT is minimal), 
>but when the query returns some rows, then JIT starts to help.
>So maybe if you find some queries that return some rows, then the speed will 
>be better with active JIT than with disabled JIT.

Below is a modified test case which returns one row.
On a Debian 10 VPS with jit on it takes 2.5 sec and with jit off 0.4 s.

jit is still many times slower on Debian even if data is returned.

On a Windows 10 workstation there is no difference.

>The situation when the query returns no rows, then JIT is significant 
>bottleneck - but it looks like corner case.

Both test cases simulate search queries in a typical e-shop.
Users can use any search term and expect that the query returns fast.

Modified testcase which returns one row:

create temp table toode ( toode char(20) primary key, ribakood char(20), 
 nimetus char(50), markused char(50), engnimetus char(50) ) on 
commit drop;
insert into toode (toode) select generate_series(1,14400);
insert into toode (toode,nimetus) 
select 'TEST'|| generate_series, 'This is testmiin item'
  from generate_series(1,1);

CREATE INDEX ON toode USING gin(to_tsvector('english'::regconfig, 
nimetus::text));
CREATE UNIQUE INDEX ON toode (ribakood)
WHERE ribakood IS NOT NULL AND btrim(ribakood::text) <> ''::text;
create temp table dok ( dokumnr serial  primary key ) on commit drop;
insert into dok  select generate_series(1,1);

create temp table rid (id serial primary key, 
   dokumnr int references dok, taitmata numeric, toode 
char(20) references toode ) on commit drop;
insert into rid  (dokumnr,toode)
select generate_series % 1+1, 1
from  generate_series(1,1);

CREATE INDEX ON rid(dokumnr );
CREATE INDEX ON rid(toode);
-- jit on: 2.5 sec  jit off:  0.4 s
set jit to off;
select
  (select sum(taitmata) from rid join dok using (dokumnr) where toode=toode.toode),
  (select sum(taitmata) from rid join dok using (dokumnr) where toode=toode.toode),
  (select sum(taitmata) from rid join dok using (dokumnr) where toode=toode.toode),
  (select sum(taitmata) from rid join dok using (dokumnr) where toode=toode.toode),
  (select sum(taitmata) from rid join dok using (dokumnr) where toode=toode.toode),
  (select sum(taitmata) from rid join dok using (dokumnr) where toode=toode.toode),
  (select sum(taitmata) from rid join dok using (dokumnr) where toode=toode.toode),
  (select sum(taitmata) from rid join dok using (dokumnr) where toode=toode.toode),
  (select sum(taitmata) from rid join dok using (dokumnr) where toode=toode.toode),
  (select sum(taitmata) from rid join dok using (dokumnr) where toode=toode.toode),
  (select sum(taitmata) from rid join dok using (dokumnr) where toode=toode.toode),
  (select sum(taitmata) from rid join dok using (dokumnr) where toode=toode.toode),
  (select sum(taitmata) from rid join dok using (dokumnr) where toode=toode.toode),
  (select sum(taitmata) from rid join dok using (dokumnr) where toode=toode.toode),
  (select sum(taitmata) from rid join dok using (dokumnr) where toode=toode.toode),
  (select sum(taitmata) from rid join dok using (dokumnr) where toode=toode.toode),
  (select sum(taitmata) from rid join dok using (dokumnr) where toode=toode.toode),
  (select sum(taitmata) from rid join dok using (dokumnr) where toode=toode.toode)
from toode
where toode.ribakood='testmiin'::text
   or toode.nimetus ilike '%'||'testmiin'||'%' escape '!'
   or toode.toode ilike '%'||'testmiin'||'%' escape '!'
   or toode.markused ilike '%'||'testmiin'||'%' escape '!'
   or to_tsvector('english',toode.nimetus) @@ plainto_tsquery('testmiin')
   or to_tsvector('english',toode.engnimetus) @@ plainto_tsquery('testmiin');

Andrus.

Huge number of pg_temp and pg_toast_temp schemas

2020-04-10 Thread Andrus

Hi!

Postgres 12 database contains huge number of pg_temp and pg_toast_temp schemas 
named

pg_temp_1 ..  pg_temp_126

and

pg_toast_temp_1  .. pg_toast_temp_126

There are in total 2 * 126 = 252 unnecessary schemas in one database.
Those schemas do not seem to contain any objects.

Other databases in the cluster have similar schemas, but in smaller numbers.

How to delete them automatically and prevent appear in future?

The cluster stopped since the disk became full, and streaming async replication is 
used. Maybe one of those or some other factor created, or
continues to create, those schemas. Postgres 12.2 on Debian 10 is used.

Andrus. 
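To confirm the temp schemas really are empty, the catalogs can be queried directly. A sketch:

    -- List each temp schema and how many objects it currently holds.
    SELECT n.nspname, count(c.oid) AS objects
    FROM pg_namespace n
    LEFT JOIN pg_class c ON c.relnamespace = n.oid
    WHERE n.nspname ~ '^pg_(toast_)?temp_'
    GROUP BY n.nspname
    ORDER BY n.nspname;

A count of zero for every row means the schemas are just leftover catalog entries.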


Re: Huge number of pg_temp and pg_toast_temp schemas

2020-04-10 Thread Andrus

Hi!

Thank you.


This is an unsurprising state, if you routinely have a hundred-plus
connections that sometimes create temp tables.  Each such session
needs schemas to keep its temp tables in.  The temp tables are
deleted at session exit, but we don't bother to remove the schema
entries, figuring that they'll probably be needed again later, and
a couple of rows in pg_namespace is negligible overhead anyway.


How to hide temp schemas from the pgAdmin 4 tree?
With such a large number of temporary schemas, the pgAdmin schema tree view becomes 
polluted and makes database management difficult.
I turned most of the Nodes options off in the pgAdmin options, but pgAdmin still shows 
them. It also shows the pg_toast schema.

Andrus.


Forcibly disconnect users from one database

2020-04-10 Thread Andrus

Hi!

Postgres 12 does not allow to drop database if it is in use. DROP DATABASE 
throws error:

postgres@template1 ERROR:  database "mydb" is being accessed by other users
postgres@template1 DETAIL:  There is 1 other session using the database.
postgres@template1 STATEMENT:  DROP DATABASE "mydb"

How to disconnect all users from a database?
Using

/etc/init.d/postgresql restart

disconnects users from all databases. How to disconnect users from one database 
only?
I looked into pgAdmin 4 but haven't found such a command.


pgAdmin 3 had buttons to kill processes selected by database name, but those 
disappeared in pgAdmin 4.

How to find which users are using a database and forcibly disconnect them?

Andrus.
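On Postgres 12 the usual approach is to terminate the other sessions from pg_stat_activity and then drop the database. A sketch (Postgres 13+ also offers DROP DATABASE ... WITH (FORCE), which does this in one step):

    -- Kick every other session connected to mydb, then drop it.
    SELECT pg_terminate_backend(pid)
    FROM pg_stat_activity
    WHERE datname = 'mydb'
      AND pid <> pg_backend_pid();  -- don't terminate ourselves

    DROP DATABASE "mydb";

Run this while connected to a different database (e.g. postgres), since the database being dropped cannot be the current one.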




Which commands are guaranteed to drop role

2020-04-11 Thread Andrus

Hi!

Database "mydb" is owned by role "mydb_owner". 

User "currentuser"  tries to delete role "roletodelete" from this database using 


revoke all on all tables in schema public,firma1 from "roletodelete" cascade;
revoke all on all sequences in schema public,firma1 from "roletodelete" cascade;
revoke all on database mydb from "roletodelete" cascade;
revoke all on all functions in schema public,firma1 from "roletodelete" cascade;
revoke all on schema public,firma1 from "roletodelete" cascade;
revoke mydb_owner from "roletodelete" cascade;
ALTER DEFAULT PRIVILEGES IN SCHEMA public,firma1 revoke all ON TABLES from 
"roletodelete";
GRANT "roletodelete" TO "currentuser";
reassign owned by "roletodelete" to mydb_owner;
drop owned by "roletodelete";
drop role "roletodelete";

But got error

ERROR: role "roletodelete" cannot be dropped because some objects depend on it
DETAIL: privileges for schema public;

How to create script which  is guaranteed to delete role ?

This script already contains:

revoke all on schema public,firma1 from "roletodelete" cascade;

Why does Postgres complain that privileges for schema public depend on this role if 
they are revoked?
How to fix this?

Andrus 





Re: Which commands are guaranteed to drop role

2020-04-12 Thread Andrus

Hi!


You cannot write such a script, but you will have to REVOKE and change ownership

and ALTER DEFAULT PRIVILEGES until no more dependencies on the role exist.

I ran script as superuser. In this case more detailed information appears:

ERROR: role "roletodelete" cannot be dropped because some objects depend on it

DETAIL:  privileges for default privileges on new relations belonging to role 
currentuser in schema public
privileges for default privileges on new relations belonging to role 
currentuser schema firma1

I changed script to 


do $$
   DECLARE r record;
   begin
 for r in select * from pg_views where schemaname IN ('public','firma1')
 loop
   execute 'revoke all on ' || quote_ident(r.schemaname) ||'.'|| 
quote_ident(r.viewname) || ' from roletodelete cascade';
 end loop;
   end $$;
GRANT roletodelete TO currentuser;
revoke all on all tables in schema public,firma1 from roletodelete cascade;
revoke all on all sequences in schema public,firma1  from roletodelete cascade;
revoke all on all functions in schema public,firma1 from roletodelete cascade;
revoke all on schema public,firma1 from roletodelete cascade;
REVOKE CONNECT ON DATABASE mydb from roletodelete cascade;
revoke all on database mydb from roletodelete cascade;

revoke mydb_owner  from roletodelete  cascade;
ALTER DEFAULT PRIVILEGES IN SCHEMA public,firma1 revoke all ON TABLES from 
roletodelete cascade;
ALTER DEFAULT PRIVILEGES IN SCHEMA public,firma1  revoke all ON sequences from 
roletodelete cascade;
ALTER DEFAULT PRIVILEGES IN SCHEMA public,firma1  revoke all ON functions from 
roletodelete cascade;
ALTER DEFAULT PRIVILEGES IN SCHEMA public,firma1  revoke all ON types from 
roletodelete cascade;
ALTER DEFAULT PRIVILEGES revoke all ON schemas from roletodelete cascade;

REVOKE USAGE ON SCHEMA public,firma1 FROM roletodelete cascade;
reassign owned by roletodelete to mydb_owner;
drop owned by roletodelete;
drop role roletodelete;

In this case it deleted the user if it was run under the superuser postgres.

A non-superuser still cannot delete the user using this script. How to allow a 
non-superuser to delete the user as well?


That is why you are well advised not to grant permissions to a role that you
plan to drop.


A role represents a person. An ODBC connection is used. The person's rights should be 
restricted in the database in this case.

How to simplify this script so that the user will always be deleted?
Maybe some parts of the script are not necessary.

Why does Postgres not have a simple command like

drop role roletodelete reassign owned to currentuser cascade

but requires a 25-line script for this?
Some parts of this script will probably not work in Postgres 9.0. How to add 9.0+ support for it?
Revoking privileges from views is not required in earlier releases.


Andrus.





How to restore database to previous state

2020-05-10 Thread Andrus

Hi!

A Postgres 12 database is running on Debian.

max_wal_size = 5GB

is specified in the postgresql.conf file.

How to restore the database to some previous state, e.g. before important data was 
accidentally deleted?

Binary asynchronous replication with a slot is also used.

It looks like it is possible to create a standby server which replicates changes using a 24 hour delay, plus data directory backups 
every night.

However this requires a separate delayed standby server.
Is there a simpler solution, e.g. reverse playback of WAL segments starting from 
the current data directory contents?

Andrus. 
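For the forward-recovery route (there is no reverse playback of WAL), PITR in Postgres 12 is configured directly in postgresql.conf of a restored base backup. A sketch; the paths and the timestamp are placeholders:

    # postgresql.conf of the restored base backup (Postgres 12)
    restore_command = 'cp /var/backups/wal_archive/%f "%p"'
    recovery_target_time = '2020-05-10 08:00:00'
    recovery_target_action = 'pause'

    # plus an empty recovery.signal file in the data directory
    # to put the server into targeted recovery mode on startup

This assumes WAL segments since the base backup are available in an archive; replay always goes forward from the backup to the target time.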






Hot and PITR backups in same server

2020-05-17 Thread Andrus

Hi!

How to implement hot standby and PITR recovery possibilities on the same backup 
server?

Plan is:

1. Create base backup using

pg_basebackup --checkpoint=fast --verbose --progress --write-recovery-conf -D 
/var/lib/postgresql/12/standby

2. Create backup copy of   /var/lib/postgresql/12/standby  directory for PITR

3. set max_wal_size in postgresql.conf   to   5 TB

4. Start backup server for hot standby backups.

If data from earlier point of time is required:

1. Stop backup server
2. Replace its data directory with the initial data directory contents created in 
previous p.2
4. Copy pg_wal contents from hot standby pg_wal directory to initial pg_wal 
directory in base backup
5. Specify recovery time in postgresql.conf and start backup server to recover 
to this point of time.

The most suspicious point is p.4: manually copying pg_wal contents from the hot 
standby data to the base backup data.

Is this OK? Or is there some better way to implement hot standby and PITR possibilities 
on the same computer?
Postgres 12 on Debian is used.

Andrus.




Re: Hot and PITR backups in same server

2020-05-18 Thread Andrus

Hi!


This is confused or at least confusing.
- "max_wal_size" of 5TB is clearly insane.
- I don't understand what you mean by "Start backup server for hot standby 
backups".
Do I get it right that you want to copy a streaming replication standby 
server's data

directory to perform PITR?

I want to copy only pg_wal directory contents from this.

After pg_basebackup has finished, a copy of its data directory is saved for 
possible PITR.
Its pg_wal contents will be replaced with the current pg_wal directory contents.

After that the recovery time will be set in postgresql.conf, and a separate server on port 5433 
is used to perform PITR using this data directory.


That doesn't seem like a good plan, because the standby
usually won't be much behind the primary server, and you can only recover to a 
later
point in time.


I will take the data from the copy made when the hot backup server was created and replace its pg_wal directory contents with the pg_wal contents of the current 
backup server.

Since max_wal_size is big, it should contain all WAL segments from the time the 
base backup was taken.


If you care to elaborate, perhaps the question can be answered.


Currently binary async hot standby backup server is working OK and replicates 
main sever database almost immediately.

However if important data is deleted on the same day, it cannot be recovered, since the hot standby server database has replicated the same data as 
the master server.

The master server's disk space is limited. It cannot keep a large number of WAL 
segments.
Standby server has lot of free disk space.

I'm looking for a way to collect data for PITR recovery (WAL segments) in 
standby server.

I have an initial base backup of the data directory created using pg_basebackup.
All WAL segments from the master server after pg_basebackup should be saved on the backup server and should be used for PITR recovery when 
needed.


How to use hot standby server for this ?

Or should separate cluster in 5433 port created and pg_rewind or something 
other used for PITR.

Andrus. 






Re: Hot and PITR backups in same server

2020-05-19 Thread Andrus

Hi!

Thank you.


As Laurentz said, even if you set it to 5TB, no WAL files older than
the files needed by the last checkpoint remain. If you don't
need a hot-standby, you can use pg_receivewal to save WAL files on the
"standby" server.  If you need the hot-standby, WAL files should be
archived.


So pg_receivewal should be running in parallel with the hot standby server to receive 
WAL files into a separate directory.

Will each WAL file be transferred twice in this case? Once by the hot standby 
server and a second time by pg_receivewal.
The main server is priced by the amount of bandwidth used.
How to receive each WAL file only once? This would allow the network 
bandwidth, and thus the cost, to be halved.

pg_receivewal has a compression option. Will this compress WAL files before 
transfer over the network?


If so, what you need seems to be pg_receivewal, not a full-fledged
server.


For hot standby and PITR in same server the following steps are required:

1. Create base backup using pg_basebackup
2. Create copy of base backup for PITR
3. Start hot standby server using data directory created in p.1
4. Run pg_receivewal as a background process to save WAL files to the backup created 
in p.2

If PITR is required, specify the recovery time in postgresql.conf in the base backup 
created in p.2.
Start second postgresql server instance in port 5433 which uses this backup for 
recovery.

Is this OK or should something changed?

How to run pg_receivewal in the background? Is there some option which creates a service so that it will automatically restart if the server is 
restarted?


How to allow the main server to keep a sufficient number of WAL segments?
A replication slot cannot be used: if the backup server stops, the replication slot causes the main server to fill its disk space with untransferred WAL 
files.

After that the main server will also stop with a "no space left on device" error.

Or is there some option, like reserving some disk space or limiting WAL size, so 
that the main server can continue after a backup server crash?

Andrus. 
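A typical pg_receivewal setup uses a dedicated physical replication slot so the primary keeps segments only until they are received. A sketch; the host, user, slot name and directory are placeholders:

    # Create the slot once (pg_receivewal exits after creating it):
    pg_receivewal --slot=walreceiver1 --create-slot \
        -h primary.example.com -U replicator

    # Then stream WAL continuously, gzip-compressing each segment:
    pg_receivewal --slot=walreceiver1 \
        -h primary.example.com -U replicator \
        -D /var/backups/wal_archive --compress=9

To survive reboots, this second command is usually wrapped in a systemd service with Restart=always rather than run from cron. Note that with a slot the disk-full risk on the primary remains if the receiver stays down, so slot monitoring is still needed.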






How to recover from compressed WAL archive in Windows

2020-05-20 Thread Andrus
In Windows 10,

pg_receivewal --directory="d:\wallog" --verbose --compress=9

is used to archive WAL.

This creates .gz files.

For restore,

restore_command='copy "D:\\wallog\\%f" "%p"'

is used.
Restore shows "file not found" errors in the console. This is probably because the %f 
argument is the WAL file name without the extension.
How to use compressed WAL files for WAL archive and restore in Windows?

Andrus.





Re: How to recover from compressed WAL archive in Windows

2020-05-20 Thread Andrus

Hi!


I'm guessing:
restore_command='copy "D:\\wallog\\%f.gz" "%p"'
will get you the file.
The next problem is that I'm pretty sure a WAL file with *.gz extension 
will not be able to be processed directly by the server. So you are 
going to have to uncompress it at some point before it gets restored.


How to decompress it automatically in windows ?
In Linux 


restore_command = 'gunzip < "archived_wal/%f" > "%p"'

maybe works.

Will wal_compression=on produce compressed WAL files so that additional 
compression is not needed?

Andrus.




Re: How to recover from compressed WAL archive in Windows

2020-05-20 Thread Andrus

Hi!

Will wal_compression=on produce compressed WAL files so that additional 
compression is not needed?

Yes. Not sure how it will play with the streaming that pg_receivewal does.


I looked into the WAL files created with wal_compression=on in the pg_wal directory.
They still contain a lot of ASCII data from the database which is not packed.
It looks like the WAL file content is not compressed.

Maybe the best thing is to back up a bit and let us know what it is you 
are trying to achieve?


Trying to add a PITR possibility from a hot standby server which uses async binary 
streaming replication without a named slot.
The server creates 1 GB of WAL files per day.
pg_receivewal with --compress=9 produces 3..5 MB WAL files instead of 16 MB.
Trying to save disk space.
I set the Windows directory attribute to "compressed" on the WAL archive directory. Transparent compression is then applied by the OS.

However when WAL files are copied to another device, the copy is probably performed in 
uncompressed form in Windows.
How to set the compressed attribute on the pg_wal directory in a Linux ext4 file system?

Andrus.




Query returns no rows in pg_basebackup cluster

2020-05-21 Thread Andrus

Hi!

On Windows, pg_basebackup was used to create a base backup from the Linux server.
The baas column data type is character(8).

On the Linux server the query


select *  from firma1.desktop where baas='_LOGIFAI'

returns 16 rows.

On the Windows server this query returns 0 rows.

On the Windows server the same query using like

select *  from firma1.desktop where baas like '_LOGIFAI'

properly returns 16 rows.


Maybe this is because the database locale is not known on Windows:

CREATE DATABASE sba
   WITH 
   OWNER = sba_owner

   ENCODING = 'UTF8'
   LC_COLLATE = 'et_EE.UTF-8'
   LC_CTYPE = 'et_EE.UTF-8'
   TABLESPACE = pg_default
   CONNECTION LIMIT = -1;

The correct locale for Windows should be

LC_COLLATE = 'Estonian_Estonia.1257'
LC_CTYPE = 'Estonian_Estonia.1257'

If so, how to fix the Windows cluster so that the query returns the proper result on 
Windows also?
The database on Windows is in read-only (recovery) mode, so it cannot be changed.
Postgres 12 is used.

Andrus.
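The collation settings actually recorded for the database can be checked on both servers with a catalog query. A sketch:

    -- Show the collation and ctype the cluster thinks this database uses.
    SELECT datname, datcollate, datctype
    FROM pg_database
    WHERE datname = 'sba';

If the values differ between the Linux primary and the Windows copy (or are not recognized by the Windows OS), btree index lookups on text columns can silently miss rows, which matches the symptom above.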




Re: Query returns no rows in pg_basebackup cluster

2020-05-21 Thread Andrus

Hi!


Are you referring to two different instances of Postgres on Windows?


No. 
Main server is in Linux and backup server is in windows.


Andrus.




Re: Query returns no rows in pg_basebackup cluster

2020-05-21 Thread Andrus

Hi!


The LIKE query probably doesn't use an index and thus finds the relevant
data via sequential scan and equality checks on each record.



Yeah, exactly.  An equality condition will use a btree index if
available.  LIKE, however, sees the "_" as a wildcard so it cannot
use an index and resorts to a seqscan --- which will work fine.
It's just index searches (and index-based sorts) that are broken.
Of course, if there isn't an index on the column in question
then this theory falls to the ground.


There is composite index on baas column

CREATE TABLE public.desktop
(
   id integer NOT NULL DEFAULT nextval('desktop_id_seq'::regclass),
   recordtype character(5) COLLATE pg_catalog."default" NOT NULL,
   klass character(1) COLLATE pg_catalog."default",
   baas character(8) COLLATE pg_catalog."default" NOT NULL,
   liigid character(1) COLLATE pg_catalog."default" NOT NULL DEFAULT ''::bpchar,
   jrk numeric(4,0) NOT NULL DEFAULT 0,
...
   CONSTRAINT desktop_pkey PRIMARY KEY (id),
   CONSTRAINT desktop_baas_not_empty CHECK (baas <> ''::bpchar),
   CONSTRAINT desktop_id_check CHECK (id > 0),
   CONSTRAINT desktop_recordtype_check CHECK (recordtype = 'Aken'::bpchar OR 
recordtype = 'Veerg'::bpchar)
)

TABLESPACE pg_default;
CREATE INDEX desktop_baas_liigid_idx
   ON public.desktop USING btree
   (baas COLLATE pg_catalog."default" ASC NULLS LAST, liigid COLLATE 
pg_catalog."default" ASC NULLS LAST)
   TABLESPACE pg_default;

Maybe it is possible to force postgres in windows to use the same locale as in 
Linux. Locales are actually the same.

Andrus.





Re: Query returns no rows in pg_basebackup cluster

2020-05-21 Thread Andrus

Hi!


No, what it sounds like is the OP tried to physically replicate a
database on another platform with completely different sorting rules.


The sorting rules for this locale should be the same on both platforms; 
only the locale names differ. It looks like the Windows server does not 
recognize the Linux locale name.


Which means all his text indexes are corrupt according to the
destination platform's sorting rules, which easily explains the
observed misbehavior (ie, index searches not finding the expected rows).


Many queries seem to work properly.

REINDEX would fix it. 


REINDEX throws error

ERROR:  cannot execute REINDEX during recovery
SQL state: 25006
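
For reference, a sketch of what this implies (pg_promote() exists from 
PostgreSQL 12; the database name "sba" is an assumption taken from a later 
message):

```sql
-- REINDEX only becomes possible once the standby has left recovery,
-- i.e. after promotion to read-write mode:
SELECT pg_promote();   -- PostgreSQL 12+; ends recovery
-- Then rebuild all indexes using the local (Windows) collation rules:
REINDEX DATABASE sba;  -- "sba" is an assumed database name
```

The trade-off: a promoted standby can no longer replay further WAL, so this 
only fits once the desired recovery point has been reached.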


But the major point here is you can't just ignore
a collation mismatch, which in turn implies that you can't do physical
replication from Linux to Windows, or vice versa (and most other
cross-platform cases are just as dangerous).


The database is used in recovery mode to find the proper recovery point and to 
extract data as of that point.
The locales are actually the same; Postgres on Windows just does not recognize 
the Linux locale name.


Database in Windows is in read-only (recovery) mode so it cannot changed.

Then you might as well just rm -rf it (or whatever the equivalent Windows
incantation is).  On Windows, that database is broken and useless.


Most queries seem to work.
The database should be examined to recover accidentally deleted data.


Is making the cluster read-write and reindexing the only solution, or can it 
be fixed in a read-only database as well, e.g. by forcing the same locale in 
postgresql.conf?

Andrus.




Re: Query returns no rows in pg_basebackup cluster

2020-05-21 Thread Andrus

Hi!


Main server is in Linux and backup server is in windows.

This is not a supported setup if you want to run a physical backup.
Your backup and your primary need to be the same - software and hardware.
Consider anything that is working to be a false negative - assume
something will break or simply give incorrect results.


This base backup should be used for recovery. Taking a new base backup on 
Linux would not allow recovering to an earlier date.
Both servers have Intel 64-bit CPUs.
I understand that the only issue is the index structure and that REINDEX will 
fix it.
What other issues may occur ?

Will pg_dump/pg_restore on the Windows server fix all issues?

Andrus. 






Re: Query returns no rows in pg_basebackup cluster

2020-05-21 Thread Andrus

Hi!


Database in Windows is in read-only (recovery) mode so it cannot changed.

Then you might as well just rm -rf it (or whatever the equivalent Windows
incantation is).  On Windows, that database is broken and useless.


The backup is created on Windows from the Linux server using pg_receivewal and 
pg_basebackup.
Can this backup be used for PITR on Linux?

Andrus.




How to migrate database from 10.1 to 9.1

2018-05-13 Thread Andrus

Database is created in

   "PostgreSQL 10.1 on x86_64-pc-mingw64, compiled by gcc.exe (Rev5, Built 
by MSYS2 project) 4.9.2, 64-bit"


and contains tables with autogenerated primary keys like

   CREATE TABLE public.logifail
   (
 id integer NOT NULL DEFAULT nextval('logifail_id_seq'::regclass),
 ..
 CONSTRAINT logifail_pkey PRIMARY KEY (id),
 CONSTRAINT logifail_id_check CHECK (id > 0)
   )
   WITH (
 OIDS=FALSE
   );

if this database is restored to Postgres 9.1 from custom backup using


   pg_restore --role=$1_owner --no-owner --dbname=$1 --jobs=4 --verbose 
--username=postgres

automatic primary key generation stops working.

The id column is restored without its sequence. The table becomes:

   CREATE TABLE logifail
   (
 id integer NOT NULL,
  ...
 CONSTRAINT logifail_pkey PRIMARY KEY (id),
 CONSTRAINT logifail_id_check CHECK (id > 0)
   )
   WITH (
 OIDS=FALSE
   );

How to fix this so that the sequences are also restored?
I tried to restore using both the 10.1 and 9.1 pg_restore, but the 
autogenerated primary key columns are still lost.
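
If the sequence really is missing on the 9.1 side, one possible one-time 
repair is to recreate it by hand, as a sketch (object names are the ones 
shown above; verify them against the restored schema first):

```sql
-- Recreate the sequence lost in the restore and reattach it:
CREATE SEQUENCE logifail_id_seq OWNED BY logifail.id;
-- Start it after the largest restored id:
SELECT setval('logifail_id_seq', coalesce(max(id), 1)) FROM logifail;
-- Restore the original column default:
ALTER TABLE logifail ALTER COLUMN id
    SET DEFAULT nextval('logifail_id_seq'::regclass);
```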


Posted also in

https://stackoverflow.com/questions/50317935/how-to-ove-database-from-postgres-10-1-to-9-1

Andrus. 





How to create StartsWith function for char(n) type with ending space

2018-07-06 Thread Andrus

Hi!

I need to create a startswith function which returns true if a char(n) database
column starts with a given string, which
may contain spaces at the end.
Spaces should be treated like any other characters.

Using sample data below

startswith( test, 'A')
startswith( test, 'A  ')
StartsWith(test, rpad('A',19) )

should return true

but

startswith( test, RPAD( 'A', 20))  should return false, since there is an extra
space at the end of the test string.

The database contains a test column of type char(20), and this cannot
be changed.

I tried the code below but it returns false.

How to fix this so that it returns true?
Using Postgres starting from 9.1

Andrus.

CREATE or replace FUNCTION public.likeescape( str text )
--
https://stackoverflow.com/questions/10153440/how-to-escape-string-while-matching-pattern-in-postgresql
RETURNS text AS $$
SELECT replace(replace(replace($1,'^','^^'),'%','^%'),'_','^_') ;
$$ LANGUAGE sql IMMUTABLE;

CREATE or replace FUNCTION public.StartWith( cstr text, algusosa text )
RETURNS bool AS $$
SELECT $2 is null or $1 like likeescape($2) ||'%' ESCAPE '^' ;
$$ LANGUAGE sql IMMUTABLE;

create temp table test ( test char(20) ) on commit drop;
insert into test values ('A' );

select StartWith(test, 'A ' ) from test
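
A likely cause, for what it's worth: trailing spaces are removed when a 
character(n) value is cast to another string type, so the text parameter 
receives 'A' rather than the 20-character padded value. One workaround is to 
re-pad inside the function; the extra width parameter and the name 
StartWithFixed are assumptions for this sketch:

```sql
CREATE OR REPLACE FUNCTION public.StartWithFixed(cstr text, algusosa text,
                                                 width int)
RETURNS bool AS $$
  -- Pad the stripped value back to the declared column width, so
  -- trailing spaces in the prefix stay significant:
  SELECT $2 IS NULL
      OR rpad($1, $3) LIKE likeescape($2) || '%' ESCAPE '^';
$$ LANGUAGE sql IMMUTABLE;

-- StartWithFixed(test, 'A  ', 20)         -- true
-- StartWithFixed(test, rpad('A', 21), 20) -- false: pattern longer than 20
```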

posted also in

https://stackoverflow.com/questions/51206529/how-to-create-startswith-function-for-charn-type-with-ending-space 





How to return argument data type from sql function

2022-10-14 Thread Andrus

PostgreSQL 12.2+ function is defined as

    create FUNCTION torus(eevarus text) returns text immutable AS $f$
 select translate( $1, U&'\00f8\00e9', U&'\0451\0439' );
    $f$ LANGUAGE SQL ;

This function is called on CHAR(n) or text columns like

    create temp table test (
    charcol char(10),
    textcol text );

    insert into test values ('test', 'test');

    select torus(charcol), torus(textcol), charcol

torus(charcol) returns a text column and loses the original column width. How 
to force torus() to return the argument's type?

If a char(n) column is passed as the argument, torus() should also return 
the char(n) data type.

I tried to use bpchar instead of text

    create or replace FUNCTION torusbpchar(eevarus bpchar) returns 
bpchar immutable AS $f$

 select translate( $1, U&'\00f8\00e9', U&'\0451\0439' );
    $f$ LANGUAGE SQL ;

torusbpchar(charcol) still returns the text data type.

npgsql DataReader is used to get data.

Andrus.


Re: How to return argument data type from sql function

2022-10-14 Thread Andrus

Hi!

Making separate functions for text and bpchar works for me.

regression=# select pg_typeof(torus(f1)) from char_tbl;
  pg_typeof
---
  character


I tried

create or replace FUNCTION torus(eevarus bpchar) returns bpchar 
immutable AS $f$

select translate( $1, U&'\00f8\00e9', U&'\0451\0439' );
$f$ LANGUAGE SQL ;

create temp table test (
charcol char(10) );
insert into test values ('test');
select torus(charcol)
FROM Test

but it still returns the result without trailing spaces, so it is not working.


Another possibility is to have just one function declared
to take and return anyelement.  You'd get failures at
execution if the actual argument type isn't coercible
to and from text (since translate() deals in text) but
that might be fine.


I tried

create or replace FUNCTION torus(eevarus anylement ) returns anylement 
immutable AS $f$

select translate( $1, U&'\00f8\00e9', U&'\0451\0439' );
$f$ LANGUAGE SQL ;

but got error

type anyelement does not exists.
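
Note that the pseudo-type is spelled anyelement; the attempt above wrote 
"anylement", which by itself explains the "type does not exist" error. A 
polymorphic sketch in PL/pgSQL, where the RETURN expression is coerced to the 
actual result type at run time (whether a client then sees the padded char(n) 
width is a separate question):

```sql
CREATE OR REPLACE FUNCTION torus_any(eevarus anyelement)
RETURNS anyelement IMMUTABLE LANGUAGE plpgsql AS $f$
BEGIN
  -- translate() works on text; the result is cast back to the
  -- caller's type when the function returns.
  RETURN translate(eevarus::text, U&'\00f8\00e9', U&'\0451\0439');
END;
$f$;
```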

Finally I tried

create or replace FUNCTION torus(eevarus text ) returns text immutable 
AS $f$

select translate( $1, U&'\00f8\00e9', U&'\0451\0439' );
$f$ LANGUAGE SQL ;

create or replace function public.ColWidth(p_namespace text, p_table 
text, p_field text)

    returns int as $f$
select atttypmod-4 from pg_namespace n, pg_class c, pg_attribute a
     where n.nspname = p_namespace and
     c.relnamespace = n.oid and
     c.relname = p_table and
     a.attrelid = c.oid and
     a.attname = p_field;
$f$ LANGUAGE SQL ;

create table public.test ( charcol char(10) );
insert into test values ('test');
select rpad ( torus(charcol),  colwidth('public', 'test', 'charcol') )
FROM Test

as Adrian Klaver recommends in

https://stackoverflow.com/questions/74061290/how-to-return-argument-datatype-from-sql-function#comment130780708_74061290

and this worked. Is this the best solution?

How to remove the p_namespace parameter from ColWidth()? ColWidth() should 
return the column width from the first table found on search_path, just like 
select ... from test finds table test.


Andrus.


Re: How to return argument data type from sql function

2022-10-14 Thread Andrus

Hi!

>Yeah, you could do that if you have the column information at hand.

Personally I'd also throw in "... and atttypid = 'bpchar'::regtype",
because that atttypmod calculation will give you garbage for types
other than bpchar and varchar.


I added this:

create or replace function public.ColWidth(p_namespace text, p_table 
text, p_field text)

    returns int as $f$
select atttypmod-4 from pg_namespace n, pg_class c, pg_attribute a
 where n.nspname = p_namespace and
    c.relnamespace = n.oid and
    c.relname = p_table and
    a.attrelid = c.oid and
    atttypid = 'bpchar'::regtype and
    a.attname = p_field;
$f$ LANGUAGE SQL ;

Tables with the same name exist in different schemas.

How to change this query so that it searches schemas in search_path 
order and returns the column width from the first match? In that case the 
p_namespace parameter can be removed.


Or should it be replaced with a dynamic query like

execute 'select ' || p_field || ' from ' || p_table || ' limit 0'

and the column size be read from this query's result somehow?
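
One way to honor search_path without building dynamic SQL, as a sketch: 
to_regclass() (available since PostgreSQL 9.4) resolves an unqualified name 
through the current search_path. The name ColWidthPath is made up to avoid 
clashing with the two-argument version:

```sql
CREATE OR REPLACE FUNCTION public.ColWidthPath(p_table text, p_field text)
RETURNS int AS $f$
  SELECT atttypmod - 4
    FROM pg_attribute
   WHERE attrelid = to_regclass(p_table)   -- resolved via search_path
     AND atttypid = 'bpchar'::regtype
     AND attname  = p_field;
$f$ LANGUAGE sql STABLE;
```

It is declared STABLE rather than IMMUTABLE because the result depends on the 
search_path setting.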

Andrus.


Re: How to return argument data type from sql function

2022-10-14 Thread Andrus

> Adrian Klaver recommends in



https://stackoverflow.com/questions/74061290/how-to-return-argument-datatype-from-sql-function#comment130780708_74061290

and this worked. Is this the best solution?

Padding a text typed output with actual significant spaces "works"? It 
is not equivalent to a bpchar with insignificant padding spaces...


You are right. I need the char(n) type, so this is not working. How to use 
an expression in a cast, like


select torus(charcol) :: CHAR( ColWidth('public', 'test', 'charcol')  ) 
from test


This throws an error in Postgres. ColWidth is immutable and called with 
constant arguments, so it should work. How could Postgres be made to allow a 
constant ColWidth() expression in a cast?


Andrus.


How to create hot standby of Postgres 12/Debian 10.3 in Windows 11

2022-11-25 Thread Andrus

Hi!

Postgres 12 from the Postgres repository is running on a Debian 10.3 VPS server:

PostgreSQL 12.2 (Debian 12.2-2.pgdg100+1) on x86_64-pc-linux-gnu, 
compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit


(its upgrade is currently not an option).

The cluster should be mirrored to a Windows 11 workstation in real time over 
the internet.


I installed Debian on Windows 11 using the Windows Subsystem for Linux with

wsl --install Debian

This installs Debian 11.5

I tried to install Postgres 12 for replication in this Debian using

sudo apt install postgresql-12

but this fails with

E: Unable to locate package postgresql-12

How to install Postgres 12 on Debian 11.5?

Will this replication work, given that the Debian versions are different but 
the locale implementations are hopefully the same?


Which is the most reasonable way to replicate the whole Postgres 12/Debian 10.3 
cluster to Windows 11?


Andrus.


WAL contains references to invalid pages in hot standby

2022-11-26 Thread Andrus

Hi!

WAL playback on the hot standby server crashes with a "WAL contains 
references to invalid pages" error:


2022-11-26 17:48:12.889 EET [497] LOG:  restored log file 
"000105790064" from archive
2022-11-26 17:48:20.897 EET [497] LOG:  restored log file 
"000105790065" from archive
2022-11-26 17:48:26.564 EET [497] LOG:  restored log file 
"000105790066" from archive
2022-11-26 17:48:32.019 EET [497] LOG:  consistent recovery state 
reached at 579/66954858
2022-11-26 17:48:32.019 EET [495] LOG:  database system is ready to 
accept read only connections
2022-11-26 17:48:32.019 EET [497] WARNING:  page 11 of relation 
base/54455050/83221012 is uninitialized
2022-11-26 17:48:32.019 EET [497] CONTEXT:  WAL redo at 579/66967DB0 for 
Heap2/VISIBLE: cutoff xid 167913422 flags 0x01
2022-11-26 17:48:32.019 EET [497] PANIC:  WAL contains references to 
invalid pages
2022-11-26 17:48:32.019 EET [497] CONTEXT:  WAL redo at 579/66967DB0 for 
Heap2/VISIBLE: cutoff xid 167913422 flags 0x01
2022-11-26 17:48:34.315 EET [495] LOG:  startup process (PID 497) was 
terminated by signal 6: Aborted
2022-11-26 17:48:34.315 EET [495] LOG:  terminating any other active 
server processes

2022-11-26 17:48:34.330 EET [495] LOG:  database system is shut down

It looks like file 000105790067 is corrupted. I looked into 
this file and it contains data like the other WAL files.


How to fix this error so that WAL playback can continue?

The master server is Postgres 12 on Debian 10.3.

The hot standby is Postgres 12 on WSL Debian 11.5.

Can the Debian version difference (10.3 vs 11.5) cause this exception?

Andrus.


How to create directory format backup

2023-02-08 Thread Andrus

Hi!

Creating backup in directory format using

    pg_dump -f "sba" -Fdirectory --jobs=32 --verbose sba

throws error

    pg_dump: error: could not stat file "sba/282168.data.gz": value too 
large


How to fix it?

The server is Postgres 12 running on Debian Linux 10 under WSL.
The client is pg_dump from Postgres 15 running on Windows 11.

Andrus


Re: How to create directory format backup

2023-02-08 Thread Andrus

Hi!

>Looks like your filesystem on the client is having limits on file sizes. 
Use a better filesystem, or just dump on Linux; its filesystems usually 
don't hit these limits.


The file size is only 6.2 GB. If the custom format is used, pg_dump creates a 
large file without problems, so there are no file size limits. The error 
message is not about this.


The backup client runs on Windows 11 and this cannot be changed.

How to create a backup in a format from which tables can be selectively 
restored?

Posted also in 
https://stackoverflow.com/questions/75387616/how-to-make-directory-format-backup


Andrus.



Re: How to create directory format backup

2023-02-08 Thread Andrus

Hi!



How to create backup in format from which tables can selectively restored?

Dump as custom-format archive (-F custom) and use that with pg_restore and
options --table or --list/--use-list to select what should be restored.


How to select tables interactively, as pgAdmin allows when the directory 
format is used?

The database contains hundreds of schemas. I need to restore public and 
one other schema.

The whole backup file is scanned to restore only two schemas. It takes a lot 
of time.

The directory format also allows using all cores with the --jobs=32 parameter. 
Dump and partial restore using the custom format are much slower.


Andrus.



--
Erik

How to pass table column values to function

2023-02-11 Thread Andrus

Hi!

Table source contains an integer column. Its values should be passed to a 
function for selecting data from another table.


I tried

    CREATE OR REPLACE FUNCTION 
public.TestAddAssetTransactions(dokumnrs int[])

 RETURNS int AS
    $BODY$

    with i1 as (
    INSERT INTO bilkaib (dokumnr)
    select dokumnr  from dok where dokumnr in (select * from 
unnest(dokumnrs))

    returning *
    )

    select count(*) from i1;
    $BODY$ language sql;

    create temp table bilkaib (dokumnr int ) on commit drop;
    create temp table dok (dokumnr serial primary key ) on commit drop;
    create temp table source (dokumnr int ) on commit drop;
    insert into source values (1),(2);

    select TestAddAssetTransactions( (select ARRAY[dokumnr] from 
source)::int[] )



but got error

> ERROR: more than one row returned by a subquery used as an expression

How to pass a set of integers to a function? Should a temp table with a fixed 
name be used, or is there a better solution?


Using Postgresql 12+
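
For reference, the "more than one row" error comes from (select ARRAY[dokumnr] 
from source) producing one single-element array per source row. Aggregating 
into one array avoids it, as a sketch against the tables above:

```sql
-- array_agg() collapses the column into one array value, so the
-- scalar subquery returns exactly one row:
SELECT TestAddAssetTransactions( (SELECT array_agg(dokumnr) FROM source) );

-- The ARRAY(...) constructor form is equivalent:
SELECT TestAddAssetTransactions( ARRAY(SELECT dokumnr FROM source) );
```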

Andrus.


How to speed up product code and subcode match

2023-05-22 Thread Andrus

Hi!

The price list of main products, vordlusajuhinnak, contains 3 prices per 
product (column toode) and has 39433 products:


    create table vordlusajuhinnak( toode varchar(60), n2 numeric(8,2), 
n3 numeric(8,2), n4 numeric(8,2) );


The toode column is unique (it could be the table's primary key) and contains 
upper-case letters, digits and - characters.


product table (toode) contains 733021 products:

    CREATE TABLE toode (
    grupp character(1),
    toode character(60) primary key,
    ... lot of other columns
  );

Both tables have pattern indexes to speed up queries:

    CREATE INDEX toode_toode_pattern_idx
    ON toode (toode bpchar_pattern_ops ASC NULLS LAST);
    -- This index is probably not used, should removed:
    CREATE INDEX vordlusajuhinnak_toode_pattern_idx ON 
vordlusajuhinnak(toode bpchar_pattern_ops);


The product table has both main products and subproducts with sizes. The size 
is always separated from the main product code by a / character:



    SHOE1-BLACK
    SHOE1-BLACK/38
    SHOE1-BLACK/41
    SHOE1-BLACK/42
    SHOE1-BLACK/43
    SHOE2/XXL
    SHOE2/L

Product codes contain upper-case characters only in this table as well.

Trying to get prices for all products using

    create table peatoode as
    select toode.toode , n2, n3, n4
    from toode, vordlusajuhinnak
    where  toode.toode between vordlusajuhinnak.toode and 
vordlusajuhinnak.toode||'/z'


Takes 4.65 hours. How to speed this query up?
Output from explain:

    > "Nested Loop  (cost=0.55..272273178.12 rows=3202240012 width=78)"
    > "  ->  Seq Scan on vordlusajuhinnak  (cost=0.00..721.33 rows=39433 width=32)"
    > "  ->  Index Only Scan using toode_pkey on toode  (cost=0.55..6092.62 rows=81207 width=60)"
    > "        Index Cond: (toode >= (vordlusajuhinnak.toode)::bpchar)"
    > "        Filter: ((toode)::text <= ((vordlusajuhinnak.toode)::text || '/z'::text))"


Using  PostgreSQL 13.2, compiled by Visual C++ build 1900, 64-bit in 
Windows server and psqlODBC driver.

Upgrading Postgres is possible, if this helps.

Tried also using like:

    WHERE toode.toode=vordlusajuhinnak.toode OR
  toode.toode LIKE vordlusajuhinnak.toode||'/%'

Posted also in

https://stackoverflow.com/questions/76311957/how-to-match-character-columns-with-separated-subcolumns

Andrus.





Re: How to speed up product code and subcode match

2023-05-23 Thread Andrus

Hi!

I ran

analyze toode;

create index vordlusajuhinnak_toode_pattern_idx on 
vordlusajuhinnak(toode bpchar_pattern_ops);


create index vordlusajuhinnak_toode_idx on vordlusajuhinnak(toode);
analyze vordlusajuhinnak;

The select now runs for more than one hour. Output from explain:

explain create table peatoode as
select toode.toode , n2, n3, n4
from toode, vordlusajuhinnak
WHERE toode.toode=vordlusajuhinnak.toode OR  toode.toode LIKE 
vordlusajuhinnak.toode||'/%'


"Gather  (cost=1000.55..443361894.55 rows=143828910 width=78)"
"  Workers Planned: 2"
"  ->  Nested Loop  (cost=0.55..428978003.55 rows=59928712 width=78)"
"    Join Filter: ((toode.toode = (vordlusajuhinnak.toode)::bpchar) 
OR (toode.toode ~~ ((vordlusajuhinnak.toode)::text || '/%'::text)))"
"    ->  Parallel Index Only Scan using toode_pkey on toode  
(cost=0.55..95017.93 rows=303869 width=60)"
"    ->  Seq Scan on vordlusajuhinnak  (cost=0.00..721.33 rows=39433 
width=32)"


with

Set enable_nestloop to off;

explain output is:

"Gather  (cost=1001000.55..10443361906.55 rows=143828910 width=78)"
"  Workers Planned: 2"
"  ->  Nested Loop  (cost=100.55..10428978015.55 rows=59928712 
width=78)"
"    Join Filter: ((toode.toode = (vordlusajuhinnak.toode)::bpchar) 
OR (toode.toode ~~ ((vordlusajuhinnak.toode)::text || '/%'::text)))"
"    ->  Parallel Index Only Scan using toode_pkey on toode  
(cost=0.55..95029.93 rows=303869 width=60)"
"    ->  Seq Scan on vordlusajuhinnak  (cost=0.00..721.33 rows=39433 
width=32)"


How to speed it up?

Andrus.

On 23.05.2023 14:32, Bzm@g wrote:

Great,

However, I think it is still way too slow.
The next step is to run analyze for the other table, vordlusajuhinnak, as well.

And make sure you have an index on vordlusajuhinnak.toode similar to 
the index on toode.toode


--
Boris



On 23.05.2023 at 12:56, Andrus wrote:



Hi!

I ran analyze firma2.toode and changed where clause to use like:

create table peatoode as
select toode.toode , n2, n3, n4
from toode, vordlusajuhinnak
WHERE toode.toode=vordlusajuhinnak.toode OR  toode.toode LIKE 
vordlusajuhinnak.toode||'/%'


In this case it took 37 minutes, returned 277966 rows.

Thank you for help.

Andrus.

On 23.05.2023 11:24, Bzm@g wrote:

Also your row count is way off I guess. Did you ever run analyze bigtable?

--
Boris



On 23.05.2023 at 10:22, ...@2bz.de wrote:

Hi there,

I guess the main problem is the nested loop.

As a quick recheck: what happens if you run your query without the nested loop?

This is not a solution but a quick test.

In a session:

Set enable_nestloop = off;
Explain Select your query;
--
Boris



On 23.05.2023 at 08:53, Andrus wrote:

Hi!

Price list of main products vordlusajuhinnak contains 3 prices for product 
(column toode) and has 39433 products:

create table vordlusajuhinnak( toode varchar(60), n2 numeric(8,2), n3 
numeric(8,2), n4 numeric(8,2) );

toode column in unique, may be primary key in table and contains upper case 
letters, digits and - characters.

product table (toode) contains 733021 products:

CREATE TABLE toode (
grupp character(1),
toode character(60) primary key,
... lot of other columns
  );

Both tables have pattern indexes to speed up queries:

CREATE INDEX toode_toode_pattern_idx
ON toode (toode bpchar_pattern_ops ASC NULLS LAST);
-- This index is probably not used, should removed:
CREATE INDEX vordlusajuhinnak_toode_pattern_idx ON vordlusajuhinnak(toode 
bpchar_pattern_ops);

Product table as both main products and subproducts with sizes. Size is 
separated by main product code always by / character:


SHOE1-BLACK
SHOE1-BLACK/38
SHOE1-BLACK/41
SHOE1-BLACK/42
SHOE1-BLACK/43
SHOE2/XXL
SHOE2/L

Product codes contain upper characers only in this table also.

Trying to get prices for all products using

create table peatoode as
select toode.toode , n2, n3, n4
from toode, vordlusajuhinnak
where  toode.toode between vordlusajuhinnak.toode and 
vordlusajuhinnak.toode||'/z'

Takes 4.65 hours. How to speed this query up?
Output from explain:


"Nested Loop  (cost=0.55..272273178.12 rows=3202240012 width=78)"
"  ->  Seq Scan on vordlusajuhinnak  (cost=0.00..721.33 rows=39433 width=32)"
"  ->  Index Only Scan using toode_pkey on toode  (cost=0.55..6092.62 rows=81207 width=60)"
"        Index Cond: (toode >= (vordlusajuhinnak.toode)::bpchar)"
"        Filter: ((toode)::text <= ((vordlusajuhinnak.toode)::text || '/z'::text))"

Using  PostgreSQL 13.2, compiled by Visual C++ build 1900, 64-bit in Windows 
server and psqlODBC driver.
Upgrading Postgres is possible, if this helps.

Tried also using like:

WHERE toode.toode=vordlusajuhinnak.toode OR
  toode.toode LIKE vordlusajuhinnak.toode||'/%'

Posted also in

https://stackoverflow.com/questions/76311957/how-to-match-character-columns-with-separated-subcolumns

Andrus.




Re: How to speed up product code and subcode match

2023-05-23 Thread Andrus

Hi!


Separate
the product code and size into two columns --- if there's somebody
who really wants to see them in the above format, give them a
view or generated column.  Then instead of the impossible-to-optimize
queries you showed, you could do something like

select toode.toode , n2, n3, n4
from toode, vordlusajuhinnak
where toode.toode = vordlusajuhinnak.toode;


Can a functional index

create index on toode ( split_part( toode, '/', 1) )

and query

select toode.toode , n2, n3, n4

from toode, vordlusajuhinnak

where split_part( toode.toode, '/', 1) = vordlusajuhinnak.toode;


be used while keeping the existing table structure? Should a functional index 
produce the same speed improvement as using a separate column?


Andrus.



Re: How to speed up product code and subcode match

2023-05-23 Thread Andrus

Hi!

Using the index

create index on toode ( split_part( toode, '/', 1) )

and the query

select toode.toode , n2, n3, n4

from toode, vordlusajuhinnak

where split_part( toode.toode, '/', 1) = vordlusajuhinnak.toode;

reduces run time to 5 minutes.
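
The split_part() idea can also be materialized along the lines of the earlier 
separate-column suggestion, as a sketch (the column name base_toode is made 
up; stored generated columns need PostgreSQL 12+):

```sql
ALTER TABLE toode
  ADD COLUMN base_toode text
      GENERATED ALWAYS AS (split_part(toode, '/', 1)) STORED;
CREATE INDEX ON toode (base_toode);

-- The join then becomes a plain, index-friendly equality:
-- SELECT t.toode, n2, n3, n4
--   FROM toode t JOIN vordlusajuhinnak v ON t.base_toode = v.toode;
```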

Andrus.


On 23.05.2023 17:26, Andrus wrote:


Hi!


Separate
the product code and size into two columns --- if there's somebody
who really wants to see them in the above format, give them a
view or generated column.  Then instead of the impossible-to-optimize
queries you showed, you could do something like

select toode.toode , n2, n3, n4
from toode, vordlusajuhinnak
where toode.toode = vordlusajuhinnak.toode;


Can function index

create index on toode ( split_part( toode, '/', 1) )

and query

select toode.toode , n2, n3, n4

from toode, vordlusajuhinnak

where split_part( toode.toode, '/', 1) = vordlusajuhinnak.toode;


used and keeping existing table structure? Functional index should 
produce same speed improvement as using separate column?


Andrus.



How to remove user specific grant and revoke

2023-06-03 Thread Andrus

User groups table is defined as

    CREATE TABLE IF NOT EXISTS public.kaspriv
    (
    id serial primary key,
    user character(10) NOT NULL,
    group character(35) NOT NULL
    ...
    )

There are hundreds of users. Earlier, grant and revoke commands were executed 
for every user separately. Later, revoke and grant commands for PUBLIC were 
added:


    REVOKE ALL ON TABLE public.kaspriv FROM PUBLIC;
    GRANT SELECT ON TABLE public.kaspriv TO PUBLIC;

pgAdmin's SQL tab still shows the revoke and grant commands for every user as well:

    REVOKE ALL ON TABLE public.kaspriv FROM PUBLIC;
    REVOKE ALL ON TABLE public.kaspriv FROM someuser;
    REVOKE ALL ON TABLE public.kaspriv FROM someotheruser;
    ...
    GRANT SELECT ON TABLE public.kaspriv TO PUBLIC;
    GRANT SELECT ON TABLE public.kaspriv TO someuser;
    GRANT SELECT ON TABLE public.kaspriv TO someother;
    ...

How to remove those unnecessary user-specific GRANT and REVOKE commands 
to make the rights cleaner? pgAdmin does not have a delete option for those.


Something like

    DROP REVOKE ALL ON TABLE public.kaspriv FROM all EXCEPT public;
    DROP GRANT SELECT ON TABLE public.kaspriv FROM all EXCEPT public;

This will be a one-time action. It can be done manually in pgAdmin or 
with a script run once.
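
The one-time cleanup could also be scripted, as a sketch (review the generated 
statements before running them; anything other than the PUBLIC grant is 
revoked, so re-grant afterwards where needed):

```sql
-- Generate one REVOKE per explicit grantee in the table's ACL,
-- skipping PUBLIC (grantee OID 0):
SELECT DISTINCT
       format('REVOKE ALL ON TABLE public.kaspriv FROM %I;',
              pg_get_userbyid(a.grantee))
  FROM pg_class c, aclexplode(c.relacl) AS a
 WHERE c.oid = 'public.kaspriv'::regclass
   AND a.grantee <> 0;
```

Running the output (as the grantor or a superuser) removes the per-user 
entries while leaving the PUBLIC SELECT grant in place.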


Using

    PostgreSQL 12.2 (Debian 12.2-2.pgdg100+1) on x86_64-pc-linux-gnu,
    compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit

and latest pgAdmin 7.2

Posted also in 
https://stackoverflow.com/questions/76394896/how-to-remove-unnecessary-grant-and-revoke-privileges-from-table


Andrus.


Re: How to remove user specific grant and revoke

2023-06-03 Thread Andrus

Hi!



Or just execute those REVOKE ALL commands (except for PUBLIC) that pgAdmin
already gives you.


pgAdmin gives

REVOKE ALL ON TABLE public.kaspriv FROM someuser;

I ran it but pgAdmin still gives this statement.

Andrus.



Re: How to remove user specific grant and revoke

2023-06-03 Thread Andrus

Hi!

I tried

alter role  alekspoluh reset all

After this command pgAdmin still shows revoke and grant commands for the 
alekspoluh role.

How to remove all grant and revoke assignments for a role?


Andrus.

03.06.2023 20:50 Andrus kirjutas:


Hi!

> REVOKE must be executed by the grantor (sba_owner) or a superuser, in case 
you have not already tried that.

REVOKE was executed by the superuser, postgres. pgAdmin uses this user to 
log on.

Do you get any errors?


There are no errors.

Andrus.



Re: How to remove user specific grant and revoke

2023-06-03 Thread Andrus

Hi!



After this command pgAdmin still shows revoke and grant commands for
alekspoluh role.
How to remove all grant and revoke assignments for role ?

Please confirm that \dp public.kaspriv no longer shows an ACL for alekspoluh
after running:

REVOKE ALL ON public.kaspriv FROM alekspoluh;

psql (12.2 (Debian 12.2-2.pgdg100+1))
Type "help" for help.

sba=# REVOKE ALL ON public.kaspriv FROM alekspoluh;
REVOKE
sba=# \o result.txt
sba=#  \dp public.kaspriv

sba=# \q

#grep alekspoluh result.txt

Returns nothing. So output does not contain this role.


There must be something wrong with pgAdmin if it still shows REVOKE ALL for
that role after its ACL is gone.  Looking at the code, pgAdmin emits REVOKE ALL
for any grantee it find in the ACL.

https://github.com/pgadmin-org/pgadmin4/blob/REL-7_2/web/pgadmin/browser/server_groups/servers/databases/schemas/tables/utils.py#L712


I re-opened pgAdmin. The alekspoluh role is no longer displayed in the kaspriv 
table's SQL window.


pgAdmin shows only a single revoke command. Now it shows

REVOKE ALL ON TABLE public.kaspriv FROM yllelohmus;

I ran

REVOKE ALL ON TABLE public.kaspriv FROM yllelohmus;

After that pgadmin shows next single revoke command:

REVOKE ALL ON TABLE public.kaspriv FROM villuuus;

It looks like pgAdmin shows only one REVOKE command at a time, but actually 
there are more revokes.

Should I run separate revoke commands for every user to remove those 
revokes?


pgAdmin also shows about 100 grant commands for users like

GRANT SELECT ON TABLE public.kaspriv TO paide;

How to remove user-specific grants?

Andrus.


Re: How to remove user specific grant and revoke

2023-06-03 Thread Andrus

Hi!


Should I run separate revoke commands for every user to remove those 
revokes?

How to remove user-specific grants?


After running the revoke commands in psql, the GRANT commands disappeared 
magically. It looks like pgAdmin does not actually execute the REVOKE commands.


After running a script which adds user group table modification rights for 
admin users:


CREATE POLICY kaspriv_sel_policy ON kaspriv FOR SELECT USING (true);
CREATE POLICY kaspriv_mod_policy ON kaspriv USING (
  lower(kasutaja)= current_user OR kasutaja in
    ( select kasutaja from kasutaja where ','||firmad||','
    LIKE '%,'|| (select firmad from kasutaja where lower(kasutaja)= 
current_user) || ',%'

    )
);
ALTER TABLE kaspriv  ENABLE ROW LEVEL SECURITY;
revoke all on kaspriv from public;
grant select on kaspriv to public;
grant insert, update, delete on kaspriv to admin1, admin2;

pgAdmin shows revoke commands for those users:

REVOKE ALL ON TABLE public.kaspriv FROM admin1;
REVOKE ALL ON TABLE public.kaspriv FROM admin2;

How to prevent pgAdmin from showing those revokes?

Andrus.


