Conflict with recovery on PG version 11.6

2020-06-16 Thread Toomas Kristin
Hi! Basically after upgrading to version 11.5 from 10.6 I experience error messages on the streaming replica host: “FATAL: terminating connection due to conflict with recovery” and “ERROR: canceling statement due to conflict with recovery”. There are no changes to vacuuming on the master, nor
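For context, the usual knobs for recovery conflicts of this kind are standby-side settings; a sketch, with illustrative values rather than a recommendation for this case:

    # On the standby, in postgresql.conf:
    # Report the standby's oldest query to the primary so vacuum there
    # keeps the rows it still needs (at the cost of some primary bloat).
    hot_standby_feedback = on
    # Alternatively, let replay wait for conflicting queries before
    # cancelling them (the standby may then lag the primary).
    max_standby_streaming_delay = 5min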

Re: Clarification on Expression indexes

2020-06-16 Thread Tom Lane
Koen De Groote writes: >> Index expressions are relatively expensive to maintain, because the derived >> expression(s) must be computed for each row upon insertion and whenever it >> is updated > I'd like to get an idea on "relatively expensive". It's basically whatever the cost of evaluating

Clarification on Expression indexes

2020-06-16 Thread Koen De Groote
Greetings all. The following page: https://www.postgresql.org/docs/11/indexes-expressional.html states the following: > Index expressions are relatively expensive to maintain, because the derived > expression(s) must be computed for each row upon insertion and whenever it > is updated I'd like
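For a concrete sense of what "relatively expensive" means here, a minimal illustration (the table and expression are invented for the example):

    -- The index stores lower(email), so every INSERT and every UPDATE of
    -- email must evaluate lower() once more to maintain the index.
    CREATE TABLE users (id bigint PRIMARY KEY, email text);
    CREATE INDEX users_email_lower_idx ON users (lower(email));

    -- Queries written against the same expression can use the index:
    SELECT id FROM users WHERE lower(email) = 'alice@example.com';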

Re: pg_repack: WARNING: relation must have a primary key or not-null unique keys

2020-06-16 Thread Eugene Pazhitnov
Ok, thanks a lot! Got it. On Tue, 16 Jun 2020 at 17:12, Tom Lane wrote: > Eugene Pazhitnov writes: > > xbox=> \d herostat > > ... > > "herostat_pkey" PRIMARY KEY, btree (xuid, titleid, heroid) INCLUDE > (valfloat) > > > eugene@dignus:/var/www/html/health$ sudo -u postgres pg_repack -t > herostat

Sv: autovacuum failing on pg_largeobject and disk usage of the pg_largeobject growing unchecked

2020-06-16 Thread Andreas Joseph Krogh
On Tuesday 16 June 2020 at 17:59:37, Jim Hurne <jhu...@us.ibm.com> wrote: We have a cloud service that uses PostgreSQL to temporarily store binary content. We're using PostgreSQL's Large Objects to store the binary content. Each large object lives anywhere from a few hundred

Re: autovacuum failing on pg_largeobject and disk usage of the pg_largeobject growing unchecked

2020-06-16 Thread Michael Lewis
On Tue, Jun 16, 2020 at 1:45 PM Jim Hurne wrote: > Thanks Michael, > > Here are our current autovacuum settings: > > autovacuum | on > autovacuum_analyze_scale_factor | 0.1 > autovacuum_analyze_threshold | 50 > autovacuum_freeze_max_age |

RE: autovacuum failing on pg_largeobject and disk usage of the pg_largeobject growing unchecked

2020-06-16 Thread Jim Hurne
Thanks Michael, Here are our current autovacuum settings: autovacuum | on autovacuum_analyze_scale_factor | 0.1 autovacuum_analyze_threshold | 50 autovacuum_freeze_max_age | 2 autovacuum_max_workers | 3
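When autovacuum repeatedly falls behind on one high-churn table, the settings that usually matter are the cost-throttling ones; a sketch of loosening them cluster-wide (values are examples only, not a recommendation for this case):

    -- Allow more vacuum work per cycle before sleeping (default 200):
    ALTER SYSTEM SET vacuum_cost_limit = 2000;
    -- Shorten the sleep between work cycles (default 20ms on version 10):
    ALTER SYSTEM SET autovacuum_vacuum_cost_delay = '2ms';
    SELECT pg_reload_conf();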

Re: autovacuum failing on pg_largeobject and disk usage of the pg_largeobject growing unchecked

2020-06-16 Thread Michael Lewis
On Tue, Jun 16, 2020 at 10:01 AM Jim Hurne wrote: > Other than the increasing elapsed times for the autovacuum, we don't see > any other indication in the logs of a problem (no error messages, etc). > > We're currently using PostgreSQL version 10.10. Our service is JVM-based > and we're using

Re: Logical replication - ERROR: could not send data to WAL stream: cannot allocate memory for input buffer

2020-06-16 Thread Aleš Zelený
Thanks for the comment. From what I was able to monitor, memory usage was almost stable and there were about 20GB allocated as cached memory. Memory overcommit is disabled on the database server. Might it be a memory issue, since it was synchronizing newly added tables with a sum of 380 GB of

autovacuum failing on pg_largeobject and disk usage of the pg_largeobject growing unchecked

2020-06-16 Thread Jim Hurne
We have a cloud service that uses PostgreSQL to temporarily store binary content. We're using PostgreSQL's Large Objects to store the binary content. Each large object lives anywhere from a few hundred milliseconds to 5-10 minutes, after which it is deleted. Normally, this works just fine and
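For a problem like this, one way to see whether vacuum is keeping up on the catalog in question is the following (a sketch; run as a superuser):

    -- Dead-tuple estimate and last (auto)vacuum times for pg_largeobject:
    SELECT n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
    FROM pg_stat_all_tables
    WHERE relname = 'pg_largeobject';

    -- A manual pass, with per-step detail:
    VACUUM (VERBOSE) pg_largeobject;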

Re: create batch script to import into postgres tables

2020-06-16 Thread Adrian Klaver
On 6/16/20 7:59 AM, Pepe TD Vo wrote: Just noticed you cross-posted to the pgsql-admin list. FYI, that is not a good practice. I can run \copy in Linux with an individual csv file into the table fine, and run the import using pgAdmin into the AWS instance. I am trying to run \copy to import all csv files

Re: create batch script to import into postgres tables

2020-06-16 Thread Pepe TD Vo
I can run \copy in Linux with an individual csv file into the table fine, and run the import using pgAdmin into the AWS instance. I am trying to run \copy to import all csv files, each into its own table, in Linux and in the AWS instance. It would be fine if all csv files went into one table, but each csv is for its own table. Should I

PSQL console encoding

2020-06-16 Thread Jean Gabriel
Hello, I am having some issues setting/using my PSQL console encoding to UTF-8 under Windows 10. I have a Windows server and client. The Postgres 12 database contains tables with content in multiple languages (ex: English, French (with characters such as é), Korean (with characters
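A commonly suggested sequence for a Windows console (a sketch; the console code page and psql's client encoding have to agree):

    rem In cmd.exe, switch the console to the UTF-8 code page first:
    chcp 65001
    psql -U postgres mydb

    -- Inside psql, check and (if needed) set the client encoding:
    \encoding
    SET client_encoding TO 'UTF8';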

Re: create batch script to import into postgres tables

2020-06-16 Thread Adrian Klaver
On 6/16/20 7:20 AM, Pepe TD Vo wrote: good morning experts, I need to set up a batch script to import multiple csv files into Postgres tables. Each csv file will be named table1_todaydate.csv, table2_todaydate.csv, etc... tablen_todaydate.csv. Each csv file will import to its

create batch script to import into postgres tables

2020-06-16 Thread Pepe TD Vo
good morning experts, I need to set up a batch script to import multiple csv files into Postgres tables. Each csv file will be named table1_todaydate.csv, table2_todaydate.csv, etc... tablen_todaydate.csv. Each csv file will import to its own table, and how do I execute the script to
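A minimal shell sketch of the kind of loop involved (the directory, host, and table-name convention are assumptions; the part before the first underscore is taken as the table name):

    #!/bin/sh
    # Import every tableN_<today>.csv into its own table via \copy.
    today=$(date +%Y%m%d)
    for f in /path/to/csv/*_"${today}".csv; do
        table=$(basename "$f" | cut -d_ -f1)
        psql -h myhost -U myuser -d mydb \
             -c "\copy ${table} FROM '${f}' WITH (FORMAT csv, HEADER)"
    done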

Minor Upgrade Question

2020-06-16 Thread Susan Joseph
So when I first started working with PostgreSQL I was using the latest version (11.2). I don't want to move to 12 yet, but I would like to get my 11.2 up to 11.8. Due to my servers not being connected to the Internet, I ended up downloading the libraries and building the files locally. My
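For reference, a minor upgrade within the 11.x line only swaps the binaries; the on-disk format is unchanged, so no pg_upgrade or dump/restore is needed. A rough sketch for a from-source build (paths are illustrative):

    # 1. Stop the server:
    pg_ctl -D /var/lib/pgsql/11/data stop -m fast
    # 2. Install the locally built 11.8 binaries over the 11.2 ones:
    make -C /usr/local/src/postgresql-11.8 install
    # 3. Restart on the same data directory:
    pg_ctl -D /var/lib/pgsql/11/data start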

Re: pg_repack: WARNING: relation must have a primary key or not-null unique keys

2020-06-16 Thread Tom Lane
Eugene Pazhitnov writes: > xbox=> \d herostat > ... > "herostat_pkey" PRIMARY KEY, btree (xuid, titleid, heroid) INCLUDE > (valfloat) > eugene@dignus:/var/www/html/health$ sudo -u postgres pg_repack -t herostat > -N -d xbox > INFO: Dry run enabled, not executing repack > WARNING: relation

Re: pg_repack: WARNING: relation must have a primary key or not-null unique keys

2020-06-16 Thread Michael Lewis
On Tue, Jun 16, 2020, 4:52 AM Eugene Pazhitnov wrote: > xbox=> \d herostat >Table "public.herostat" > Indexes: > "herostat_pkey" PRIMARY KEY, btree (xuid, titleid, heroid) INCLUDE > (valfloat) > > WARNING: relation "public.herostat" must have a primary key or not-null >
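If the INCLUDE clause is what pg_repack's key check trips over here (an assumption; the replies in this thread are truncated), one workaround sketch is a plain unique index on just the key columns, which the check can recognize:

    -- Hypothetical workaround: a unique index without INCLUDE columns,
    -- built without blocking writes.
    CREATE UNIQUE INDEX CONCURRENTLY herostat_plain_key
        ON herostat (xuid, titleid, heroid);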

Re: Something else about Redo Logs disappearing

2020-06-16 Thread Laurenz Albe
On Tue, 2020-06-16 at 00:28 +0200, Peter wrote: > On Mon, Jun 15, 2020 at 09:46:34PM +0200, Laurenz Albe wrote: > ! On Mon, 2020-06-15 at 19:00 +0200, Peter wrote: > ! > And that is one of a couple of likely pitfalls I perceived when > ! > looking at that new API. > ! > ! That is a property of my

Re: Something else about Redo Logs disappearing

2020-06-16 Thread Peter
On Mon, Jun 15, 2020 at 09:46:34PM +0200, Laurenz Albe wrote: ! On Mon, 2020-06-15 at 19:00 +0200, Peter wrote: ! > And that is one of a couple of likely pitfalls I perceived when ! > looking at that new API. ! ! That is a property of my scripts, *not* of the non-exclusive ! backup API... Then

Re: Something else about Redo Logs disappearing

2020-06-16 Thread Peter
On Sun, Jun 14, 2020 at 03:05:15PM +0200, Magnus Hagander wrote: ! > You can see that all the major attributes (scheduling, error-handling, ! > signalling, ...) of a WAL backup are substantially different to that ! > of any usual backup. ! ! > This is a different *Class* of backup object,
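For context on the WAL side of this discussion: WAL backup is driven by archive_command, which the server runs once per completed segment; the canonical local-copy form from the documentation looks like this (the directory is illustrative):

    # postgresql.conf
    archive_mode = on
    archive_command = 'test ! -f /backup/wal/%f && cp %p /backup/wal/%f'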

pg_repack: WARNING: relation must have a primary key or not-null unique keys

2020-06-16 Thread Eugene Pazhitnov
Hello everyone! eugene@dignus:/var/www/html/health$ psql xbox Timing is on. psql (12.3 (Ubuntu 12.3-1.pgdg20.04+1)) Type "help" for help. xbox=> \d herostat Table "public.herostat" Column | Type | Collation | Nullable | Default

Re: Index no longer being used, destroying and recreating it restores use.

2020-06-16 Thread Bruce Momjian
On Tue, Jun 16, 2020 at 11:49:15AM +0200, Koen De Groote wrote: > Alright, I've done that, and that seems to be a very good result: https:// > explain.depesz.com/s/xIph > > The method I ended up using: > > create or replace function still_needs_backup(shouldbebackedup bool, > backupperformed

Re: Index no longer being used, destroying and recreating it restores use.

2020-06-16 Thread Koen De Groote
Alright, I've done that, and that seems to be a very good result: https://explain.depesz.com/s/xIph The method I ended up using: create or replace function still_needs_backup(shouldbebackedup bool, backupperformed bool) returns BOOLEAN as $$ select $1 AND NOT $2; $$ language sql immutable;
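The natural companion to that function (the message is truncated above) is a partial index whose predicate calls it; a sketch, with the table and index names invented and the columns taken from the function's argument names:

    -- IMMUTABLE lets the function appear in an index predicate, and the
    -- planner can then match queries that use the same call.
    CREATE INDEX backup_pending_idx ON backup_table (id)
        WHERE still_needs_backup(shouldbebackedup, backupperformed);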