So, is there a way to tell replication to make the client restart the connection and try
again?
Because apparently it stays disconnected for hours before trying again, gets that
timeout message, and remains disconnected for hours again.
--
Sent from: https://www.postgresql-archive.org/PostgreSQL-general-f1843780.htm
On the publisher it shows me "terminating walsender process due to replication
timeout".
But the problem is that this message occurs only 3 or 4 times a day.
It tries to send data, gets a timeout, then shows the message. Then it waits
a very long time to try again; I don't know why.
wal_send
I have a server which replicates using publication/subscription. On the
subscriber server I have some ETL processes running, and from this
second server I've created a new publication to send this database to a
third server. While this second server was copying data with COPY to the third,
everything
I have tables which are master-detail, and usually my program loads all detail
records of a master record. So I configured CLUSTER on all those detail
tables to use the index which is the relation to the master table. With that I
can load fewer records into shared buffers, because all detail records are o
Well, I didn't know exactly where the problem was, then ...
The entire script to reproduce that problem is:
create domain public.i32 as integer;
CREATE FUNCTION public.fntextonumero(ftext text, finteger public.i32)
RETURNS boolean
LANGUAGE sql STABLE
AS $$
SELECT ftext = finteger::text;
$$;
C
And the operator function, just so you know, is only a conversion to text:
CREATE FUNCTION public.fntextonumero(ftext text, finteger public.i32)
RETURNS boolean
LANGUAGE sql STABLE
AS $$
SELECT ftext = finteger::text;
$$;
CREATE FUNCTION public.fntextonumero(finteger public.i32, ftext text)
RETURNS boolean
LANGUAGE sql STABLE
AS $$
SELECT ftext = finteger::text;
$$;
It works; it shouldn't, but it works. The result is False.
And I created these two operators a long time ago.
CREATE OPERATOR public.= (
FUNCTION = public.fntextonumero,
LEFTARG = text,
RIGHTARG = public.i32
);
CREATE OPERATOR public.= (
FUNCTION = public.fntextonumero,
LEFTARG = public.i32,
RIGHTARG = text
);
Correct, on yours it gives an error message, but on mine it tries to execute it
and goes into recovery.
So, how can I know which custom operator is being used in that comparison?
Ok, it works if I cast TipoRecebimento to text.
(TipoRecebimento::text in (select substring(VarValue from 3) from Var where
Name = '/Config/TipoRecebimentoCancelamento'))
But now, how can I know what custom operator it's trying to use? Because
some day I could get into trouble again because of that.
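A sketch of how to list the candidate operators the parser can choose from, assuming the public.i32 domain from the earlier posts; any row implemented by a function in your own schema is a custom one:

```sql
-- List every '=' operator that accepts the domain public.i32 on either side.
SELECT o.oid::regoperator AS operator, o.oprcode AS implemented_by
FROM pg_operator o
WHERE o.oprname = '='
  AND 'public.i32'::regtype IN (o.oprleft, o.oprright);
```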
This is a way you can test pushing your server into recovery mode
immediately. With this one you get the problem instantly, unlike the
last one, where you have to wait a long time for it to happen.
A master-detail relation with an additional table of configurations. Usually
this master has
Hi Michael, so sorry for the delay. I never had that problem again, so I
thought it was solved, but it seems not. We have two different SQL statements which push the
server into recovery mode. This SQL is the harder one, because you'll have to
wait some time to get the error. Just run this SQL, open your HTOP and
I understand, but the problem is that I cannot just migrate from 11 to 12; I
have to carefully verify all code before migrating.
Ok Tom, but then you cannot go back and forth, like this ...
select to_timestamp(jsonb_build_object('mydate',
current_timestamp)->>'mydate', 'YYYY-MM-DD HH24:MI:SS');
works on 11.7 but not on 12.3.
Both are ISO, MDY
select To_Json(Current_Timestamp);
returns "2020-05-15T14:49:44.266342+00:00" on version 11.7 or 12.3.
So I have lots of JSONs which have timestamps in them.
select
JS ->> 'mydate'::text,
to_timestamp((JS ->> 'mydate'), 'YYYY-MM-DD HH24:MI:SS')
from (select '{"somefield": true, "otherfield":
pg_stat_subscription does not answer what I want. This view just shows whether
replication is being processed, but not exactly what it's doing.
Is it replicating a huge table which an erroneous process updated several
times?
Which index is the replication process recreating, and how much time is that
taking?
On my built-in logical replication I see that it sometimes takes a long time
to catch up with the master. The question is: how can I see what is being done
on the replica? I know I can have a WAL reader; I tried wal2json, which listens to all
data coming from the master, but as I understood it, when I use that plu
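For what it's worth, the stock views can at least show lag and activity on both sides; a sketch assuming only built-in logical replication on PG 10+:

```sql
-- On the subscriber: per-subscription apply position.
SELECT subname, received_lsn, latest_end_lsn, latest_end_time
FROM pg_stat_subscription;

-- On the publisher: how far each walsender's replay position lags.
SELECT application_name, state,
       pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS replay_lag_bytes
FROM pg_stat_replication;
```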
Well, for now it's solved, but I'll explain what happened so it can be solved better in the
future.
Suppose that on the master you have a database with hundreds of schemas with the same
structure, so the table Customer appears 500 times in that DB. That database is
being replicated with publication/subscription in the FOR ALL TABLES model
built-in logical replication
Now I have the same problem with a different message.
I've added a table in all schemas and did a refresh of the publication. When
Postgres sees a new table on the publisher, it goes to the replica and truncates that
table to start copying. OK, but I have 300 schemas; how can I know which
schema that table belongs to?
Correct: which schema that table belongs to.
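A sketch of how to see which tables the subscription workers are still syncing, run on the subscriber (catalog query only, no assumptions beyond built-in logical replication):

```sql
-- Tables not yet in the 'r' (ready) state; 'i' = initializing,
-- 'd' = copying data, 's' = synchronized.
SELECT n.nspname AS schema_name, c.relname AS table_name, sr.srsubstate
FROM pg_subscription_rel sr
JOIN pg_class c ON c.oid = sr.srrelid
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE sr.srsubstate <> 'r';
```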
I have a publication/subscription replication.
Then this week I started to see this message in the log of the replica server.
The message is "duplicate key value violates unique constraint "pksyslookup"".
The detail is "Key (lookup_id)=(56) already exists."
And on the production server the message is
"logical decoding fou
Additionally, is there a list of all options for CREATE FOREIGN SERVER and
CREATE FOREIGN TABLE?
I'm trying to insert/update some tables using FDW. I'm having problems when I
want to use default values and triggers.
For columns with default values, if I drop that column from the foreign table
definition it is fine for inserts, but then I cannot use that column for
updates/selects because it doesn't exist o
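One alternative, sketched with hypothetical foreign table and column names and assuming postgres_fdw: instead of dropping the column, declare a local DEFAULT on the foreign table, since remote defaults are not applied to rows inserted through it.

```sql
-- The local default is sent with the INSERT, so the column stays
-- visible and usable for SELECT and UPDATE.
ALTER FOREIGN TABLE remote_tbl
  ALTER COLUMN created_at SET DEFAULT now();
```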
I have to replace the ; , " and ' characters in a string, putting in their corresponding
ASCII values.
I have
Replace(Replace(Replace(Replace($$Text,with;On'It"$$, ',', 'chr(44)'), ';', 'chr(59)'), '"', 'chr(34)'), '''', 'chr(39)')
It works, but I would like to call just one Replace.
I tried this but it didn't work
select
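A single-pass sketch (not a single Replace call, which as far as I know does not take multiple pairs): split the string into characters and map each one through a CASE, which handles all four characters in one scan.

```sql
-- One pass over the string; unmatched characters pass through unchanged.
SELECT string_agg(
         CASE c
           WHEN ','  THEN 'chr(44)'
           WHEN ';'  THEN 'chr(59)'
           WHEN '"'  THEN 'chr(34)'
           WHEN '''' THEN 'chr(39)'
           ELSE c
         END, '' ORDER BY ord)
FROM regexp_split_to_table($$Text,with;On'It"$$, '')
     WITH ORDINALITY AS t(c, ord);
```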
I know it's possible, but it's ugly.
When you see a trigger called zaudittable it is really strange.
Well, my way of doing auditing is done on the replica, so it's a little different.
On the production server I do, on BEFORE UPDATE:
IF (tg_op = 'UPDATE') AND (new.* IS DISTINCT FROM old.*) THEN
new.userauditing = User_ID;
new.datetimeauditing = current_timestamp;
END IF;
RETURN new;
Then, on re
On the CREATE TRIGGER page of the docs:
"If multiple triggers of the same kind are defined for the same event, they
will be fired in alphabetical order by name."
But suppose we have two triggers: one is called for every table, for auditing
purposes, for example, and the other for a specific process of that tab
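A minimal sketch of using names to control firing order (table and function names hypothetical); since same-event triggers fire alphabetically, a prefix pins the order, which is presumably why the audit trigger mentioned above is called zaudittable:

```sql
-- "a_" sorts before "z_", so the specific trigger fires first and the
-- audit trigger fires last, regardless of creation order.
CREATE TRIGGER a_specific_process BEFORE UPDATE ON MyTable
    FOR EACH ROW EXECUTE PROCEDURE specific_process_fn();
CREATE TRIGGER z_audit BEFORE UPDATE ON MyTable
    FOR EACH ROW EXECUTE PROCEDURE audit_fn();
```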
We use replication with publication/subscription. It's OK, it works fine.
But if I go to my replica server and do select * from pg_subscription,
in the field subconninfo I have all the properties to connect: host, port, user,
password and dbname; all this info is available.
The documentation says the user for repl
Very good, the records were changed and everything is OK.
Well, sorry for this stupid question, but some time ago we did something
similar to this and our replica stopped, so I was just afraid of that
happening again.
Correct, those records are on the replica too. I'm just talking about the best
way to update those keys and all their dependent tables.
If I change them first on the master they will not be replicated, because it will
be an update and their PK will not be found on the replica, correct?
If so, do I need to upda
I have replication using publication/subscription, configured with REPLICA
IDENTITY DEFAULT, so it uses PK values to do the replication. Then I
imported data into a new schema, and it has several tables with a record whose
PK = 0. Replication works but my application doesn't, because it nee
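One option for updating those key values safely, sketched with a hypothetical table name: REPLICA IDENTITY FULL makes the publisher log the whole old row, so updates that change the key can still be matched on the replica (at the cost of larger WAL records).

```sql
-- With FULL identity, a key update replicates by matching the complete
-- old row instead of only the primary key value.
ALTER TABLE myschema.mytable REPLICA IDENTITY FULL;
```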
Sorry, I told you that the status was always populated, but it is not; I need that
filter on the index because not all records use that JSON structure.
When the JSON is not null, then yes, Status is always there.
Maybe 20 or 25% of records have the JSON populated, so I really need
that filter.
Y
No, no, no: the partial index is the WHERE clause of it, and I'm not talking about
that.
I'm talking about datevalue, which will be used from one object or another, depending on
the status value.
This record's index will be status and datevalue from the "visitadescartada"
object
{
"status": "visitadescartada",
"atribui
I said it was almost dynamic because it uses the DateValue from one object or
another, depending on the value of the status key.
I think I solved my problem. I didn't know it was possible, but I've created
an almost dynamic index, because it indexes on status and the corresponding
datevalue of the object with that status value.
Postgres, I love you.
Just one B-Tree index with 2 fields, Status and the DateTime of that respective
S
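If I read the thread right, that "almost dynamic" index is presumably something like this sketch (column and table names taken from the query later in the thread; the exact definition is a guess): a two-column expression index on the status key and on the 'data' value of the object named by that status.

```sql
-- The second expression follows whichever sub-object the current
-- "status" value names, e.g. JsonBField->'descartada'->>'data'.
CREATE INDEX idx_status_data ON MyTable (
    (JsonBField ->> 'status'),
    ((JsonBField -> (JsonBField ->> 'status')) ->> 'data')
);
```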
OK, my select performed better, but I had to create 8 indices to speed up my
query.
I would love to create just one index using GIN(JsonBField jsonb_ops), but
on version 11 I cannot use operators like > and <.
I see in the docs that version 12 has jsonpath filter expression elements, and
they inclu
select * from MyTable where
((JsonBField->>'status'='descartada' and
To_Date(JsonBField->'descartada'->>'data','YYYY-MM-DD') > Current_Date) or
(JsonBField->>'status'='contrato' and
To_Date(JsonBField->'contrato'->>'data','YYYY-MM-DD') > Current_Date-7) or
(JsonBField->>'status'='naoatribui
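On version 12, a jsonpath predicate can fold such branches into one expression. A sketch assuming the same column, ISO-formatted date strings (jsonpath string comparison is lexicographic, which works for YYYY-MM-DD), and a made-up cutoff date:

```sql
-- One jsonpath predicate instead of one OR-branch per status value.
SELECT *
FROM MyTable
WHERE jsonb_path_exists(
        JsonBField,
        '$ ? (@.status == "descartada" && @.descartada.data > "2020-05-01")');
```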
Your select returns no records, but it does if I use
WHERE p.proname ~ 'day_inc'
instead of
WHERE p.proname OPERATOR(pg_catalog.~) '^(pg_catalog.day_inc)$'

Schema      Name     Result data type  Argument data types  Type
pg_catalog  day_inc  anyelemen
This select gives me:
ERROR: more than one function named "pg_catalog.day_inc"
SELECT * FROM pg_extension;
extname, extversion
plpgsql,1.0
pg_stat_statements,1.6
pg_trgm,1.4
tablefunc,1.0
unaccent,1.1
pageinspect,1.7
oprname  oprkind  oprleft  oprright  oprresult  oprcode
+        l        0        20        20         int8up
+        l        0        21        21         int2up
+        l        0        23        23         int4up
+        l        0        700
Some days ago one of my customers complained that one of his queries was
taking too much time. Then in the Postgres log it looks really strange. That
query doesn't usually appear as a parse, sometimes only as an execute, because
almost every time it takes less than 500 ms. But on that day something
happene
format_type  format_type       castfunc  castcontext  castmethod
bigint       smallint          714       a            f
bigint       integer           480       a            f
bigint       real              652       i            f
bigint       double precision  4
Doing an inner join with pg_proc I get this result. So, none of these casts
were created by me. Column prorettype is different; is that the problem?
select format_type(castsource, NULL), format_type(casttarget, NULL), castfunc,
castcontext, castmethod, pr.* from pg_cast inner join pg_proc pr on castfunc
Nope, it seems I didn't create anything.
SELECT * FROM pg_operator WHERE oid = '+(numeric, bigint)'::regoperator;
ERROR: operator does not exist: +(numeric, bigint)
Ok, I can cast and it works. But why does it work without casting for you and not for
me?
I don't know if I did.
PostgreSQL 11.5 (Ubuntu 11.5-1.pgdg16.04+1) on x86_64-pc-linux-gnu, compiled
by gcc (Ubuntu 5.4.0-6ubuntu1~16.04.11) 5.4.0 20160609, 64-bit
If I do ...
select 1::NUMERIC + 1::BIGINT;
I get ...
[42725] ERROR: operator is not unique: numeric + bigint Hint: Could not
choose a best candidate operator. You might need to add explicit type casts.
Does this error mean I have more than one way to calculate that formula?
Did I create that operat
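A sketch for listing the candidate operators the parser is choosing between (a plain catalog query, nothing version-specific assumed); any row whose implementing function lives outside pg_catalog would be a custom one:

```sql
-- Every '+' operator that could accept numeric or bigint on either side.
SELECT oid::regoperator AS operator, oprcode AS implemented_by
FROM pg_operator
WHERE oprname = '+'
  AND ('numeric'::regtype IN (oprleft, oprright)
       OR 'bigint'::regtype IN (oprleft, oprright));
```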
These values were collected before vacuum analyze:
select (select reltuples from pg_Class where relName = t.relName), *
from pg_stat_sys_tables t
where schemaname = 'pg_catalog'
and relname in ('pg_class','pg_attribute','pg_index');
reltuples relid schemaname relname seq_scan seq
This week we added just 5 new customers. Every schema has 100 tables, 300
indices, 400 triggers.
I cannot imagine our script doing 75000 updates just for adding those
schemas.
autovacuum_analyze_threshold = 50
autovacuum_analyze_scale_factor = 0.1
https://explain.depesz.com/s/5Rrd
EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS) SELECT
current_database() AS TABLE_CATALOG, n.nspname AS TABLE_SCHEMA, t.relname AS
TABLE_NAME, t.oid AS TABLE_OID, current_database() AS INDEX_CATALOG,
n.nspname AS INDEX_SCHEMA, c.relname AS INDEX_NAME, c.oid AS INDEX_O
Coming back to my problem: today it happened again; that search came back slow.
Explaining again: autovacuum and autoanalyze for the pg_* tables are not
configured one by one, so they use the defaults of scale_factor = 0.2 and
threshold = 50. Today, when that speed problem came back, the view
pg_stat_s
As I told you before, these queries cannot be changed because the driver
creates them.
As I cannot change them, how can I help Postgres run them faster? Just that.
My log_min_duration_statement = 500ms, so I find some dozens of this SQL
inside the log daily, but I'm sure this SQL is used thousands
Well, not exactly.
The values for autovacuum and autoanalyze are null because I have not configured
them for the system tables yet, but I'm doing vacuum manually once a week. My
question now is why those selects vary that way.
Almost every time they take 20 ms, but 2 or 3% of the time they take 500 ms; why?
The last values for autovacuum and autoanalyze are null.
Thanks, I'll change it and see if that solves our problem.
The driver I'm using does some queries on system tables, and obviously I
cannot change them because they are inside the driver. These two queries are usually
fast, but sometimes they take 8 or 15 times longer to run. The question is:
can I change something on these tables? Can I create an index? Can I load
th
Well, I used only domains with fixed sizes.
create domain i32 as integer;
create domain i16 as smallint;
create domain datahora as timestamp;
select count(*) Total_Tables,
count(*) filter (where t2.oid is not null) Toasted_Tables,
count(*) filter (where t2.reltuples > 0) Toasted_with_records
from pg_class t1 left join pg_class t2 on t1.reltoastrelid = t2.oid and
t2.relkind = 't'
where t1.relkind = 'r'
total_tables toasted_tables toasted_with_records
Correct, it seems there is something wrong in the system tables. Maybe our script is
running something that is already there; we will check.
select * from pg_stat_sys_tables where schemaname = 'pg_catalog'
relname     n_live_tup  n_dead_tup
pg_attrdef  3699        1095
pg_index
view                 count  count
pg_stat_all_tables   37158  807
pg_stat_sys_tables   16609  223
pg_stat_user_tables  20549  584
select count(*), count(*) filter (where last_autovacuum is not null) from
pg_stat_all_tables
count count
36605 1178
But which tables should I check to see if vacuum ran on them?
Nope, that schema and its entire structure did not exist.
Running that sql:
name                                 setting
autovacuum                           on
autovacuum_analyze_scale_factor      0.1
autovacuum_analyze_threshold         50
autovacuum_freeze_max_age            2
autovacuum_max_workers               3
autovacuum_multixact_freeze_max_age  4
autovacuum_naptime                   60
autovacuum_vacuum_cost_delay
Nope, not one message near those statements.
I haven't changed anything in postgresql.conf related to autovacuum.
So, what should I tune on autovacuum?
My script was running strangely. The Postgres log shows me the time spent
creating functions. That happens when creating triggers and tables too.
Sometimes it's very fast and sometimes ...
statement: create or replace function valoresdfe... 0 mins 1.135 secs
I had not vacuumed or reindexed my database in the last 30 days, and that
was my problem.
It works fine if I do a REINDEX DATABASE before adding the new schema.
Well, I'll try just reindexing the system catalogs before adding a new schema, to see if
that works.
We have 190 identical schemas in a single database. Now, when we create a new
one, with exactly the same structure as the previous ones, it takes 20 or 30
minutes to finish. The usual time for that script to finish was 30 seconds.
Basically, my script creates an entire structure for a new customer:
- Create s
- Because we don't need to grant rights to the user on sequences;
- Nobody will change the values of PK fields, because we would like to have
GENERATED ALWAYS on those PK fields.
Sorry, the example I was thinking of was this one, which works on Firebird,
using its own syntax, obviously.
create function myproc(id integer) returns I32 language sql as 'select $1';
On Postgres: ERROR: return type mismatch in function declared to return i32
What I mean is that Firebird sees I
Adrian, on Firebird, if you create a table or procedure with fields or params
declared with domains, they can be used with those domains or with their
base type. On Postgres, I32 is not the same as integer.
create procedure myproc(id i32) returns(x i32) as ...
select * from myproc(cast(5 as integer))
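For the Postgres side, a small sketch of what makes the earlier example compile (assuming the i32 domain from this thread): casting the result to the domain satisfies the declared return type.

```sql
-- The bare "select $1" fails the return-type check; an explicit cast
-- from integer to the domain i32 fixes it.
CREATE FUNCTION myproc(id integer) RETURNS i32
LANGUAGE sql AS 'SELECT $1::i32';
```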
Well, I think it'll not be as easy as you said. That table has dependencies.
So, if I try to alter the type it gives me ERROR: cannot alter type of a column
used in a trigger definition. I dropped all triggers of that table and then it
gives me ERROR: cannot alter type of a column used by a view or rule.
The
OK, thanks for the explanation, but ... I cannot add a field and move the data,
constraints, triggers and identity to it, because my PK field would be
repositioned to the last position in that table, and I have lots of other code
which assumes the PK is the first field of every table.
So, is there a way to conver
Domains on Postgres are really strange to me. I am creating a domain which is
exactly equal to integer, right?
create domain i32 as integer;
create domain T50 as varchar(50);
Create table MyTable(
ID I32 not null primary key,
Description T50);
Then, after inserts and updates are done to that table,
Our database has 180 schemas with 100 tables each, the majority of them
being small tables. So when we do a dump or restore we don't know how
much time the job will take.
So, is there an option to dump or restore ordered alphabetically? It could
be by schema or table; do we have it?
Yes, they are jsonb.
Their length varies between 400 and 2,000 chars when using
pg_column_size on the jsonb field. When cast to text, the length goes to 1,800
to 3,500.
I didn't set any storage strategy for that table or field. It's defined with
storage 'x' (extended).
No, I don't have any index on that field.
I have a table with lots of updates on a JSON field and few updates on the other
fields. On that table I have several indices, and it is the parent table of
lots of other child tables.
When querying this table I always do a lateral join with that JSON field, so
having that field in a separate table wouldn't b
Well, I think it is not a transaction problem, because if you do the same thing
in a DO block it works.
DO $$
declare vMaster_ID integer;
begin
insert into Master(Customer_ID, Field2) values(1, 'BlaBla') returning
Master_ID into vMaster_ID;
insert into Detail(Master_ID, Product_ID, ProductValue)
We like to use WITH to insert, update and return some value to the user. But some
information from those related tables is not available at that time; is that
a bug?
with
Master(Master_ID) as (insert into Master(Customer_ID, Field2) values(1,
'BlaBla') returning Master_ID),
Detail as (insert int
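For what it's worth, the usual workaround: within a single statement, a data-modifying CTE's new rows are only visible to later parts through RETURNING, so the detail insert has to select from the master CTE. A sketch using the tables from the example (the Product_ID and ProductValue values are made up):

```sql
-- The Detail insert reads Master_ID from the first CTE's RETURNING;
-- it cannot see Master's new row directly in the same statement.
WITH m AS (
    INSERT INTO Master(Customer_ID, Field2)
    VALUES (1, 'BlaBla')
    RETURNING Master_ID
)
INSERT INTO Detail(Master_ID, Product_ID, ProductValue)
SELECT m.Master_ID, 100, 9.90
FROM m;
```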
OK, you won. Sorry, I didn't read that "If neither is specified, FOR EACH
STATEMENT is the default".
I'm not saying it should inspect the function code, but I think it should refuse
when I try to create a trigger missing a needed argument.
When I do ...
create table MyTable(integer);
it gives me a "syntax error at end of input", because I forgot the field name.
Why, when I do ...
create trigger MyTrigger aft
Sorry, I wasn't clear: when I said it was returning null, I meant that new.PK
was null, not the function result.
Try to create exactly what I sent you and you'll see null values in that PK.
And that occurs just because the trigger was created without FOR EACH
ROW/STATEMENT.
So, my question is, PG shou
We created an ordinary trigger which seemed not to be firing, or not to be working
properly. Then we put some RAISE NOTICEs in it and saw that it was returning
null values. But why? It's AFTER INSERT and it is a primary key; it cannot be
null.
create function MyFunction() returns trigger as $$
begin
raise '%', n
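The fix implied in this thread, sketched with the names from the example: without a FOR EACH clause, CREATE TRIGGER defaults to FOR EACH STATEMENT, where NEW is not populated, so the row-level version must be explicit.

```sql
-- Row-level trigger: NEW.* is populated for each inserted row.
CREATE TRIGGER MyTrigger
AFTER INSERT ON MyTable
FOR EACH ROW EXECUTE PROCEDURE MyFunction();
```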
"Restart Postgres" means exactly what? We tried just restarting the service, but
when we tried to refresh the publication the old view was still used, because it took 2 hours
and gave us a timeout.
I found some people saying that I need to initdb, but does initdb mean recreating
my database entirely, or just reinstalling my Pos
I cannot, because we created the replication FOR ALL TABLES.
I tried some time ago ... but got no responses, so I ask you again.
pg_publication_tables is a view that is used to refresh a publication, but as
we have 15,000 tables, it takes hours and doesn't complete. If I change that
view I can have an immediate result. The question is: can I change that view?
Th
On versions 10 and 11 the hash index was redesigned and now seems to be better
than btree.
Its speed, size and memory consumption are better, so ...
Why is it not possible to have all PKs and FKs just using hash indices? The only
thing I need on a PK and FK is search and join, and a hash index responds very
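The constraint, as far as I know: PRIMARY KEY and UNIQUE are enforced by unique btree indexes, and hash indexes cannot be unique, so the most you can do is add a hash index alongside the PK for equality lookups. A sketch with hypothetical names:

```sql
-- Hash indexes support only equality (=), which joins and lookups use,
-- but they cannot back the PRIMARY KEY constraint itself.
CREATE INDEX idx_mytable_id_hash ON MyTable USING hash (ID);
```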
*We use logical replication from PG version 10.6 to 11.2. Both are Ubuntu
16.04. We have a hundred schemas with more or less a hundred tables each, so the
number of tables is about 10,000. Replication is all OK, but when we try to
do a REFRESH SUBSCRIPTION because we added a new schema, it takes hours and
Sorry, there is a missing part of the CTE;
that constant should be at the beginning of it.
with feriados as (
SELECT dia, repete
FROM (
VALUES ('2014-10-11'::DATE, FALSE), ('2014-10-13', FALSE),
('2014-10-14', FALSE), ('2014-10-15', FALSE), ('2014-10-16', FALSE),
('2014-10-17', FALS
It is a SQL FROM VALUES; why do you need tables?
*Here is a SQL statement which will trigger that recovery mode. You can run it on any
database, because we created it with FROM VALUES, so ...*
But this one is the one which grows and grows in memory use until all memory
and swap space are gone, and then the problem occurs. The other SQL which gives us
the same problem but
*Well, now we have two queries which stop our Postgres server completely.
The problem occurs on versions 10.6 and 11.1.
On both servers the problem is the same.
Linux logs of the old crash are:*
Feb 1 18:39:53 fx-cloudserver kernel: [ 502.405788] show_signal_msg: 5
callbacks suppressed
Feb 1 18:39
About replication ... logical replication with CREATE
PUBLICATION/SUBSCRIPTION.
Yes, some DDL commands were run on that server, but none of them were related
to that select.
Let me explain better. We have a single server with a single database on it.
Each customer has its own schema and connects
Nothing was changed: Postgres 10, Ubuntu 16.04, and the schema was the same before
and after that problem.
Well, that database is replicated, and on the replica server that problem
doesn't occur.
Just to be readable ...
And ... the server log has only "the database system is in recovery mode" every
time I run that query.
I have a complex query which puts my server in recovery mode every time I
run it.
I don't need to say that recovery mode is a situation you don't want your
server to get into.
I'm using Postgres 10 on Ubuntu in a Google VM (8 cores, 32 GB RAM, 250 GB SSD)
and the DB has 70 GB.
I have a complex query which puts my server in recovery mode every time I run
it. I don't need to say that recovery mode is a situation you don't want
your server to get into.
*If I'm using some subselects I'll get that situation*
with StatusTrabalhando(Intkey) as (select Intkey from sys_Lookup
where Gr
OK, then help me find alternatives to it.
As I said, sometimes we change our PK, so using OID would be a smart way to
have a join between the old and new transition tables, and we would like to use
transition tables because each-statement triggers are a lot faster than each-row
for auditing purposes.
So, what
Yes, mytable has an OID column.
select oid, ID, Name from MyTable limit 3
oid     id        name
279515  1104      Carol
279516  1106      Dalva
279517  11008354  Melissa