Using version 16, it seems strange when a TOAST table needs to be created. Tested
with the domain being numeric or varchar(10), with the same results.
And if that domain is integer then no TOAST table is created.
I think none of these tables should have a TOAST table, right?
postgres=# create domain mynum as
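A minimal sketch of how to verify the reported behavior; the domain definition is completed with an assumed numeric(10,2), and the table names are illustrative:

```sql
-- Assumed completion of the domain above; then check pg_class.reltoastrelid
create domain mynum as numeric(10,2);
create table t_domain (v mynum);     -- reportedly gets a TOAST table
create table t_plain  (v integer);   -- reportedly does not

-- reltoastrelid is nonzero when a TOAST table exists
select relname, reltoastrelid <> 0 as has_toast
from pg_class
where relname in ('t_domain', 't_plain');
```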
create event trigger ... on login is now available, but it is not shown in the
docs, or is it on another page?
https://www.postgresql.org/docs/devel/event-trigger-matrix.html
doesn't have it; shouldn't it be there?
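For reference, a minimal sketch of the new login event trigger; the function name and body here are made up:

```sql
-- Hypothetical example of the PG17 login event trigger
create function on_login_notice() returns event_trigger
language plpgsql as $$
begin
  raise notice 'login by %', session_user;
end $$;

create event trigger log_logins on login
  execute function on_login_notice();
```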
regards
Marcos
On Sun, Jun 16, 2024 at 12:11, Pavel Stehule
wrote:
> I don't follow this idea - when it does not make sense, then why do you
> use it? It can be a signal of some issue in your code.
>
>>
I don't use it, but sometimes it occurs, and there are lots of languages
which ignore it, so it
On Sun, Jun 16, 2024 at 11:37, Pavel Stehule
wrote:
>
> What is the expected benefit? Generally PL/pgSQL has very strict syntax -
> and using double semicolons makes no sense.
>
> exactly, it makes no sense. That is because it should be ignored, right?
But ok, if this is a different
Can you remove, or just ignore, a double ';' too?
postgres=# do $$
declare var_x integer;
begin
var_x = 99;;
delete from x where x = var_x;
end; $$;
ERROR: syntax error at or near ";"
LINE 1: do $$ declare var_x integer; begin var_x = 99;; delete from ...
Best regards,
On Sun, Jun 16,
-
Rename SLRU columns in system view pg_stat_slru (Alvaro Herrera)
The column names accepted by pg_stat_slru_rest() are also changed.
Is pg_stat_slru_rest() correct?
While looking at the changes and new features in
https://www.postgresql.org/docs/devel/release-17.html
I see that there are too few links to other doc pages, which would be
useful.
There are links to
"logical-replication", "sql-merge", "plpgsql", "libpq" and
"pgstatstatements"
But no link is
This page has 3 items that are between parentheses; is there an explanation
for why they are used this way?
https://www.postgresql.org/docs/devel/functions-json.html#FUNCTIONS-SQLJSON-TABLE
Each syntax element is described below in more detail.
*context_item*, *path_expression* [ AS
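For context, a minimal JSON_TABLE call showing where context_item and path_expression sit in the syntax; the data is made up:

```sql
-- context_item is the JSON value; path_expression selects the rows
select *
from json_table(
  '[{"a": 1, "b": "x"}, {"a": 2, "b": "y"}]',  -- context_item
  '$[*]'                                        -- path_expression
  columns (
    a int  path '$.a',
    b text path '$.b'
  )
);
```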
This is my first patch, so sorry if I missed something.
If I have a function which returns lots of columns and any of those columns
returns a wrong type, it's not easy to see which column it is, because the
error points me only to its position, not its name. So, this patch only adds
that column name,
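A sketch of the situation the patch targets (function and column names here are made up): the mismatch is reported only at execution time, and the DETAIL identifies the offending column by position alone:

```sql
-- Hypothetical reproduction: column 2 returns integer where text is declared
create function wide_report() returns table (id int, label text)
language plpgsql as $$
begin
  return query select 1, 2;  -- 2 is an integer, not text
end $$;

select * from wide_report();
-- fails with "structure of query does not match function result type";
-- before the patch the DETAIL names only the column position
```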
Borodin
escreveu:
>
>
> > On 24 Jan 2024, at 22:00, Marcos Pegoraro wrote:
> >
> > Is it enough from 1970?
> Per standard unix_ts_ms field is a number of milliseconds from UNIX start
> date 1970-01-01.
>
> > How about if user wants to have an UUID of his birt
Is it enough from 1970?
What if a user wants to have a UUID of his birth date?
regards
Marcos
On Wed, Jan 24, 2024 at 13:54, Andrey M. Borodin <
x4...@yandex-team.ru> wrote:
>
>
> > On 24 Jan 2024, at 20:46, Aleksander Alekseev
> wrote:
> >
> > Only the
> > fact that timestamp
>
>> The years are converted to months and the fractional month is rounded half
>> up:
>>
>> 1.05 year = 12.6 month
>> => 1 year 0.6 month
>> => 1 year 1 month (after rounding)
>>
>> Compare that to 12.5 months to see when the rounding occurs:
>>
>> 12.5 month
On Mon, Feb 6, 2023 at 10:59, Erik Wienhold
wrote:
> > On 06/02/2023 12:20 CET Marcos Pegoraro wrote:
> >
> > I was just playing with some random timestamps for a week, for a month,
> > for a year ...
> >
> > select distinct current_date+((random():
On Sun, Jan 29, 2023 at 09:24, Dmitry Dolgov <9erthali...@gmail.com>
wrote:
> > On Fri, Jan 27, 2023 at 08:15:29PM +0530, vignesh C wrote:
> > The patch does not apply on top of HEAD as in [1], please post a rebased
> patch:
>
> Thanks. I think this one should do the trick.
>
There is
>
> I agree with your idea and try to implement it and will soon attach a
> patch with a solution.
>
Additionally, if those OR constants repeat you'll see ...
If all constants are the same value, fine
explain select * from x where ((ID = 1) OR (ID = 1) OR (ID = 1));
Index Only Scan using x_id on x
Having a sup_user and a normal_user, log in with sup_user:
select session_user, current_user
sup_user, sup_user
set role normal_user;
select session_user, current_user
sup_user, normal_user
But then, when sup_user is running with normal_user grants, an exception
occurs:
select * from
>
> I started this time around from the newest of my patches in those
> threads, but the implementation has changed considerably from what's
> there.
>
I don't know exactly what the scope of this process you're working
on will be, but there is a gap in foreign key constraints too.
It is possible
>
> OK, we really need a repeatable test if possible. Perhaps a pgbench run
> with lots of concurrent runs of some very long query would do the trick.
>
OK, I can do it, but ... strangely that error usually occurs at random
times, sometimes at 08:00, sometimes at 19:00, and it's busier between
>
> What platform is this on? Is it possible that on some platform the chunk
> size we're using is not doing an atomic write?
>
Until last year we were on Ubuntu 16.04 and Postgres 11 with the latest minor
update.
This January we changed to Ubuntu 20.04 and Postgres 14, now updated to
14.4.
But the
>
>
> How are you running postgres? If the logger process runs into trouble it
> might
> write to stderr.
>
> Is there a chance your huge statements would make you run out of space?
>
> Well, I don't think it is an out-of-space problem, because it
doesn't stop logging, it just splits that message.
It's csvlog only.
Best regards,
On Sun, Aug 7, 2022 at 11:12, Tom Lane wrote:
> Marcos Pegoraro writes:
> > I have to fix log files because their content is not properly formatted,
>
> What mechanism are you using to store the log? If syslog is involved,
> it
I have to fix log files because their content is not properly formatted. I'm
using version 14.4, but that happened when I was using version 11 too. It
happens only when the statement is huge, either because it is a long sequence
of updates in a WITH or DO statement, or because I'm updating a bytea
"","Key (customer_id)=(530540) already
exists.""","logical replication worker",,6675519194010520265
2022-06-28 08:18:24.160 -03,,,2009,,62b35392.7d9,88469,,2022-06-22 14:38:26
-03,,0,LOG,0,"background worker ""logical replication w
I don't know how to create a patch, maybe someday, but for now I'm just
sending this little problem in case somebody can solve it.
In a multi-schema environment where several tables have the same structure it is a
little bit hard to know which one already has that primary key.
In the log I now see on the replica
On Mon, Feb 21, 2022 at 13:13, Zheng Li
wrote:
>
> 2. Table level
> Allows DDLs on the published tables to be replicated except for
> certain edge cases.
>
> Think about how to handle triggers and functions with the same name but different
purposes.
Publisher
create function public.audit()
On a multi-tenant server, with hundreds of schemas with the same structure, I
have an audit table shared with all of them. When any record is deleted I
add to this table the tenant, table and PK values, just that. Something like
this:
drop table if exists audit;
create table audit(id serial primary key,
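The definition above is cut off; a hedged sketch of how it presumably continues, given the tenant/table/PK description (the column names are assumptions):

```sql
-- Assumed completion of the audit table described above
create table audit (
  id      serial primary key,
  tenant  text not null,  -- schema of the tenant
  tabname text not null,  -- table the record was deleted from
  pkvals  text not null   -- primary key value(s) of the deleted record
);
```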
>
> For example, if I define a variable called "relkind", then psql's \sv
> meta-command is broken because the query it performs can't distinguish
> between the column and the variable.
>
> If variables use : as a prefix you'll never have these conflicts.
select relkind from pg_class where relkind
A question I always have, and I didn't find anybody answering it. If it's
possible
select * from MyDB.MySchema.MyTable;
And from the user's point of view ... all databases are accessible from the same
postgres instance, the user just says connect to this or that database, so why is
it not possible to do
select
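Today the closest workaround goes through a foreign-data wrapper; for example with dblink (the database, schema and table names here reuse the hypothetical ones above):

```sql
create extension if not exists dblink;

-- Query a table in another database of the same instance
select *
from dblink('dbname=MyDB',
            'select id from MySchema.MyTable')
     as t(id integer);
```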
>
> I think the reason why we can't update a materialized view directly is
> because
> it is basically a "view" and it should not contain any data irrelevant to
> its
> definition and underlying tables. If we would have a feature to update a
> materialized view directly, maybe, it should behave
Sorry, I didn't explain exactly what I was doing; I just wrote "This
replication is an auditing database" in my second email.
Best regards,
On Mon, Nov 29, 2021 at 09:20, Amit Kapila
wrote:
> On Mon, Nov 29, 2021 at 5:04 PM Marcos Pegoraro wrote:
> >>
>
> > 2138: Incremental Materialized View Maintenance
>
> I've responded to it in the following thread, and described the recent
> discussions,
> current status, summary of IVM feature and design, and past discussions.
>
IVM is a wonderful feature, but some features were omitted because of
its
>
> On thinking about this point again, it is not clear to me why that
> would matter for this particular use case? Basically, when you create
> a new subscription, it should copy the entire existing data from the
> table directly and then will decode changes from WAL. So, I think in
> your case
>
> I think we don't want to make assumptions about the remote end being
> the same after the upgrade and we let users reactivate the
> subscriptions in a suitable way. See [1] (Start reading from "..When
> dumping logical replication subscriptions..") In your case, if you
> wouldn't have allowed
>
> AFAIU the main problem in your case is that you didn't block the write
> traffic on the publisher side. Let me try to understand the situation.
> After the upgrade is finished, there are some new tables with data on
> the publisher, and did old tables have any additional data?
>
Correct.
>
>
>
> The reason is after an upgrade, there won't be any data in
> pg_subscription_rel, and only when you tried to refresh it is trying
> to sync again which leads to the "duplicate key value ..." problem you
> are seeing.
>
> So, is pg_upgrade populating pg_subscription and not pg_subscription_rel
A publication for all tables was running fine; the master is PostgreSQL
11.11. The replica was running version 13 (don't remember the minor version).
Then we tried to upgrade only the subscriber server; nothing was done on the
master side.
Then we did ...
- installed postgresql-14.
- configured postgresql.conf to
>
> suppress_redundant_updates_trigger was created precisely because it's
> not always easy to create application code in such a way that it
> generates no redundant updates. However, there is a cost to using it,
> and the break even point can be surprisingly high. It should therefore
> be used
>
> I suppose if the application side cannot be trusted to code to a
> specification without having the server side add validation and/or
> compensation code to catch the bugs then, yes, one option is to have the
> server side do extra work. There are other solutions, some of which are
> not even
>
> I get the idea of letting the server centralize logic like this - but
> frankly if the application is choosing to send all that data across the
> wire just to have the server throw it away the application is wasting
> network I/O. If it does manage its resources carefully then the server
>
>
> Because it takes resources to determine that nothing changed. If you want
> to opt-in into that there is even an extension trigger that makes doing so
> fairly simple. But it's off by default because the typical case is that
> people don't frequently perform no-op updates so why eat the
Why does this happen?
create table t(i int);
CREATE TABLE
insert into t values(1);
INSERT 0 1
select (ctid::text::point)[1]::int, * from t;
 ctid | i
------+---
    1 | 1
(1 row)
update t set i = i;
UPDATE 1
select (ctid::text::point)[1]::int, * from t;
 ctid | i
------+---
    2 | 1
(1 row)
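The opt-in extension trigger mentioned in the quotes above is suppress_redundant_updates_trigger(); a sketch of attaching it to the example table t (the trigger name is illustrative):

```sql
-- Skip no-op updates on t; the z_ prefix makes it fire last alphabetically
create trigger z_skip_redundant
before update on t
for each row
execute function suppress_redundant_updates_trigger();

update t set i = i;  -- now suppressed, reports UPDATE 0
```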
If
On Wed, Oct 27, 2021 at 16:48, Marcos Pegoraro
wrote:
> Postgres weekly came today with this interesting article:
> https://hakibenita.com/sql-medium-text-performance
>
> The problem it explains is when you have no records being stored in
> TOAST, even if the
Postgres weekly came today with this interesting article:
https://hakibenita.com/sql-medium-text-performance
The problem it explains is when you have no records being stored in
TOAST, even if they are toastable. That is because the size of that column
fits within the toast_tuple_target size, so it
> *Table 27.32. pg_stat_progress_analyze View*
> and
> *Table 27.33. ANALYZE phases*
>
you're right, I didn't see that there is always a phases table afterwards. sorry ...
>
>
>> The page explaining progress reporting views, for all versions, has "The
>> tables" expression several times when it points to a single table. So,
>> singular expressions should be used, right?
>>
>> "The tables below describe the information that will be reported and
>> provide information
The page explaining progress reporting views, for all versions, has "The
tables" expression several times when it points to a single table. So,
singular expressions should be used, right?
"The tables below describe the information that will be reported and
provide information about how to interpret
>
> What kind of reasons do you see where users prefer to delay except to
> avoid data loss in the case where users unintentionally removed some
> data from the primary?
>
>
> Debugging. Suppose I have a problem, but that problem occurs once a week
or a month. When this problem occurs again a
>
> No, I'm talking about that configuration you can have on standby servers
> recovery_min_apply_delay = '8h'
>
>
> oh okay, I think this can be useful in some cases where we want to avoid
> data loss similar to its use for physical standby. For example, if the user
> has by mistake truncated the
>
> oh okay, I think this can be useful in some cases where we want to avoid
> data loss similar to its use for physical standby. For example, if the user
> has by mistake truncated the table (or deleted some required data) on the
> publisher, we can always recover it from the subscriber if we have such a
No, I'm talking about that configuration you can have on standby servers
recovery_min_apply_delay = '8h'
Best regards,
On Mon, Sep 20, 2021 at 23:44, Amit Kapila
wrote:
> On Mon, Sep 20, 2021 at 9:47 PM Marcos Pegoraro wrote:
> >
> > One thing is needed and is
One thing that is needed and not solved yet is delayed replication on logical
replication. It would be interesting to document that on the Restrictions page,
right?
regards,
Marcos
We can run triggers using position only; this way we don't have these few
cycles to determine ordering.
At creation time we populate position, even if it's not set, so for the
first time position will match trigger names. When a user changes a trigger
position we add 1 to the followers.
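To make the proposal concrete, a purely hypothetical syntax sketch, modeled on Firebird's POSITION; this is not valid PostgreSQL today, and the trigger and function names are made up:

```sql
-- Hypothetical syntax only, for illustration of the proposal
create trigger recalc_that
before update on orders
for each row position 1
execute function recalc_that();
```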
regards,
. And usually trigger names
have the same names as their functions, so for this table I have to use a
different name just so it fires first.
On Wed, Sep 15, 2021 at 10:51, Alvaro Herrera <
alvhe...@alvh.no-ip.org> wrote:
> On 2021-Sep-15, Marcos Pegoraro wrote:
>
> > This
PM, Tom Lane wrote:
> > Marcos Pegoraro writes:
> >> Alphabetical order of triggers sometimes makes me write a_Recalc or
> z_Calc
> >> to be sure it'll be the first or the last trigger with the same event on
> that
> >> table
> >
> >> Oracle and S
This would be interesting for those who are migrating from these databases
too. But ok, I'll forget it.
On Wed, Sep 15, 2021 at 08:40, Tom Lane wrote:
> Marcos Pegoraro writes:
> > Alphabetical order of triggers sometimes makes me write a_Recalc or
> z_Calc
> >
This error could be raised ... there is a trigger foo using position 1, please
choose another
Best regards,
On Wed, Sep 15, 2021 at 07:59, Daniel Gustafsson
wrote:
> > On 15 Sep 2021, at 12:28, Marcos Pegoraro wrote:
>
> > CREATE TRIGGER RECALC_THAT BEFORE UP
Hi Hackers,
Alphabetical order of triggers sometimes makes me write a_Recalc or z_Calc
to be sure it'll be the first or the last trigger with the same event on that
table.
Oracle and SQL Server have FOLLOWS and PRECEDES when defining trigger
execution order. Firebird has POSITION, which I like.