> On May 7, 2024, at 05:02, Amit Kapila wrote:
>
>
> In PG-14, we have added a feature in logical replication to stream
> long in-progress transactions which should reduce spilling to a good
> extent. You might want to try that.
That's been my principal recommendation (since that would also
Thank you for the reply!
> On May 1, 2024, at 02:18, Ashutosh Bapat wrote:
> Is there a large transaction which is failing to be replicated repeatedly -
> timeouts, crashes on upstream or downstream?
AFAIK, no, although I am doing this somewhat by remote control (I don't have
direct access to
Hi,
I wanted to check my understanding of how control flows in a walsender doing
logical replication. My understanding is that the (single) thread in each
walsender process, in the simplest case, loops on:
1. Pull a record out of the WAL.
2. Pass it to the reorder buffer code, which,
3.
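The loop described above can be sketched as a toy model: one loop pulls WAL records and hands them to a reorder buffer, which accumulates each transaction's changes and only streams them downstream at commit. This is a rough illustration only; none of the names below are PostgreSQL's actual APIs.

```python
# Toy model of the logical-decoding loop: records go into a reorder
# buffer keyed by transaction id; nothing is emitted until commit.
class ReorderBuffer:
    def __init__(self, on_commit):
        self.by_xid = {}          # xid -> accumulated changes
        self.on_commit = on_commit

    def add_change(self, xid, change):
        self.by_xid.setdefault(xid, []).append(change)

    def commit(self, xid):
        # Stream the whole transaction, in order, then forget it.
        self.on_commit(xid, self.by_xid.pop(xid, []))

    def abort(self, xid):
        self.by_xid.pop(xid, None)  # discard; nothing goes downstream


def walsender_loop(wal_records, rb):
    # Step 1: pull a record out of the WAL.
    # Step 2: pass it to the reorder buffer, which decides what to emit.
    for kind, xid, payload in wal_records:
        if kind == "change":
            rb.add_change(xid, payload)
        elif kind == "commit":
            rb.commit(xid)
        elif kind == "abort":
            rb.abort(xid)


emitted = []
rb = ReorderBuffer(on_commit=lambda xid, changes: emitted.append((xid, changes)))
walsender_loop(
    [("change", 1, "INSERT a"), ("change", 2, "INSERT b"),
     ("change", 1, "UPDATE a"), ("commit", 2, None),
     ("commit", 1, None)],
    rb,
)
# xid 2 commits first, so it is streamed first, even though xid 1's
# changes appeared earlier in the WAL.
```

The model also shows why long in-progress transactions accumulate (and, in real PostgreSQL, spill to disk): everything buffers until commit, which is what the PG 14 streaming of in-progress transactions mentioned earlier mitigates.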
Hi,
I wanted to hop in here on one particular issue:
> On Dec 12, 2023, at 02:01, Tomas Vondra wrote:
> - desirability of the feature: Random IDs (UUIDs etc.) are likely a much
> better solution for distributed (esp. active-active) systems. But there
> are important use cases that are likely to
> On Jan 12, 2023, at 12:35, Andres Freund wrote:
>
> On 2023-01-12 12:28:39 -0800, Christophe Pettus wrote:
>> What's the distinction between errdetail and errdetail_log in the ereport
>> interface?
>
> Only goes to the server log, not to the client.
Thanks!
What's the distinction between errdetail and errdetail_log in the ereport
interface?
which would break the BEGIN/EXCEPTION/END
semantics.
It's not clear to me what the alternative semantics would be. Can you propose
specific database behavior for a COMMIT or ROLLBACK inside a
BEGIN/EXCEPTION/END block which retain the savepoint behavior of
BEGIN/EXCEPTION/END?
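To make the question concrete, here is a toy Python model (all names hypothetical; nothing like PostgreSQL's actual subtransaction machinery) of why the block implies savepoint semantics, and where an in-block COMMIT breaks them:

```python
# Entering a BEGIN/EXCEPTION/END block takes a savepoint (modeled as a
# snapshot); an exception rolls back to it. A real COMMIT inside the
# block would end the transaction and discard every savepoint, leaving
# the EXCEPTION clause with nothing to roll back to.
class Txn:
    def __init__(self):
        self.data = {}
        self.savepoints = []

    def exception_block(self, body):
        self.savepoints.append(dict(self.data))  # SAVEPOINT on entry
        try:
            body(self)
            self.savepoints.pop()                # RELEASE on normal exit
        except Exception:
            self.data = self.savepoints.pop()    # ROLLBACK TO on error

    def commit(self):
        if self.savepoints:
            # This is the semantic break discussed above: committing here
            # would destroy the savepoints the enclosing blocks rely on.
            raise RuntimeError("COMMIT inside an exception block")
        self.data = dict(self.data)  # top-level commit (no-op in this toy)


t = Txn()
t.data["row"] = "old"

def body(txn):
    txn.data["row"] = "new"
    raise ValueError("something failed")

t.exception_block(body)
# The exception rolled the block back to its entry savepoint,
# so t.data["row"] is "old" again.
```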
--
-- Christophe
of course!
--
-- Christophe Pettus
x...@thebuild.com
te a wiki page to accurately describe the failure scenarios for both
exclusive and non-exclusive backups, and the recovery actions for them. If it
exists already, my search attempts weren't successful. If it doesn't, I'm
happy to start one.
--
-- Christophe Pettus
x...@thebuild.com
be an argument going on that all of the
cool kids will have moved off the old interface and there was essentially no
cost to removing it in v12 or v13, and that didn't correspond to my experience.
--
-- Christophe Pettus
x...@thebuild.com
tious, and the change is more or less "drop the file
into mumble/conf.d rather than mumble", which is less of a break.
--
-- Christophe Pettus
x...@thebuild.com
system" is going to be.
--
-- Christophe Pettus
x...@thebuild.com
g that too many dependencies are dragged in, or just lack of familiarity
with them.
--
-- Christophe Pettus
x...@thebuild.com
be
> removed, and then we'll actually remove it in PG13.
That's not my position, certainly; I still object to its removal.
--
-- Christophe Pettus
x...@thebuild.com
ntable using shell scripts.
It's not complicated why: Backup has to interact with a lot more components of
the overall environment than a CSV export, and those components vary *wildly*
from one installation to another, and are often over-constrained by local policy.
--
-- Christophe Pettus
x...@thebuild.com
me of the incompatible catalog changes (in particular, in pg_stat_activity) I
thought were gratuitous, but we did them, and no point in relitigating that
now. I'd say that the internal layout of PGDATA is fairly weak promise
compared to an SQL-level construct, especially one as widely used as
pg_st
* of installations depend on.
> I don't agree that simply documenting the issues with
> it is a sufficient solution to make us keep it.
Understood, but I think we need to document it no matter what.
--
-- Christophe Pettus
x...@thebuild.com
I.
> Sure, it'd be nice if someone would fix it to list out the problems with
> the exclusive API, but that requires someone wanting to spend time on
> it.
I'll take a turn at it.
--
-- Christophe Pettus
x...@thebuild.com
his one because we
said so" when it is an API of long standing that works just as it always did
isn't going to cut it.
--
-- Christophe Pettus
x...@thebuild.com
han deprecate the existing API, I'd rather see the documentation
updated to discuss the danger cases.
--
-- Christophe Pettus
x...@thebuild.com
". I was just bitted by that this week.
--
-- Christophe Pettus
x...@thebuild.com
dump) would be an undoubtedly
superior choice. A long deprecation window would cover a lot of those
situations.
--
-- Christophe Pettus
x...@thebuild.com
> On Jul 26, 2018, at 19:35, Peter Geoghegan wrote:
>
> Why, specifically, would it make them unhappy?
Forensic and archive backups in .tar format (which I know of users doing) would
require a two-step restore process on newer versions.
--
-- Christophe Pettus
x...@thebuild.com
> On Jul 18, 2018, at 14:33, Thomas Munro wrote:
> Here you side step those questions completely and make that the end
> user's problem. I like it.
+1. This is a clever solution, since any kind of key vault or other system
could be dropped in there.
--
-- Christophe Pe
checkpoint or have a --checkpoint option along the lines of pg_basebackup?
This scenario (pg_rewind being run very quickly after secondary promotion) is
not uncommon when there's scripting around the switch-over process.
--
-- Christophe Pettus
x...@thebuild.com
> On Jul 12, 2018, at 19:22, Christophe Pettus wrote:
>
>
>> On Jul 12, 2018, at 17:52, Michael Paquier wrote:
>> Wild guess: you did not issue a checkpoint on the promoted standby
>> before running pg_rewind.
>
> I don't believe a manual checkpoint was done
tup after the timeline switch:
> 2018-07-10 19:28:38 UTC [5068]: [1-1] user=,db=,app=,client= LOG: checkpoint
> starting: force
The pg_rewind was started about 90 seconds later.
--
-- Christophe Pettus
x...@thebuild.com
for about 10 seconds (although it had completed recovery) before being
shut down again, and pg_rewind applied to it to reconnect it with the promoted
secondary.
--
-- Christophe Pettus
x...@thebuild.com
r Chamber that is answerable only to itself. It also allows
for an appeal mechanism.
--
-- Christophe Pettus
x...@thebuild.com
are left with some unappealing options to handle that
in a pooling environment.
The next most common problem is prepared statements breaking, which certainly
qualifies as a session-level feature.
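A toy model of why prepared statements break under transaction-level pooling (all classes here are illustrative, not any real pooler's API): PREPARE is session state, but the pooler may hand each transaction a different backend session.

```python
# Each backend session holds its own prepared statements; a
# transaction-mode pooler hands out backends per transaction, so a
# statement prepared in one transaction may be missing in the next.
class Backend:
    def __init__(self):
        self.prepared = {}

    def prepare(self, name, sql):
        self.prepared[name] = sql

    def execute(self, name):
        if name not in self.prepared:
            raise KeyError(f'prepared statement "{name}" does not exist')
        return f"ran: {self.prepared[name]}"


class TransactionPooler:
    """Hands out backends round-robin, one per transaction."""
    def __init__(self, backends):
        self.backends = backends
        self.i = 0

    def next_txn(self):
        b = self.backends[self.i % len(self.backends)]
        self.i += 1
        return b


pool = TransactionPooler([Backend(), Backend()])

txn1 = pool.next_txn()
txn1.prepare("q", "SELECT 1")

txn2 = pool.next_txn()        # a *different* backend session
try:
    txn2.execute("q")
    ok = True
except KeyError:
    ok = False                # fails: "q" lives only in txn1's session
```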
--
-- Christophe Pettus
x...@thebuild.com
son to
move it into core is to avoid the limitations that a non-core pooler has.
--
-- Christophe Pettus
x...@thebuild.com
ssed by this in contrib/ (with the same concern you noted), at
minimum as an example of how to do logging in other formats.
--
-- Christophe Pettus
x...@thebuild.com
library handles them just fine, for example. You have to handle newlines that
are part of a log message somehow; a newline in a PostgreSQL query, for
instance, still needs to be emitted to the logs.
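As one concrete illustration of that point, Python's standard `csv` module round-trips a field containing a newline without any special handling (the log line below is made up for the example):

```python
import csv
import io

# A PostgreSQL query containing a newline, written as one quoted CSV
# field: the csv module emits and re-reads it as a single logical
# record, even though it spans two physical lines.
log_line = ["2018-04-15 11:00:00 UTC", "ERROR", "syntax error",
            "SELECT 1\nFROM missing_table"]

buf = io.StringIO()
csv.writer(buf).writerow(log_line)

rows = list(csv.reader(io.StringIO(buf.getvalue())))
# One record comes back, with the newline preserved inside the field.
```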
--
-- Christophe Pettus
x...@thebuild.com
> On Apr 15, 2018, at 11:00, David Arnold <dar@xoe.solutions> wrote:
>
> CSV Logs: https://pastebin.com/uwfmRdU7
Is the issue that there are line breaks in things like lines 7-9?
--
-- Christophe Pettus
x...@thebuild.com
I see your note:
> I have reviewed some log samples and all DO contain some kind of multi line
> logs which are very uncomfortable to parse reliably in a log streamer.
... but I don't see any actual examples of those. Can you elaborate?
--
-- Christophe Pettus
x...@thebuild.com
> On Apr 15, 2018, at 10:07, Christophe Pettus <x...@thebuild.com> wrote:
>
>
>> On Apr 15, 2018, at 09:51, David Arnold <dar@xoe.solutions> wrote:
>>
>> 1. Throughout this vivid discussion a good portion of support has already
>> been manifest
on to this need.
I'm afraid I don't see that. While it's true that, as a standard, CSV is
relatively ill-defined, as a practical matter it is very easy to write code
that parses PostgreSQL's .csv format.
--
-- Christophe Pettus
x...@thebuild.com
table will always be created in the default
tablespace unless otherwise specified, but child tables are sufficiently
distinct from that case that I don't see it as a painful asymmetry.
--
-- Christophe Pettus
x...@thebuild.com
think there's any
> need to panic.
No reason to panic, yes. We can assume that if this were a very big, persistent
problem, it would be much more widely reported. It would, however, be good to
find a way to get the error surfaced back up to the client in a way that is not
just monitoring the kernel log.
e popular with people running on file systems or OSes that
don't have this issue. (Setting aside the daunting prospect of implementing
that.)
--
-- Christophe Pettus
x...@thebuild.com
llocatable memory.
I guess in theory they could swap them, but swapping out a file system
buffer in hopes that sometime in the future it could be properly written
doesn't seem very architecturally sound to me.
--
-- Christophe Pettus
x...@thebuild.com
pluggable storage coming along) to address this.
While the failure modes are more common, the solution (a PITR backup) is one
that an installation should have anyway against media failures.
--
-- Christophe Pettus
x...@thebuild.com
essed
disk block reporting an uncorrectable error when we finally get around to
reading it.
--
-- Christophe Pettus
x...@thebuild.com
wer in this bad
situation is (a) fix the error, then (b) replay from a checkpoint before the
error occurred, but it appears we can't even guarantee that a PostgreSQL
process will be the one to see the error.
--
-- Christophe Pettus
x...@thebuild.com
> On Apr 2, 2018, at 17:05, Andres Freund <and...@anarazel.de> wrote:
>
> Don't we pretty much already have agreement in that? And Craig is the main
> proponent of it?
For sure on the second sentence; the first was not clear to me.
--
-- Christophe Pettus
x...@thebuild.com
semantics. Given that, I think the PANIC option is the soundest one, as
unappetizing as it is.
--
-- Christophe Pettus
x...@thebuild.com
ntaining a data
warehouse from an OLTP system.
--
-- Christophe Pettus
x...@thebuild.com