>
> On Jul 15, 2019 4:53:47 PM, Tiemen Ruiten wrote:
>
> You could just export the variables in .pgsql_profile in the home
> directory of the user running PostgreSQL (usually /var/lib/pgsql). This is
> what I have in there for oracle_fdw:
>
> export PATH=$PATH:/usr/pgsql-11/bin
You could just export the variables in .pgsql_profile in the home directory
of the user running PostgreSQL (usually /var/lib/pgsql). This is what I
have in there for oracle_fdw:
export PATH=$PATH:/usr/pgsql-11/bin
> ORACLE_HOME=/usr/lib/oracle/12.1/client64
> export
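The exports above can be sketched as a complete ~/.pgsql_profile. This is a sketch, not the poster's exact file: the PATH and ORACLE_HOME values match the thread, but the LD_LIBRARY_PATH line is an assumption that oracle_fdw setups often need.

```shell
# Sketch of ~/.pgsql_profile for the postgres OS user (home directory is
# usually /var/lib/pgsql on RHEL/CentOS). PATH and ORACLE_HOME follow the
# thread; the LD_LIBRARY_PATH line is an assumed addition for oracle_fdw.
export PATH=$PATH:/usr/pgsql-11/bin
export ORACLE_HOME=/usr/lib/oracle/12.1/client64
export LD_LIBRARY_PATH=$ORACLE_HOME/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
```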
On Wed, Jul 10, 2019 at 2:47 PM Jehan-Guillaume de Rorthais wrote:
> >
> > I double-checked monitoring data: there was approximately one minute of
> > replication lag on one slave and two minutes of replication lag on the
> > other slave when the original issue occurred.
>
> what lag? current
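On the lag question above, one way to read current per-standby replication lag from the primary is the pg_stat_replication view. This is a sketch, not a query from the thread; the columns shown exist in PostgreSQL 10 and later.

```sql
-- Run on the primary: per-standby lag in bytes behind the flushed WAL
-- position, plus the server-computed replay delay (PostgreSQL 10+).
SELECT application_name,
       pg_wal_lsn_diff(pg_current_wal_lsn(), flush_lsn) AS flush_lag_bytes,
       replay_lag
FROM pg_stat_replication;
```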
On Tue, Jul 9, 2019 at 4:21 PM Jehan-Guillaume de Rorthais wrote:
> On Tue, 9 Jul 2019 13:22:06 +0200
> Tiemen Ruiten wrote:
>
> > On Mon, Jul 8, 2019 at 10:01 PM Jehan-Guillaume de Rorthais <
> j...@dalibo.com>
> ...
> > > I dig in xlog.c today. Maybe
On Mon, Jul 8, 2019 at 10:01 PM Jehan-Guillaume de Rorthais wrote:
> I should have stepped into this thread sooner, sorry :)
>
Really appreciate all the assistance so far.
> The real problem is not how many transactions you will lose during
> failover, but how we can choose the best standby to elect. This
On Mon, Jul 8, 2019 at 4:59 PM Jehan-Guillaume de Rorthais wrote:
> On Mon, 8 Jul 2019 13:56:49 +0200
> Tiemen Ruiten wrote:
>
> > Thank you for the clear explanation and advice.
> >
> > Hardware is adequate: 8x SSD and 20 cores per node, but I should note
>
> 200
> Tiemen Ruiten wrote:
>
> > On Fri, Jul 5, 2019 at 5:09 PM Jehan-Guillaume de Rorthais <
> j...@dalibo.com>
> > wrote:
> >
> > > It seems to me the problem comes from here:
> > >
> > > Jul 03 19:31:38 [30151] ph-sql-03.prod.ams.i.
On Fri, Jul 5, 2019 at 5:09 PM Jehan-Guillaume de Rorthais wrote:
> It seems to me the problem comes from here:
>
> Jul 03 19:31:38 [30151] ph-sql-03.prod.ams.i.rdmedia.com crmd: notice:
> te_rsc_command: Initiating notify operation pgsqld_pre_notify_promote_0 on
that would mean 120s for demote timeout? Or 30s for start/stop?
On Fri, 14 Jun 2019 at 15:55, Jehan-Guillaume de Rorthais wrote:
> On Fri, 14 Jun 2019 13:18:09 +0200
> Tiemen Ruiten wrote:
>
> > Thank you, useful advice!
> >
> > Logs are attached, they cover th
I've crossposted the question about checkpoints taking a long time to
pgsql-general as well :)
On Fri, 14 Jun 2019 at 15:05, Tiemen Ruiten wrote:
> Current size of the database is around 600GB uncompressed (LZ4 compression
> is enabled on the ZFS dataset).
>
> On Fri, 14 Jun 2
Current size of the database is around 600GB uncompressed (LZ4 compression
is enabled on the ZFS dataset).
On Fri, 14 Jun 2019 at 14:59, Tiemen Ruiten wrote:
> Hi, yes I'm also puzzled by this. The cluster is certainly not
> underpowered, running on baremetal with 8x SSD in ZFS
checkpoint_completion_target = 0.9
I wonder if checkpoint_timeout should be lowered?
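For reference, the settings under discussion live in postgresql.conf. The values below are the stock defaults plus the poster's completion target, shown only to illustrate which knobs interact; they are not recommendations from this thread.

```
# postgresql.conf -- illustrative values, not thread recommendations
checkpoint_timeout = 5min            # max time between automatic checkpoints (default 5min)
checkpoint_completion_target = 0.9   # spread checkpoint I/O over 90% of the interval
max_wal_size = 1GB                   # WAL volume that also triggers a checkpoint (default 1GB)
```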
On Fri, 14 Jun 2019 at 14:49, Adrien Nayrat wrote:
> On 6/14/19 12:27 PM, Tiemen Ruiten wrote:
> > This took longer than the configured timeout of 60s (checkpoint hadn't
> completed
> > yet) and t
Checkpoints can take up to 15 minutes to complete on this cluster. So is 20
minutes reasonable? Any other operations I should increase the timeouts for?
Why didn't pacemaker elect and promote one of the other nodes?
--
Tiemen Ruiten
Infrastructure Engineer
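The timeout questions in the thread (120s for demote? 30s for start/stop? checkpoints that can run 15 minutes) correspond to per-operation timeouts on the Pacemaker resource. A hedged sketch, assuming a PAF resource named pgsqld as in the logs; the values are placeholders to adjust, and each timeout must exceed the worst observed duration of that operation.

```shell
# Illustrative only: per-operation timeouts on a resource named pgsqld.
# A stop or demote that has to wait on a 15-minute checkpoint would need
# a timeout well above 900s, not 60s.
pcs resource update pgsqld \
    op start timeout=60s \
    op stop timeout=60s \
    op promote timeout=30s \
    op demote timeout=120s
```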
won't be possible to
>> master/slave systemd resources as it is not supported anyway.
>>
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1493416
>
>
>
>> Regards,
>> Tomas
>>
>> [1]: http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html-sin
pacemaker-libs-1.1.16-12.el7_4.2.x86_64
pacemaker-cluster-libs-1.1.16-12.el7_4.2.x86_64
pacemaker-1.1.16-12.el7_4.2.x86_64
pacemaker-cli-1.1.16-12.el7_4.2.x86_64
corosynclib-2.4.0-9.el7_4.2.x86_64
corosync-2.4.0-9.el7_4.2.x86_64
Am I doing something wrong?
--
Tiemen Ruiten
Systems Engineer