ds to a monitoring timeout and
> resource restart etc
>
> Is there any way to ignore one timed out monitoring request and react only on
> two (or more) failed requests in a row?
>
> Best regards,
> Klecho
Not currently, but that is planned for a future version.
> Is there any way to check ssh keys?
I'd just log in once to the host as root from the cluster nodes, to make
sure it works, and accept the host key when asked.
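A minimal sketch of that check (the host name is a placeholder):

```shell
# From each cluster node, attempt a root login to the host once.
# This confirms the key works and lets you accept the host key
# interactively, so later non-interactive connections won't prompt.
ssh root@remote-host true

# Alternatively, pre-populate known_hosts non-interactively:
ssh-keyscan remote-host >> ~/.ssh/known_hosts
```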
>
> Sorry for all these questions.
>
>
> Thanks a lot
>
>
>
>
>
>
> On 1 Sep 2017 0:
o be done. last-lrm-refresh is just a
dummy property that the cluster uses to trigger that. It's set in
certain rare circumstances when a resource cleanup is done. You should
see a line in your logs like "Triggering a refresh after ... deleted ...
from the LRM". That might give
ing cib_modify operation for section
> status to all (origin=local/crmd/45)
> Aug 31 23:38:31 [1531] vdicnode01cib: info:
> cib_perform_op: Diff: --- 0.163.5 2
> Aug 31 23:38:31 [1531] vdicnode01cib: info:
> cib_perform_op: Diff: +++ 0.163.6 (null)
> A
h multiple agents on one level pacemaker always does
> on/off and no reboot.
> But for the higher level instance you can map the on-action to reboot
> and the off-action to metadata.
> While for the lower prio level you would just map the on-action to
> metadata (to make it
needs to be done about failures. It looks like the other
node was DC at this time, so its logs will be more relevant. It's fine
for this node not to have logs if the DC didn't ask it to do anything.
Logs with "pengine:" on the other node will show the decisions made.
>
>
> I am
ion-threshold set to INFINITY
>
> Thank you in advance.
> Regards,
> Paolo
>
> On Tue, Oct 3, 2017 at 7:12 AM, Ken Gaillot <kgail...@redhat.com>
> wrote:
> > On Mon, 2017-10-02 at 12:32 -0700, Paolo Zarpellon wrote:
> > > Hi,
> > > on
would be to make sure that your version of
crm_mon supports the mail-* arguments. It's a compile-time option, and
I don't know if Ubuntu enabled it. Simply do "man crm_mon", and if it
shows the mail-* options, then you have the capability.
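A quick way to run that check from a terminal (assuming only that crm_mon is on the PATH):

```shell
# If the mail-* options appear in the help output, this build of
# crm_mon was compiled with SMTP notification support.
crm_mon --help 2>&1 | grep -- 'mail' \
    || echo "no mail-* support compiled in"
```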
--
Ken Gaillot <kgail...@redhat.com>
simply a colocation constraint with a negative score.
For details, see http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/htm
l-single/Pacemaker_Explained/index.html#s-resource-colocation (and/or
the help for whatever higher-level tools you're using)
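With pcs, for example, such an anti-colocation constraint looks like this (resource names here are hypothetical):

```shell
# A -INFINITY colocation score keeps rsc_B off any node running rsc_A.
pcs constraint colocation add rsc_B with rsc_A -INFINITY

# A finite negative score expresses a preference rather than a hard
# rule: the cluster may still co-locate them if no other node is
# available.
pcs constraint colocation add rsc_C with rsc_A -5000
```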
>
> > &
t one example, it happens randomly with others resources
> and times.
>
> How can it be avoided?
>
> Regards.
>
>
> Roberto Muñoz
--
Ken Gaillot <kgail...@redhat.com>
_outdated_slaves=false
> binary="/usr/sbin/mysqld" test_user=test test_passwd=test \
> op start interval=0 timeout=60s \
> op stop interval=0 timeout=60s \
> op monitor interval=5s role=Master OCF_CHECK_LEVEL=1 \
> op monitor interval=2s rol
b_file_write_with_digest: Reading cluster configuration file
> /var/lib/pacemaker/cib/cib.kA8iQp (digest:
> /var/lib/pacemaker/cib/cib.Va05np)
> Oct 11 13:56:03 [3556] zfs-serv2cib: info:
> cib_process_ping:Reporting our current digest to zfs-serv2:
> 10209a8d
ome: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.
> pdf
> Bugs: http://bugs.clusterlabs.org
--
Ken Gaillot <kgail...@redhat.com>
___
Users mailing list: Users@clusterlabs.org
http://lists.cluste
is being inherited from the second resource
> > when it does not have any value.
> >
> > I must have something wrongly configuration but I can't really see
> > why there is this relationship...
> >
> > Gerard
> >
> > On Tue, Oct 17, 2017 at 3:35 PM, Ken Gaillot
he plumbing in
for it, so that lrmd can execute alert agents as the hacluster user.
All that would be needed would be a new resource meta-attribute and the
IPC API to use it. It's low priority due to a large backlog at the
moment, but we'd be happy to take a pull request for it. The resource
agent would obviousl
release next week. Any testing you can do is very welcome.
--
Ken Gaillot <kgail...@redhat.com>
that fails, nothing else can proceed.
>
> I disabled it for now, and
>
> pcs resource debug-start resource-zfs --full
>
> works fine: the pool is imported, filesystems are mounted and
> exported
> -- but the resources remain stopped no matter what.
>
> I don't see
(or 2.0.0) time
frame.
On Mon, 2017-09-25 at 18:53 -0500, Ken Gaillot wrote:
> Hi all,
>
> I thought I'd call attention to one of the most visible deprecations
> coming in 1.1.18: stonith-enabled. In order to deprecate that option,
> we have to provide an alternate way t
nt.
> I know that my English and my Pacemaker knowledge are not great, but
> could you please give me some explanations about that behavior that I
> misunderstand.
Not at all, this was a very clear and well-thought-out post :)
> If something is wrong with my post, just tell me (th
ed in 1.1.17. The bug only
affected cloned resources where one clone's name ended with the
other's.
FYI, CentOS 7.4 has 1.1.16, but that won't help this issue.
>
> On Wed, Oct 18, 2017 at 4:42 PM, Ken Gaillot <kgail...@redhat.com>
> wrote:
> > On Wed, 2017-10-18 at 14:25 +0200
e, in RHEL, TasksMax was backported as of RHEL 7.3, but the
default was changed to infinity.
--
Ken Gaillot <kgail...@redhat.com>
and appreciated.
Many thanks to all contributors of source code to this release,
including Andrew Beekhof, Aravind Kumar, Artur Novik, Bin Liu, Ferenc
Wágner, Helmut Grohne, Hideo Yamauchi, Igor Tsiglyar, Jan
Pokorný, Kazunori INOUE, Keisuke MORI, Ken Gaillot, Klaus Wenninger,
Nye Liu, Tomer Azran
.tld crmd: notice:
> > process_lrm_event: Operation ovndb_servers_monitor_0: ok
> > (node=node-1.domain.tld, call=185, rc=0, cib-update=88,
> confirmed=true)
> > <29>Nov 23 23:06:03 node-1 crmd[665251]: notice:
> process_lrm_event:
> > Operation
a particular value depending on whether it is started or
stopped, was added for cases like this. You can create a group of A
plus the attribute, then create a location constraint with a rule
allowing B to run only where the attribute is set as started. This way,
A is unaware of the relationship and ignores
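The pattern described above can be sketched with pcs and the ocf:pacemaker:attribute agent; resource and attribute names here are hypothetical:

```shell
# The attribute resource sets "rsc_A_state" to "started" or "stopped"
# on its node as it starts and stops.
pcs resource create rsc_A_attr ocf:pacemaker:attribute \
    name=rsc_A_state active_value=started inactive_value=stopped

# Group it with A so the attribute tracks wherever A runs.
pcs resource group add grp_A rsc_A rsc_A_attr

# Ban B from any node where the attribute is not "started". A itself
# carries no constraint referencing B, so A ignores the relationship.
pcs constraint location rsc_B rule score=-INFINITY \
    rsc_A_state ne started
```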
10)
> Inverse inc. :: Leaders behind SOGo (www.sogo.nu), PacketFence
> (www.packetfence.org) and Fingerbank (www.fingerbank.org)
>
> > On Nov 10, 2017, at 11:39, Ken Gaillot <kgail...@redhat.com> wrote:
> >
> > On Thu, 2017-11-09 at 20:27 -0500, Derek Wuelfr
cluster now
> wants
> to restart elsewhere? If that's the case, would it be possible to
> optionally limit startup fencing to when it's really needed?
>
> Thanks for any light you can shed!
There's no automatic mechanism to know that, but if you know befo
pcs and RHEL versions. Upgrading to RHEL 7.4
would get you recent versions of everything, though, so that would be
easiest if it's an option.
--
Ken Gaillot <kgail...@redhat.com>
stem1 vmgi
> order vmgi_after_libvirtd inf: cl_libvirtd vmgi
Those look good as far as ordering vmgi relative to the filesystem, but
I see below that it's vm_lomem1 that's left running. Is vmgi a group
containing vm_lomem1?
> On 20.11.2017 16:44:00 Ken Gaillot wrote:
> > On Fri, 2017-11-10 at 11:15
> timeout="60"/>
> name="demote" timeout="60"/>
>
>
>
> <op name="monitor" interval="20"
> timeout="30" id="ovndb-servers-monitor-20"/>
>
>
>
best idea I can think of
> > > would be
> > > to set all nodes except one in standby, and then shutdown
> > > pacemaker
> > > everywhere...
> > >
> >
> > What issues does it solve? Which node should be the one?
> >
> > How do you ge
by the pacemaker DC:
> > > <30>Nov 30 15:22:19 node-1 ovndb-servers(tst-ovndb)[2980860]:
> > > INFO: ovsdb_server_monitor
> > > <30>Nov 30 15:22:19 node-1 ovndb-servers(tst-ovndb)[2980860]:
> > > INFO: ovsdb_server_check_status
> > > <30
ou're using systemd, you can run "systemctl
disable --now sshd" on all your nodes, and add a systemd:sshd resource
to your cluster.
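Sketched out, that hand-over looks like this (the resource name is arbitrary):

```shell
# On every node: stop the service and remove it from systemd's
# control, so only the cluster manages it from now on.
systemctl disable --now sshd

# Then, on any one node, add it as a cluster resource:
pcs resource create cluster-sshd systemd:sshd op monitor interval=60s
```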
--
Ken Gaillot <kgail...@redhat.com>
On Wed, 2017-11-01 at 10:04 +0100, Ferenc Wágner wrote:
> Ken Gaillot <kgail...@redhat.com> writes:
>
> > When an operation completes, a history entry () is
> > added to
> > the pe-input file. If the agent supports reload, the entry will
> > include
> &
config’
>
> https://pastebin.com/1TUvZ4X9
>
> Cheers!
> -dw
>
> --
> Derek Wuelfrath
> dwuelfr...@inverse.ca :: +1.514.447.4918 (x110) :: +1.866.353.6153
> (x110)
> Inverse inc. :: Leaders behind SOGo (www.sogo.nu), PacketFence
> (www.packetfence.org) and Fingerban
> IBM Italia S.p.A. Registered office: Circonvallazione Idroscalo - 20090
> Segrate (MI) Share capital EUR 347,256,998.80 Tax code and Milan Companies
> Register no. 01442240030 - VAT no. 10914660153 Sole-shareholder company
> subject to the direction and coordination of
> Intern
outside cluster control, and re-detecting
resource status after a clean-up. So, reverting the behavior would not
be a good idea; the solution really is to use resource-discovery=never
when appropriate.
--
Ken Gaillot <kgail...@redhat.com>
g mount point".
What does the configuration for the resources and constraints look
like? Based on what you described, Pacemaker shouldn't try to stop the
Filesystem resource before successfully stopping the VM first.
--
Ken Gaillot <kgail...@redhat.com>
_
the cluster-glue configuration for such things? If
not, I'd prefer to drop this.
--
Ken Gaillot <kgail...@redhat.com>
gt; cluster-infrastructure=corosync \
> cluster-name=debian \
> no-quorum-policy=ignore \
> default-resource-stickiness=100 \
> stonith-enabled=false \
> last-lrm-refresh=1509546667
>
> So is it possible to check if the resource nfs is
s
>
That's odd, it should only happen if the cluster is not running, but
then the agent wouldn't have been called.
The CIB is one of the core daemons of pacemaker; it manages the cluster
configuration and status. If it's not running, the cluster can't do
anything.
Perhaps the CIB is crashing, o
time??
>
> thank you!
> regards
> Philipp
>
Good question, I didn't realize that. crm_simulate is a good tool for
exploring that sort of "why", but it's rather arcane. If you have a pe-
input file from the transition wit
before the final
release next week. Any testing you can do is very welcome.
--
Ken Gaillot <kgail...@redhat.com>
e-stickiness=100 \
> stonith-enabled=false \
> last-lrm-refresh=1507890181
>
> is that ok? the manual failover looks good.
>
> best regards
--
Ken Gaillot <kgail...@redhat.com>
): FAILED
> 192.168.2.177
> > Started: [ 192.168.2.178 192.168.2.179 ]
> > Clone Set: fm_mgt_replica [fm_mgt]
> > Started: [ 192.168.2.178 192.168.2.179 ]
> > Stopped: [ 192.168.2.177 ]
> > I am very confused. Is there somethi
tion constraints. If the base resource should only fail
over to the opposite group, that's trickier, but something roughly
similar would be to prefer one node in each group with an equal
positive score location constraint, and migration-threshold=1.
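A rough pcs sketch of that suggestion (node and resource names are hypothetical):

```shell
# Prefer one node in each group with an equal positive score...
pcs constraint location base_rsc prefers node-a1=1000
pcs constraint location base_rsc prefers node-b1=1000

# ...and fail over after a single failure.
pcs resource meta base_rsc migration-threshold=1
```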
--
Ken Gaillot <kgail...@redhat.com>
pacemaker daemonize itself more
"properly", but no one's had the time to address it.
--
Ken Gaillot <kgail...@redhat.com>
On Fri, 2017-11-03 at 08:24 +0100, Kristoffer Grönlund wrote:
> Ken Gaillot <kgail...@redhat.com> writes:
>
> > I decided to do another release candidate, because we had a large
> > number of changes since rc3. The fourth release candidate for
> > Pacemaker
>
u have a strong preference,
you can always build your favorite yourself (which is less of an option
if you are using an enterprise distro and want everything supported).
--
Ken Gaillot <kgail...@redhat.com>
ion about it.
crm_simulate is not very user-friendly, so if you can attach the pe-
input file, I can take a look at it. (The pe-input will be listed at
the end of the transition in the logs on the node that was DC at the
time; you'll see a bunch of "pengine:" messages including one that the
res
hancement, but it would be a big project, so I don't know what the
time frame would be.
--
Ken Gaillot <kgail...@redhat.com>
operating system is not managing the httpd process (via systemd,
upstart, lsb init, etc.).
> How can we achieve resource failover?
migration-threshold=1
>
> Further I will use this environment for testing the migration-
> threshold.
> Any suggestions regarding this also welcome.
ed start-delay operation
attribute, which you can put on the status operation to delay the first
monitor. That may give you the behavior you want.
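For example, with pcs (the resource name and intervals are hypothetical):

```shell
# Defer the first recurring monitor by 30s after start, which can
# avoid false failures while the service is still warming up.
pcs resource update my_rsc op monitor interval=10s start-delay=30s
```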
> At 2017-11-01 21:20:50, "Ken Gaillot" <kgail...@redhat.com> wrote:
> >On Sat, 2017-10-28 at 01:11 +0800, lkxjtu wrote:
> >
On Tue, 2017-12-05 at 17:43 +0100, Jehan-Guillaume de Rorthais wrote:
> On Tue, 05 Dec 2017 08:59:55 -0600
> Ken Gaillot <kgail...@redhat.com> wrote:
>
> > On Tue, 2017-12-05 at 14:47 +0100, Ulrich Windl wrote:
> > > > > > Tomas Jelinek <tojel...@redhat
able it in Pacemaker so Pacemaker starts everything again.
> For example if i change the global_conf of DRDB configuration, what
> actions I need to make on the Pacemaker in order to reload the
> resource
> with the updated values ?
>
> Sincerely ,
> Vagg
on this during the summit too, but I'm not sure if they
> led anywhere.
>
> Hope this feedback is useful!
--
Ken Gaillot <kgail...@redhat.com>
On Thu, 2017-12-07 at 17:15 +, Adam Spiers wrote:
> Ken Gaillot <kgail...@redhat.com> wrote:
> > On Thu, 2017-12-07 at 12:13 +, Adam Spiers wrote:
> > > https://gocardless.com/blog/incident-review-api-and-dashboard-out
> > > age-
> > > on-10th-o
1375]: warning: Action 246
> (ost0033-es04a_monitor_0) on es7700-3-srv failed (target: 7 vs. rc:
> 189): Error
> Sep 20 08:55:41 md12k-1-srv crmd[11375]: warning: Action 247
> (ost0034-es01a_monitor_0) on es7700-3-srv failed (target: 7 vs. rc:
> 189): Error
> Sep 20 08:55:41
quorum, because as soon
> as I
> increase number of votes back to 2, node immediately resets (due to
> no-quorum-policy=suicide).
>
> Confused ... is it intentional behavior or a bug?
The no-quorum-policy message above shouldn't prevent the cluster
On Mon, 2017-12-11 at 23:43 +0300, Vladislav Bogdanov wrote:
> 11.12.2017 23:06, Ken Gaillot wrote:
> [...]
> > > =
> > >
> > > * The first issue I found (and I expect that to be a reason for
> > > some
> > > other issues) is that
> >
he cluster will always prefer the newest Pacemaker Remote
connection to a remote node, even if an older (dead) connection has not
yet timed out.
--
Ken Gaillot <kgail...@redhat.com>
On Thu, 2017-11-30 at 11:58 +, Adam Spiers wrote:
> Ken Gaillot <kgail...@redhat.com> wrote:
> > On Wed, 2017-11-29 at 14:22 +, Adam Spiers wrote:
> > > Hi all,
> > >
> > > A colleague has been valiantly trying to help me belatedly learn
&g
On Fri, 2017-12-01 at 16:21 -0600, Ken Gaillot wrote:
> On Thu, 2017-11-30 at 11:58 +, Adam Spiers wrote:
> > Ken Gaillot <kgail...@redhat.com> wrote:
> > > On Wed, 2017-11-29 at 14:22 +, Adam Spiers wrote:
> > > > Hi all,
> > > >
> >
> > standing, the CIB history is effectively being forked. So how is
> > this
> > issue avoided?
>
> Quorum? "Cluster formation delay"?
>
> >
> > > The only way to bring up a cluster from being completely stopped
> > > is to
> > >
hat's happened.
Hopefully we can come up with a fix. If you want, you can file a bug
report at bugs.clusterlabs.org, to track the progress.
> 2) Is there any workaround other than "Do not start at the same
> time"?
>
> Best Regards
Before starting pacemaker, if /var/lib/pace
de Rorthais napsal(a):
> > > On Mon, 4 Dec 2017 12:31:06 +0100
> > > Tomas Jelinek <tojel...@redhat.com> wrote:
> > >
> > > > Dne 4.12.2017 v 10:36 Jehan-Guillaume de Rorthais napsal(a):
> > > > > On Fri, 01 Dec 2017 16:34:08 -0600
&g
Best regards
> Antony
> tel. +380669197533
> tel2. +380636564340
> Paypal http://paypal.me/Satskiy
> satski...@gmail.com
--
Ken Gaillot <kgail...@redhat.com>
it happen.
Still investigating a fix. A workaround is to assign some stickiness or
utilization to sv-fencer.
On Wed, 2017-10-11 at 14:01 +1000, Leon Steffens wrote:
> I've attached two files:
> 314 = after standby step
> 315 = after resource update
>
> On Wed, Oct 11, 2017 at 12:
ingle/Pace
> > maker_Explained/index.html#s-resource-ordering
>
> Ok but i see only how can i create a start order, but how can i
> create a different stop order?
>
> Best regards
> Stefan
symmetrical=false plus first-actio
> > > Basically what's occurring in my cluster is that the first rule
> > stops the
> > > Sync node from being promoted if the Master ever dies. The second
> > doesn't
> > > but I can't quite follow why.
> >
> > Getting a score of -inf means that
On Tue, 2017-10-31 at 09:33 +0100, Ferenc Wágner wrote:
> Ken Gaillot <kgail...@redhat.com> writes:
>
> > On Fri, 2017-10-20 at 15:52 +0200, Ferenc Wágner wrote:
> >
> > > Ken Gaillot <kgail...@redhat.com> writes:
> > >
> > >
On Mon, 2017-10-30 at 10:48 +0600, Sergey Korobitsin wrote:
> Ken Gaillot ☫ → To Cluster Labs - All topics related to open-source
> clustering welcomed @ Fri, Oct 27, 2017 10:38 -0500
>
> > > Hello,
> > > I'm trying to use https://github.com/marcan/pacemaker-exporter,
&
On Tue, 2017-10-31 at 18:44 +0100, Ferenc Wágner wrote:
> Ken Gaillot <kgail...@redhat.com> writes:
>
> > The pe-input is indeed entirely sufficient.
> >
> > I forgot to check why the reload was not possible in this case. It
> > turns out it is this:
> &
h) the cluster
may choose to move the resource back to that node. That's one reason
failures aren't automatically cleaned after a successful start
elsewhere. Also, keeping the failure allows an administrator to notice
that something went wrong, and manually investigate before allowing the
node to host the r
; Thanks
> Niraj Singh
Hi Niraj,
There are no "official" ansible playbooks for pacemaker and corosync
that I'm aware of, but various users have made some available online.
It's an area I'd like to see more attention given to, but unfortunately
I per
ck yet. I'm using Ubuntu 16
> so it may happen to just work better on your RHEL instances. If you
> have a different ESX version than 6.0, you may have better luck as
> well.
>
> Best wishes,
--
Ken Gaillot <kgail...@redhat.com>
ht start timing out, causing unnecessary
recovery.
--
Ken Gaillot <kgail...@redhat.com>
> throttling is to keep Pacemaker from overloading the nodes such that
> actions might start timing out, causing unnecessary recovery.
>
>
> lkxjtu
> Email: lkx...@163.com
> Signature customized by NetEase Mail Master
>
>
>
--
Ken Gaillot <kgail...@redhat.com>
and appreciated.
Many thanks to all contributors of source code to this release,
including Gao,Yan, Hideo Yamauchi, Jan Pokorný, and Ken Gaillot.
--
Ken Gaillot <kgail...@redhat.com>
6
> Skype: georgemelikov
>
> Best regards,
> Georgy Melikov,
> m...@gmelikov.ru
> Mob: +7 9152783936
> Skype: georgemelikov
--
Ken Gaillot <kgail...@redhat.com>
ng its own fence
device (which would be almost pointless). There was a distant time when
such a constraint was a requirement for fencing to work, but now it's
just for monitoring.
I'm not familiar with VMware fencing, so I can't comment on the
specifics of the agents ...
>
> Thank you in adva
after enabling eth0 of node1, error from previous procedure
> still exist.
> 4. Got an additional error, I have two errors now
> 5. VirtualIP resource doesn't start
>
>
> Regards,
>
> imnotarobot
--
Ken Gaillot <kgail...@redhat.com>
On Thu, 2018-05-24 at 16:14 +0200, Klaus Wenninger wrote:
> On 05/24/2018 04:03 PM, Ken Gaillot wrote:
> > On Thu, 2018-05-24 at 06:47 -0400, Jason Gauthier wrote:
> > > On Thu, May 24, 2018 at 12:19 AM, Andrei Borzenkov <arvidjaar@gma
> > > il.c
> > > om&
> > > Users mailing list: Users@clusterlabs.org
> > > https://lists.clusterlabs.org/mailman/listinfo/users
> > >
> > > Project Home: http://www.clusterlabs.org
> > > Getting started: http://www.clusterlabs.org/doc/Cluster_from
> fence_sanbox2 - Fence agent for QLogic SANBox2 FC switches
> fence_sbd - Fence agent for sbd
> fence_scsi - Fence agent for SCSI persistent reservation
> fence_tripplite_snmp - Fence agent for APC, Tripplite PDU over SNMP
> fence_vbox - Fence agent for VirtualBox
> fence_virsh - Fenc
> Here is the output of `pcs status` before powering off the primary:
>
> --
> Online: [ d-gp2-dbpg0-1 d-gp2-dbpg0-2 d-gp2-dbpg0-3 ]
>
> Full list of resources:
>
> vfencing (stonith:external/vcenter): Started d-gp2-dbpg0-1
> postgresql-master-vip
n_timeout=60
> power_wait=3 op monitor interval=60s
>
> This results in the following error:
>
> Error: Unable to create resource 'stonith:fence_vmware_soap', it is
> not installed on this system (use --force to override)
>
> In the output of `pcs stonith list`, I see:
&g
ra pengine: info: native_stop_constraints:
> > > cluster_fs_stop_0 is implicit after clusterb is fenced
> > > clustera pengine: info: native_stop_constraints:
> > > cluster_vip_stop_0 is implicit after clusterb is fenced
> > > clustera pengine:
On Mon, 2018-06-18 at 10:10 -0400, Jason Gauthier wrote:
> On Mon, Jun 18, 2018 at 9:55 AM Ken Gaillot
> wrote:
> >
> > On Fri, 2018-06-15 at 21:39 -0400, Jason Gauthier wrote:
> > > Greetings,
> > >
> > > Previously, I was using fiber channe
ttps://paste.debian.net/hidden/9376add7/
>
> best regards
> Stefan
As of the end of that log file, the cluster does intend to start the
resources:
Jun 15 14:29:11 [5623] zfs-serv3pengine: notice: LogActions:
Start nfs-server (zfs-serv3)
Jun 15 14:29:11 [5623] zfs-
pha beta"
> \
> op monitor interval=2h \
> meta target-role=Stoppedprimitive st_libvirt
> stonith:external/libvirt \
> params hypervisor_uri="qemu:///system" hostlist="alpha beta"
> \
> op monitor interval=2h
>
>
eam: Cannot write NULL to
> /var/lib/pacemaker/cib/shadow.20008
> Could not create '/var/lib/pacemaker/cib/shadow.20008': Success
>
> Could anyone help me how to read those messages and what's going on
> my server?
>
> Thanks a lot..
>
>
> On Fri, Jun 8,
Hi,
>
> additional remark:
>
> With some tweaks I made my cluster start two resources (i.e. IP1 and
> IP2) at the same time. But it takes about 4 seconds to that the
> cluster starts the next resources (i.e. IP3 and IP4).
>
> Did anybody see this behaviour before?
>
&
> er_Explained/ap-upgrade.html
>
> What I want to do is first migrate pacemaker manually and then
> automate it with some scripts.
>
> According to what Ken Gaillot said:
>
> "Rolling upgrades are always supported within the same major number
> line
> (i.e. 1.any
le
>
> I tested it several times, and the results were the same. Why is
> the resource not scheduled when the failure-timeout setting is too short?
> And what does
>
> it have to do with the time-consuming stop of another resource? Is
> this a bug?
>
> My pacemaker ve
> > Good luck anyway :)
> >
> > --
> > Jehan-Guillaume de Rorthais
> > Dalibo
>
> ___
> Users mailing list: Users@clusterlabs.org
> https://lists.clusterlabs.org/mailman/listinfo/users
>
> Project Home: ht
; [1] https://clusterlabs.org/pacemaker/doc/en-US/Pacemaker/1.1/html-si
> ngle/Pacemaker_Explained/index.html#_reusing_resource_definitions
> [2] https://clusterlabs.org/pacemaker/doc/en-US/Pacemaker/1.1/html-si
> ngle/Pacemaker_Explained/index.html#s-reusing-config-elemen
e devices and
Pacemaker Remote connection resources).
* Allow a monitor to be cancelled when its resource is unmanaged.
The only known issue remaining to be resolved before final release is
some tweaking of the transform of pre-2.0 configurations after an
upgrade.
--
Ken Gaillot
at would not be treated like error
> > (causing all sorts of fatal consequences) but still evaluated for
> > dependencies (i.e. dependent resources would not be started). That
> > would
> > be ideal for such case.
I'm not clear what such a result would mean. Is the goal to s
ist
> Id Name State
>
> 9 sl-gate-01 running
>
>
> [root@n03 mmike]# LANG=C virsh list
> Id Name State
> -
> > > Regards,
> > > imnotarobot
> >
> > Your configuration is correct, but keep in mind scores of all kinds
> > will be added together to determine where the final placement is.
> >
> > In this case, I'd check that you don't have any constraints with