Hi folks,
I'm assigned to system-test Pacemaker/Corosync on the KVM on System Z
platform with pacemaker-1.1.13-10 and corosync-2.3.4-7.
I have a cluster with 5 KVM hosts, and a total of 200
ocf:pacemaker:VirtualDomain resources defined to run
across the 5 cluster nodes (symmetric-cluster is true
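For reference, a single resource of this kind is typically defined along these lines with crmsh (a sketch only; the VM name, libvirt XML path, and operation timeouts are placeholders, not values from the tester's setup):

```shell
# Sketch: one of the VirtualDomain resources, defined via crmsh.
# "vm001" and the config path are placeholder values.
crm configure primitive vm001 ocf:pacemaker:VirtualDomain \
    params config="/etc/libvirt/qemu/vm001.xml" \
           hypervisor="qemu:///system" \
    op monitor interval="30s" timeout="90s" \
    meta allow-migrate="true"
```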
On 08/29/2016 03:27 PM, Vladislav Bogdanov wrote:
> Maybe #!/bin/ocfsh symlink provided by resource-agents package?
... and that's how lennartware ended up implementing its own syslog...
--
Dimitri Maziuk
Programmer/sysadmin
BioMagResBank, UW-Madison -- http://www.bmrb.wisc.edu
On August 29, 2016 11:07:39 PM GMT+03:00, Lars Ellenberg
wrote:
>On Mon, Aug 29, 2016 at 04:37:00PM +0200, Dejan Muhamedagic wrote:
>> Hi,
>>
>> On Mon, Aug 29, 2016 at 02:58:11PM +0200, Gabriele Bulfon wrote:
>> > I think the main issue is the usage of the "local"
On Mon, Aug 29, 2016 at 04:37:00PM +0200, Dejan Muhamedagic wrote:
> Hi,
>
> On Mon, Aug 29, 2016 at 02:58:11PM +0200, Gabriele Bulfon wrote:
> > I think the main issue is the usage of the "local" operator in ocf*.
> > I'm not an expert on this operator (never used it!), don't know how hard it is
>
I've got a number of scripts that are based on LSB-compliant scripts,
but which also accept arguments and values. For example, a script to manage
multiple virtual machines has a command line of the form:
vbox_init --vmname $VMNAME [-d|--debug] [start|stop|status|restart]
I'd like to manage
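A command line like that can be handled portably with a plain case loop; this is a sketch under assumed semantics, not the real vbox_init (the function and variable names here are illustrative):

```shell
# Sketch: argument parsing for an LSB-style script that also takes
# options, as in "vbox_init --vmname NAME [-d|--debug] ACTION".
parse_args() {
    vmname="" debug=0 action=""
    while [ $# -gt 0 ]; do
        case "$1" in
            --vmname)   vmname="$2"; shift 2 ;;
            -d|--debug) debug=1; shift ;;
            start|stop|status|restart) action="$1"; shift ;;
            *) echo "usage: $0 --vmname NAME [-d|--debug] {start|stop|status|restart}" >&2
               return 2 ;;
        esac
    done
    echo "action=$action vm=$vmname debug=$debug"
}

parse_args --vmname vm01 --debug start
# prints: action=start vm=vm01 debug=1
```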
On Mon, 29 Aug 2016 10:02:28 -0500
Ken Gaillot wrote:
> On 08/29/2016 09:43 AM, Dejan Muhamedagic wrote:
...
>> I doubt that we could do moderately complex shell scripts
>> without the capability of limiting the variables' scope and retaining
>> sanity at the same time.
>
>
OK, got it: I hadn't gracefully shut down Pacemaker on node2.
Now I restarted, everything came up, I stopped the pacemaker service on host2,
and host1 ended up with both IPs configured. ;)
But, though I understand that if I halt host2 without gracefully shutting down
Pacemaker it will not move IP2 to host1, I don't
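A graceful way to exercise the same failover without stopping the whole service is to put the node into standby (a sketch with crmsh; node names taken from the thread, behaviour assumed):

```shell
# Sketch: drain host2 cleanly so its resources (including IP2)
# move to host1, then bring it back into the cluster.
crm node standby host2
# ... verify both IPs are now configured on host1 ...
crm node online host2
```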
On 08/29/2016 05:18 PM, Gabriele Bulfon wrote:
> Hi,
>
> now that I have IPaddr working, I have a strange behaviour on my test
> setup of 2 nodes, here is my configuration:
>
> ===STONITH/FENCING===
>
> primitive xstorage1-stonith stonith:external/ssh-sonicle op monitor
> interval="25" timeout="25"
Hi,
now that I have IPaddr working, I have a strange behaviour on my test setup of
2 nodes; here is my configuration:
===STONITH/FENCING===
primitive xstorage1-stonith stonith:external/ssh-sonicle op monitor
interval="25" timeout="25" start-delay="25" params hostlist="xstorage1"
primitive
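With two nodes, each fencing resource is usually constrained away from the node it is meant to fence; a sketch with crmsh, assuming a matching xstorage2-stonith primitive exists (the constraint names here are illustrative):

```shell
# Sketch: keep each STONITH resource off its own target node.
crm configure location l-stonith-x1 xstorage1-stonith -inf: xstorage1
crm configure location l-stonith-x2 xstorage2-stonith -inf: xstorage2
```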
On 2016-08-29 04:06, Gabriele Bulfon wrote:
Thanks, though this does not work :)
Uhm... right. Too many languages, sorry: Perl's system() will call the
login shell, the C library's system() uses /bin/sh, and the exec() calls run
whatever the programmer tells them to. The point is none of them cares what
Hi,
On Mon, Aug 29, 2016 at 08:47:43AM -0500, Ken Gaillot wrote:
> On 08/29/2016 04:17 AM, Gabriele Bulfon wrote:
> > Hi Ken,
> >
> > I have been talking with the illumos guys about the shell problem.
> > They all agreed that ksh (and especially the ksh93 used in illumos) is
> > absolutely
Hi,
On Mon, Aug 29, 2016 at 02:58:11PM +0200, Gabriele Bulfon wrote:
> I think the main issue is the usage of the "local" operator in ocf*.
> I'm not an expert on this operator (never used it!), don't know how hard it is
> to replace it with a standard version.
Unfortunately, there's no command
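One commonly cited portable workaround (a sketch, not the fix adopted by resource-agents) is to define the function body with parentheses instead of braces, so it runs in a subshell and its variables cannot leak into the caller's scope:

```shell
# Sketch: subshell function bodies as a portable stand-in for "local".
# Works in Bourne-derived shells (dash, bash, ksh93) without "local".
counter="outer"

show_scope() (
    counter="inner"          # assignment stays inside the subshell
    echo "in function: $counter"
)

show_scope
echo "after call: $counter"
# prints:
#   in function: inner
#   after call: outer
```

The trade-off is that such a function cannot set any variable for its caller; results have to come back through stdout or the exit status.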
On 08/27/2016 09:15 PM, chenhj wrote:
> Hi all,
>
> When I use the following command to simulate network data loss at one
> member of my 3-node Pacemaker+Corosync cluster,
> it sometimes causes Pacemaker on another node to exit.
>
> tc qdisc add dev eth2 root netem loss 90%
>
> Is there any
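For anyone reproducing this, the netem rule can be added and removed as follows (requires root; eth2 is the interface named in the report):

```shell
# Inject 90% packet loss on eth2, as in the report.
tc qdisc add dev eth2 root netem loss 90%

# ... observe corosync membership and pacemaker logs ...

# Remove the rule to restore normal traffic.
tc qdisc del dev eth2 root netem
```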
On 08/29/2016 04:03 PM, Ken Gaillot wrote:
> On 08/29/2016 01:38 AM, Stefano Ruberti wrote:
>> Dear all,
>>
>> I have following situation and I need an advice from you:
>>
>> in my Active/Passive Cluster (Ubuntu 16.04, corosync + pacemaker, no pcs)
>>
>> Node_A      Node_B
>> Resource1
On 08/29/2016 01:38 AM, Stefano Ruberti wrote:
> Dear all,
>
> I have following situation and I need an advice from you:
>
> in my Active/Passive Cluster (Ubuntu 16.04, corosync + pacemaker, no pcs)
>
> Node_A      Node_B
> Resource1   Resource1
> Resource2   Resource2
> Resource3
On 08/29/2016 03:47 PM, Ken Gaillot wrote:
> On 08/29/2016 04:17 AM, Gabriele Bulfon wrote:
>> Hi Ken,
>>
>> I have been talking with the illumos guys about the shell problem.
>> They all agreed that ksh (and especially the ksh93 used in illumos) is
>> absolutely Bourne-compatible, and that the
On 08/29/2016 04:17 AM, Gabriele Bulfon wrote:
> Hi Ken,
>
> I have been talking with the illumos guys about the shell problem.
> They all agreed that ksh (and especially the ksh93 used in illumos) is
> absolutely Bourne-compatible, and that the "local" variables used in the
> ocf shells is not a
I think the main issue is the usage of the "local" operator in ocf*.
I'm not an expert on this operator (never used it!), and don't know how hard it
would be to replace it with a standard version.
Happy to contribute, if it's still the case.
Gabriele
Gabriele Bulfon writes:
> Hi Ken,
> I have been talking with the illumos guys about the shell problem.
> They all agreed that ksh (and especially the ksh93 used in illumos) is
> absolutely Bourne-compatible, and that the "local" variables used in the ocf
> shells are not a
Hi Ken,
I have been talking with the illumos guys about the shell problem.
They all agreed that ksh (and especially the ksh93 used in illumos) is
absolutely Bourne-compatible, and that the "local" variables used in the OCF
shell scripts are not Bourne syntax, but probably bash-specific.
This means
On 08/28/2016 04:15 AM, chenhj wrote:
> Hi all,
>
> When I use the following command to simulate network data loss at
> one member of my 3-node Pacemaker+Corosync cluster,
> it sometimes causes Pacemaker on another node to exit.
>
> tc qdisc add dev eth2 root netem loss 90%
>
> Is there any
Dear all,
I have the following situation and need your advice:
in my active/passive cluster (Ubuntu 16.04, corosync + pacemaker, no pcs):
Node_A      Node_B
Resource1   Resource1
Resource2   Resource2
Resource3   Resource3
rsyslogd    rsyslogd
1. is it possible to configure