ALL,
I have set up an active/passive cluster using Pacemaker, CLVM, and GFS2 for
Oracle 12c. I can fail over and the system behaves as expected. When I try to
run a second instance active/active, however, it never comes up and just falls
over. This outdated documentation states that Oracle
ClusterLabs is proud to announce the latest release of the Pacemaker
cluster resource manager, version 1.1.17. The source code is available at:
https://github.com/ClusterLabs/pacemaker/releases/tag/Pacemaker-1.1.17
The most significant enhancements in this release are:
* A new "bundle" resource
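The announcement is truncated here. For context, a bundle wraps a container (Docker, at the time of 1.1.17) together with its network and storage into a single resource. A minimal sketch of a bundle definition in the CIB, with an image name and IP range that are purely illustrative (adapted from the style of the examples in Pacemaker Explained):

```xml
<bundle id="httpd-bundle">
  <!-- run three replicas of an illustrative container image -->
  <docker image="pcmk:httpd" replicas="3"/>
  <!-- each replica is assigned an IP starting from this (example) address -->
  <network ip-range-start="192.168.122.131" host-netmask="24"/>
  <!-- the resource that runs inside each container -->
  <primitive id="httpd" class="ocf" provider="heartbeat" type="apache"/>
</bundle>
```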
On 07/06/2017 10:29 AM, Ken Gaillot wrote:
> On 07/06/2017 10:13 AM, ArekW wrote:
>> Hi,
>>
>> It seems that the fence_vbox agent is running, but there are errors in
>> the logs every few minutes like:
>>
>> Jul 6 12:51:12 nfsnode1 fence_vbox: Unable to connect/login to fencing device
>> Jul 6 12:51:13 nfsnode1 stonith-ng[7899]: warning: fence_vbox[30220]
>> stderr: [ Unable to connect/login to fencing
On 07/06/2017 04:48 PM, Ken Gaillot wrote:
If node2 is getting the notification of its own fencing, it wasn't
successfully fenced. Successful fencing would render it incapacitated
(powered down, or at least cut off from the network and any shared
resources).

On 07/06/2017 09:26 AM, Klaus Wenninger wrote:
> On 07/06/2017 04:20 PM, Cesar Hernandez wrote:

On 07/06/2017 04:20 PM, Cesar Hernandez wrote:
Maybe I don't understand you, or maybe you don't understand me... ;)
On 07/06/2017 08:54 AM, Cesar Hernandez wrote:
>> So, the above log means that node1 decided that node2 needed to be
>> fenced, requested fencing of node2, and received a successful result for
>> the fencing, and yet node2 was not killed.
>>
>> Your fence agent should not return success
On 07/06/2017 07:24 AM, pradeep s wrote:
> Team,
>
> I am working on configuring a cluster environment for an NFS share using
> pacemaker. Below are the resources I have configured.
On 07/06/2017 03:51 AM, mlb_1 wrote:
> Thanks for your solution.
>
> Can anybody reply to this topic officially?

Digimer is correct: the Red Hat and SUSE limits are their own chosen
limits for technical support, not enforced by the code. There are no
hard limits in the code, but practically
On 07/04/2017 08:28 AM, Cesar Hernandez wrote:
>> Agreed, I don't think it's multicast vs unicast.
>>
>> I can't see from this what's going wrong. Possibly node1 is trying to
>> re-fence node2 when it comes back. Check that the fencing resources are
>> configured correctly, and check whether
Team,
I am working on configuring a cluster environment for an NFS share using
pacemaker. Below are the resources I have configured:

 Group: nfsgroup
  Resource: my_lvm (class=ocf provider=heartbeat type=LVM)
   Attributes: volgrpname=my_vg exclusive=true
   Operations: start interval=0s timeout=30
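A group like the one quoted above is typically built with pcs. A sketch of the commands that would produce it, where the follow-on Filesystem/nfsserver members and all device and directory paths are illustrative assumptions, not part of the original post:

```shell
# Recreate the quoted LVM resource inside an "nfsgroup" group.
pcs resource create my_lvm ocf:heartbeat:LVM \
    volgrpname=my_vg exclusive=true \
    op start interval=0s timeout=30 --group nfsgroup

# Typical follow-on members of such an NFS group (names and paths assumed):
pcs resource create nfsshare ocf:heartbeat:Filesystem \
    device=/dev/my_vg/my_lv directory=/nfsshare fstype=ext4 --group nfsgroup
pcs resource create nfs-daemon ocf:heartbeat:nfsserver \
    nfs_shared_infodir=/nfsshare/nfsinfo --group nfsgroup
```

Putting the resources in one group makes them start in order on the same node, which is what an active/passive NFS setup needs.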
On 07/06/2017 02:21 AM, Ulrich Windl wrote:
At 2017-07-06 11:45:05, "Digimer" wrote:
> I'm not employed by Red Hat, so I can't speak authoritatively.
>
> My understanding, however, is that they do not distinguish, as corosync
> on its own doesn't
Hi,
I would like to change the default port for web access - currently this is
hardcoded to 2224. Are there any plans to make this configurable in a config
file so it can be changed more easily?
Thank you!
Regards,
Philipp
I don't know what happens if the SSL certificate expires, but looking in
/usr/lib/pcsd/ssl.rb I found this function:

def generate_cert_key_pair(server_name)
  name = "/C=US/ST=MN/L=Minneapolis/O=pcsd/OU=pcsd/CN=#{server_name}"
  ca = OpenSSL::X509::Name.parse(name)
  key = OpenSSL::PKey::RSA.new(2048)
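The function is truncated here. Independent of pcsd's internals, an expired pair can be inspected or regenerated by hand with the openssl CLI. A sketch with the same subject fields as the Ruby code; the output filenames are illustrative (pcsd typically keeps its pair under /var/lib/pcsd/ on RHEL-family systems):

```shell
# Generate a self-signed cert and 2048-bit RSA key, mirroring generate_cert_key_pair.
openssl req -x509 -nodes -newkey rsa:2048 \
  -subj "/C=US/ST=MN/L=Minneapolis/O=pcsd/OU=pcsd/CN=$(hostname)" \
  -days 3650 \
  -keyout pcsd.key -out pcsd.crt

# Check the expiry date of the generated (or an existing) certificate:
openssl x509 -in pcsd.crt -noout -enddate
```

After replacing the files, restart pcsd so it picks up the new pair.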
On 05/07/17 14:55, Ken Gaillot wrote:
> Wow! I'm looking forward to the September summit talk.
>
Me too! Congratulations on the release :)
Chrissie
> On 07/05/2017 01:52 AM, Digimer wrote:
>> Hi all,
>>
>> I suspect by now, many of you here have heard me talk about the Anvil!
>>
> On 06/29/2017 04:42 AM, philipp.achmuel...@arz.at wrote:
>> Hi,
>>
>> In order to reboot a cluster node I would like to set the node to standby
>> first, so a
> I don't have answers, but questions:
> Assuming node1 was DC when stopped: will its CIB still record it as DC after
> being stopped?
> Obviously node1 cannot know about any changes node2 did. And node1, when
> started, will find that node2 is unexpectedly down, so it will fence it to be
>
> AFAIK that's not proper fencing. SunOS once had a "fasthalt" command. In
> Linux "halt -nf" might do a similar thing, or maybe trigger a reboot via
> sysrq (echo b > /proc/sysrq-trigger).
>
> Fencing is everything but a clean shutdown. The specific problem is that
> shutdown may be
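The sysrq mechanism mentioned above can be exercised as follows (requires root; note that "b" reboots immediately without syncing or unmounting, which is exactly why it resembles fencing, so only the harmless triggers should be tested casually):

```shell
# Enable the magic SysRq interface (1 = all functions allowed).
echo 1 > /proc/sys/kernel/sysrq

# Emergency sync (safe): flush dirty buffers to disk.
echo s > /proc/sysrq-trigger

# Immediate reboot, the "fencing-like" action from the mail above.
# Left commented out because it reboots the machine at once:
# echo b > /proc/sysrq-trigger
```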
>> Thanks. But I think it is not a good idea to disable startup fencing: I have
>> shared disks (DRBD) and stonith is very important in this scenario.
>
> AFAIK, DRBD is not considered to be a shared disk; it's a replicated disk at
> best.
>
Of course I know that. Only 1 of the nodes can