Re: [ClusterLabs] Virtual Machines with USB Dongle

2015-10-08 Thread Kai Dupke
On 10/08/2015 01:09 PM, J. Echter wrote:
> I'm going for a network USB sharing hub.

So the dongle isn't seen as a single point of failure?

greetings
Kai Dupke
Senior Product Manager
Server Product Line
-- 
Sell not virtue to purchase wealth, nor liberty to purchase power.
Phone:  +49-(0)5102-9310828 Mail: kdu...@suse.com
Mobile: +49-(0)173-5876766  WWW:  www.suse.com

SUSE Linux GmbH - Maxfeldstr. 5 - 90409 Nuernberg (Germany)
GF:Felix Imendörffer,Jane Smithard,Graham Norton,HRB 21284 (AG Nürnberg)

___
Users mailing list: Users@clusterlabs.org
http://clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


Re: [ClusterLabs] group resources not grouped ?!?

2015-10-08 Thread Jorge Fábregas
On 10/08/2015 06:04 AM, zulucloud wrote:
>  are there any other ways?

Hi,

You might want to check external/vmware or external/vcenter.  I've never
used them, but apparently one is used to fence via the hypervisor (ESXi
itself) and the other through vCenter.
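
From the documentation, a vCenter-based stonith resource looks roughly
like this (an untested sketch in crm shell syntax; the server name,
credential store path, and node-to-guest mapping are all placeholders):

  primitive st-vcenter stonith:external/vcenter \
      params VI_SERVER="vcenter.example.com" \
             VI_CREDSTORE="/root/.vmware/vicredentials.xml" \
             HOSTLIST="node1=guest1;node2=guest2" \
      op monitor interval="60s"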

-- 
Jorge



Re: [ClusterLabs] Corosync+Pacemaker error during failover

2015-10-08 Thread Ken Gaillot
On 10/08/2015 10:16 AM, priyanka wrote:
> Hi,
> 
> We are trying to build an HA setup for our servers using the DRBD +
> Corosync + Pacemaker stack.
> 
> Attached is the configuration file for corosync/pacemaker and drbd.

A few things I noticed:

* Don't set become-primary-on in the DRBD configuration in a Pacemaker
cluster; Pacemaker should handle all promotions to primary.

* I'm no NFS expert, but why is res_exportfs_root cloned? Can both
servers export it at the same time? I would expect it to be in the group
before res_exportfs_export1.

* Your constraints need some adjustment. Partly it depends on the answer
to the previous question, but currently res_fs (via the group) is
ordered after res_exportfs_root, and I don't see how that could work. A
rough sketch of the usual pattern follows below.
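
Something along these lines is typical (crm shell syntax; rg_export is
your group name, while ms_drbd_export is an assumption about what your
DRBD master/slave resource is called):

  # promote DRBD on a node first, then start the export group there
  order o_drbd_before_rg inf: ms_drbd_export:promote rg_export:start
  # run the group only where DRBD is Master
  colocation c_rg_on_drbd inf: rg_export ms_drbd_export:Master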

> We are getting errors while testing this setup.
> 1. When we stop corosync on the master machine, say server1(lock), it is
> STONITHed. In this case the slave, server2(sher), is promoted to master.
>    But when server1(lock) reboots, res_exportfs_export1 is started on
> both servers; that resource goes into a failed state, followed by the
> servers going into an unclean state.
>    Then server1(lock) reboots again and server2(sher) is master, but in
> an unclean state. After server1(lock) comes up, server2(sher) is
> STONITHed and server1(lock) is slave (the only online node).
>    When server2(sher) comes up, both servers are slaves and the resource
> group (rg_export) is stopped. Then server2(sher) becomes master,
> server1(lock) is slave, and the resource group is started.
>    At this point the configuration becomes stable.
> 
> 
> Please find attached the logs (syslog) of server2(sher), from when it is
> promoted to master until it is first rebooted, when the exportfs
> resource goes into a failed state.
> 
> Please let us know if the configuration is appropriate. From the logs we
> could not figure out the exact reason for the resource failure.
> Your comments on this scenario would be very helpful.
> 
> Thanks,
> Priyanka




[ClusterLabs] [Announce] clufter-0.50.5 released

2015-10-08 Thread Jan Pokorný
I am happy to announce that clufter-0.50.5, a tool/library for
transforming/analyzing cluster configuration formats, has been
released and published (incl. signature using my 60BCBB4F5CD7F9EF key):


or alternative (original) location:



Changelog highlights:
- this is a maintenance release and the only visible change is that
  wrapping of long lines in cib2pcscmd command output actually got
  enabled (as it should have been since the beginning)
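
As a reminder, typical usage looks like this (paths are placeholders;
see --help for the authoritative options of your installed version):

  # dump the live CIB, then turn it into an equivalent sequence of pcs calls
  cibadmin --query > cib.xml
  clufter cib2pcscmd cib.xml > rebuild-cluster.sh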

 * * *

The public repository (notably master and next branches) is currently at

(rather than ).

Official, signed releases can be found at
 or, alternatively, at

(also beware, automatic archives by GitHub preserve a "dev structure").

Natively packaged in Fedora (python-clufter, clufter-cli).

Issues & suggestions can be reported at either of the following
(regardless of whether you use Fedora)
,

(rather than ).


Happy clustering/high-availing :)

-- 
Jan (Poki)




Re: [ClusterLabs] (no subject)

2015-10-08 Thread Digimer
On 08/10/15 09:03 PM, TaMen说我挑食 wrote:
> Corosync+Pacemaker error during failover

You need to ask a question if you want us to be able to help you.

-- 
Digimer
Papers and Projects: https://alteeve.ca/w/
What if the cure for cancer is trapped in the mind of a person without
access to education?



[ClusterLabs] dynamic adding nodes

2015-10-08 Thread Vijay Partha
How do I add a node dynamically using cman? Could someone help me out by
telling me the commands and how it is achieved?
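
So far I understand the rough approach to be: add the node to
cluster.conf, bump config_version, and push the change out. Something
like this, assuming the ricci/ccs tooling and placeholder hostnames:

  # add the new node to the cluster configuration
  ccs -h node1 --addnode node3
  # sync the updated cluster.conf to all nodes and activate it
  ccs -h node1 --sync --activate

Is that right, or is more needed?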

Thanks and regards,
P.Vijay


Re: [ClusterLabs] gfs2 crashes when i, e.g., dd to a lvm volume

2015-10-08 Thread J. Echter
On 08.10.2015 at 16:34, Bob Peterson wrote:
> - Original Message -
>>
>> On 08.10.2015 at 16:15, Digimer wrote:
>>> On 08/10/15 07:50 AM, J. Echter wrote:
 Hi,

 I have a strange issue on CentOS 6.5.

 If I install a new VM on node1, it works well.

 If I install a new VM on node2, it gets stuck.

 The same happens if I do a dd if=/dev/zero of=/dev/DATEN/vm-test (on node2).

 On node1 it works:

 dd if=/dev/zero of=vm-test
 dd: writing to "vm-test": No space left on device
 83886081+0 records in
 83886080+0 records out
 42949672960 bytes (43 GB) copied, 2338.15 s, 18.4 MB/s


 dmesg shows the following (while dd'ing on node2):

 INFO: task flush-253:18:9820 blocked for more than 120 seconds.
Not tainted 2.6.32-573.7.1.el6.x86_64 #1
 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>> 
 Any hint on fixing that?
>>> Every time I've seen this, it was because dlm was blocked. The most
>>> common cause of DLM blocking is a failed fence call. Do you have fencing
>>> configured *and* tested?
>>>
>>> If I were to guess, given the rather limited information you shared
>>> about your setup, the live migration consumed the network bandwidth,
>>> choking out corosync traffic, which caused the peer to be declared
>>> lost and a fence to be called, which failed and left locking hung
>>> (which is by design; better to hang than risk corruption).
>>>
>> Hi,
>>
>> fencing is configured and works.
>>
>> I re-checked it by typing
>>
>> echo c > /proc/sysrq-trigger
>>
>> into node2 console.
>>
>> The machine is fenced and comes back up. But the problem persists.
> Hi,
>
> Can you send any more information about the crash? What makes you think
> it's gfs2 and not some other kernel component? Do you get any messages
> on the console? If not, perhaps you can temporarily disable or delay fencing
> long enough to get console messages.
>
> Regards,
>
> Bob Peterson
> Red Hat File Systems
>
Hi,

I just realized that gfs2 is probably the wrong candidate.

I use clustered LVM (on DRBD), and I experience this on an LVM volume
that is not formatted with any filesystem.

What logs would you need to identify the cause?
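
In the meantime I can collect the following, assuming these are the
right places to look on a cman/clvmd stack like mine:

  dmesg | tail -n 100   # stack traces of the blocked tasks
  dlm_tool ls           # DLM lockspaces and their current state
  dlm_tool dump | tail  # recent DLM daemon activity
  group_tool ls         # fence/dlm/gfs group state (look for "wait" states)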
