Re: [ClusterLabs] (no subject) --> JUNK email

2015-10-09 Thread roger zhou

TaMen说我挑食 <974120...@qq.com>,

You'd better compose your email subject with a word like JUNK or TEST, to 
avoid misleading people here.


Digimer,

You are really nice! I suspect this user just sent a junk email to 
confirm that the subscription is not in digest format ;)


Regards,
Roger


On 10/09/2015 09:38 AM, Digimer wrote:

On 08/10/15 09:03 PM, TaMen说我挑食 wrote:

Corosync+Pacemaker error during failover

You need to ask a question if you want us to be able to help you.



--
regards,
Zhou, ZhiQiang (Roger) (office#:+86 10 65339283, cellphone# +86 13391978086)


___
Users mailing list: Users@clusterlabs.org
http://clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


Re: [ClusterLabs] Antw: Monitoring Op for LVM - Excessive Logging

2015-10-09 Thread Jorge Fábregas
On 10/09/2015 09:06 AM, Ulrich Windl wrote:
> Did you try daemon_options="-d0"? (in clvmd resource)

I've just found this:

http://pacemaker.oss.clusterlabs.narkive.com/C5BaFych/ocf-lvm2-clvmd-resource-agent

...so apparently SUSE changed the resource agent's default of "-d0" to
"-d2" (from SP2 to SP3).  This is still the case in SP4.

Can anyone from SUSE please explain why the "clvmd" resource agent
comes with such a noisy default out of the box?  I'm curious.

Thanks,
Jorge
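
A minimal sketch of the explicit override, assuming the ocf:lvm2:clvmd
agent named in the link above (check "crm ra info ocf:lvm2:clvmd" for
the exact parameter name before relying on it):

  crm configure primitive clvmd ocf:lvm2:clvmd \
      params daemon_options="-d0" \
      op monitor interval="30s"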



Re: [ClusterLabs] Antw: group resources not grouped ?!?

2015-10-09 Thread zulucloud



On 10/08/2015 10:37 AM, Dejan Muhamedagic wrote:

On Wed, Oct 07, 2015 at 05:13:40PM +0200, zulucloud wrote:




Well, they're quite verbose and a little bit cryptic... ;) I didn't
find anything there that could enlighten me...


If you're using crmsh, you can at least let history filter out
the stuff you don't want to look at. There's an introduction on
the feature here:

http://crmsh.github.io/history-guide/

Thanks,

Dejan



Hi Dejan,

that looks very good, thank you. I need to get the source and compile... 
Do you know if there are any usage restrictions if the rest of the 
cluster stack software is quite old (pacemaker 1.0.9.1, corosync 1.2.1-4)?


thx
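
For reference, a filtered history session along the lines of that guide
might look like this; a sketch only, since the available subcommands
vary between crmsh versions:

  # inside the crm shell's history sublevel
  crm history
  limit "Oct 09 09:00" "Oct 09 10:00"  # narrow to a time window
  exclude "pengine|stonith-ng"         # hide log sources you don't need
  resource myGroup                     # events for one resource (name made up)
  log                                  # show the filtered cluster log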



Re: [ClusterLabs] gfs2 crashes when i, e.g., dd to a lvm volume

2015-10-09 Thread Lars Ellenberg
On Thu, Oct 08, 2015 at 01:50:50PM +0200, J. Echter wrote:
> Hi,
> 
> I have a strange issue on CentOS 6.5.
> 
> If I install a new vm on node1 it works well.
> 
> If I install a new vm on node2 it gets stuck.
> 
> Same if I do a dd if=/dev/zero of=/dev/DATEN/vm-test (on node2)
> 
> On node1 it works:
> 
> dd if=/dev/zero of=vm-test
> writing to 'vm-test': No space left on device
> 83886081+0 records in
> 83886080+0 records out
> 42949672960 bytes (43 GB) copied, 2338.15 s, 18.4 MB/s
> 
> 
> dmesg shows the following (while dd'ing on node2):
> 
> INFO: task flush-253:18:9820 blocked for more than 120 seconds.
>   Not tainted 2.6.32-573.7.1.el6.x86_64 #1
> "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> flush-253:18  D  0  9820  2 0x0080
>  8805f9bc3490 0046 8805f9bc3458 8805f9bc3454
>  880620768c00 88062fc23080 006fc80f5a1e 8800282959c0
>  02aa 00010002be11 8806241d65f8 8805f9bc3fd8
> Call Trace:
>  [] drbd_al_begin_io+0x1a5/0x240 [drbd]

What DRBD version is this?
What does the IO stack look like?

DRBD seems to block and wait for something in its make request function.
So maybe its backend is blocked for some reason?

You'd see this for example on a thin provisioned backend that is
configured to block when out of physical space...

>  [] ? bio_alloc_bioset+0x5b/0xf0
>  [] ? autoremove_wake_function+0x0/0x40
>  [] drbd_make_request_common+0xf5c/0x14a0 [drbd]
>  [] ? mempool_alloc+0x63/0x140
>  [] ? bio_alloc_bioset+0x5b/0xf0
>  [] ? __map_bio+0xad/0x140 [dm_mod]
>  [] drbd_make_request+0x531/0x870 [drbd]
>  [] ? throtl_find_tg+0x46/0x60
>  [] ? blk_throtl_bio+0x1ea/0x5f0
>  [] ? blk_queue_bio+0x494/0x610
>  [] ? dm_make_request+0x122/0x180 [dm_mod]
>  [] generic_make_request+0x240/0x5a0
>  [] ? mempool_alloc_slab+0x15/0x20
>  [] ? mempool_alloc+0x63/0x140
>  [] ? apic_timer_interrupt+0xe/0x20
>  [] submit_bio+0x70/0x120

-- 
: Lars Ellenberg
: http://www.LINBIT.com | Your Way to High Availability
: DRBD, Linux-HA  and  Pacemaker support and consulting

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.
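
The usual way to gather that information on the affected node, sketched
under the assumption of a DRBD 8.x-era stack as shipped with CentOS 6:

  cat /proc/drbd      # DRBD version and per-resource connection/disk state
  drbdadm dump all    # effective configuration, including backing devices
  lsblk               # block device tree: what sits below and above DRBD
  dmsetup table       # device-mapper targets; thin pools show up as "thin"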



Re: [ClusterLabs] Antw: Monitoring Op for LVM - Excessive Logging

2015-10-09 Thread Ken Gaillot
On 10/09/2015 08:06 AM, Ulrich Windl wrote:
> >>> Jorge Fábregas wrote on 09.10.2015 at 14:20
> in message <5617b10f.1060...@gmail.com>:
>> Hi,
>>
>> Is there a way to stop the excessive logging produced by the LVM monitor
>> operation?  I got it set at the default (30 seconds) here on SLES 11
>> SP4.  However, every time it runs the DC will write 174 lines to
>> /var/log/messages (all coming from LVM).  I'm referring to the LVM
>> primitive resource (the one that activates a VG).  I'm also using DLM/cLVM.
>>
>> I checked /etc/lvm/lvm.conf and the logging defaults are reasonable
>> (verbose value set at 0 which is the lowest).
> 
> Did you try daemon_options="-d0"? (in clvmd resource)

It's been a long while since I used clvmd, so they may have fixed this
since then, but there used to be a bug that clvmd would always start up
with debug logging, even if -d0 was set.

Luckily, dynamic disabling of debug mode worked and could be done
anytime after clvmd is started, using "clvmd -C -d0".

What I wound up doing was configuring swatch to monitor the logs and
run that command whenever it saw debug messages!
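
A minimal swatch stanza along those lines; the match pattern and the
throttle interval are assumptions to adapt, not taken from Ken's setup:

  # /etc/swatch.conf -- re-disable clvmd debug logging when it reappears
  watchfor /clvmd.*debug/
      exec /usr/sbin/clvmd -C -d0
      throttle 0:05:00   # run the command at most once per five minutes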




Re: [ClusterLabs] Antw: Monitoring Op for LVM - Excessive Logging

2015-10-09 Thread Andrei Borzenkov

On 09.10.2015 19:40, Jorge Fábregas wrote:

On 10/09/2015 09:06 AM, Ulrich Windl wrote:

Did you try daemon_options="-d0"? (in clvmd resource)


I've just found this:

http://pacemaker.oss.clusterlabs.narkive.com/C5BaFych/ocf-lvm2-clvmd-resource-agent

...so apparently SUSE changed the resource agent's default of "-d0" to
"-d2" (from SP2 to SP3).  This is still the case in SP4.

Can anyone from SUSE please explain why the "clvmd" resource agent
comes with such a noisy default out of the box?  I'm curious.



Personally, I find more logs better than fewer. It is not always 
possible to reproduce a problem, especially on a production system, so 
having as much information as possible available for analysis is a 
good thing.


Of course, as long as it does not get in the way ...



Re: [ClusterLabs] Antw: group resources not grouped ?!?

2015-10-09 Thread Dejan Muhamedagic
Hi,

On Fri, Oct 09, 2015 at 01:56:34PM +0200, zulucloud wrote:
> 
> 
> On 10/08/2015 10:37 AM, Dejan Muhamedagic wrote:
> >On Wed, Oct 07, 2015 at 05:13:40PM +0200, zulucloud wrote:
> 
> >>
> >>Well, they're quite verbose and a little bit cryptic... ;) I didn't
> >>find anything there that could enlighten me...
> >
> >If you're using crmsh, you can at least let history filter out
> >the stuff you don't want to look at. There's an introduction on
> >the feature here:
> >
> >http://crmsh.github.io/history-guide/
> >
> >Thanks,
> >
> >Dejan
> >
> 
> Hi Dejan,
> 
> that looks very good, thank you. I need to get the source and
> compile... Do you know if there are any usage restrictions if the
> rest of the cluster stack software is quite old (pacemaker 1.0.9.1,
> corosync 1.2.1-4)?

Hmm, v1.0.x. Where did you find such an old thing? :) Does it
come with crmsh (until v1.1.8, crmsh was part of pacemaker)? Even
so, I doubt that it has the history feature.  You could try to
build the v1.2.6 branch, but I'm not sure whether it's going to
work. I also recall that colleagues at NTT Japan were maintaining a
version for some older pacemaker versions, but I don't know where they
keep the packages.
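
Building that branch from source would look roughly like this; a sketch
only, since the repository URL and branch name are assumptions (see
http://crmsh.github.io/ for current pointers):

  git clone https://github.com/crmsh/crmsh.git
  cd crmsh
  git checkout 1.2.6            # the branch mentioned above, if it exists
  ./autogen.sh && ./configure   # crmsh used the usual autotools flow then
  make && sudo make install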

Thanks,

Dejan

> thx
> 


Re: [ClusterLabs] Monitoring Op for LVM - Excessive Logging

2015-10-09 Thread Dejan Muhamedagic
Hi,

On Fri, Oct 09, 2015 at 08:20:31AM -0400, Jorge Fábregas wrote:
> Hi,
> 
> Is there a way to stop the excessive logging produced by the LVM monitor
> operation?  I got it set at the default (30 seconds) here on SLES 11
> SP4.  However, every time it runs the DC will write 174 lines to
> /var/log/messages (all coming from LVM).  I'm referring to the LVM
> primitive resource (the one that activates a VG).  I'm also using DLM/cLVM.

Can you post some samples? If you want this fixed, best is to
open a support call with SUSE.

Thanks,

Dejan

> I checked /etc/lvm/lvm.conf and the logging defaults are reasonable
> (verbose value set at 0 which is the lowest).
> 
> Thanks!
> 
> -- 
> Jorge
> 


[ClusterLabs] Antw: Monitoring Op for LVM - Excessive Logging

2015-10-09 Thread Ulrich Windl
>>> Jorge Fábregas wrote on 09.10.2015 at 14:20 in
message <5617b10f.1060...@gmail.com>:
> Hi,
> 
> Is there a way to stop the excessive logging produced by the LVM monitor
> operation?  I got it set at the default (30 seconds) here on SLES 11
> SP4.  However, every time it runs the DC will write 174 lines to
> /var/log/messages (all coming from LVM).  I'm referring to the LVM
> primitive resource (the one that activates a VG).  I'm also using DLM/cLVM.
> 
> I checked /etc/lvm/lvm.conf and the logging defaults are reasonable
> (verbose value set at 0 which is the lowest).

Did you try daemon_options="-d0"? (in clvmd resource)

> 
> Thanks!
> 
> -- 
> Jorge
> 


Re: [ClusterLabs] Antw: group resources not grouped ?!?

2015-10-09 Thread zulucloud



On 10/09/2015 02:57 PM, Dejan Muhamedagic wrote:

Hi,

On Fri, Oct 09, 2015 at 01:56:34PM +0200, zulucloud wrote:



On 10/08/2015 10:37 AM, Dejan Muhamedagic wrote:

On Wed, Oct 07, 2015 at 05:13:40PM +0200, zulucloud wrote:




Well, they're quite verbose and a little bit cryptic... ;) I didn't
find anything there that could enlighten me...


If you're using crmsh, you can at least let history filter out
the stuff you don't want to look at. There's an introduction on
the feature here:

http://crmsh.github.io/history-guide/

Thanks,

Dejan



Hi Dejan,

that looks very good, thank you. I need to get the source and
compile... Do you know if there are any usage restrictions if the
rest of the cluster stack software is quite old (pacemaker 1.0.9.1,
corosync 1.2.1-4) ?


Hmm, v1.0.x. Where did you find such an old thing? :) Does it
come with crmsh (until v1.1.8, crmsh was part of pacemaker)? Even
so, I doubt that it has the history feature.  You could try to
build the v1.2.6 branch, but I'm not sure whether it's going to
work. I also recall that colleagues at NTT Japan were maintaining a
version for some older pacemaker versions, but I don't know where they
keep the packages.

Thanks,

Dejan



ok, *sigh* ;) I'll push my colleagues, we have to upgrade... ;)
thx



Re: [ClusterLabs] group resources not grouped ?!?

2015-10-09 Thread zulucloud



On 10/08/2015 12:57 PM, Jorge Fábregas wrote:

On 10/08/2015 06:04 AM, zulucloud wrote:

  are there any other ways?


Hi,

You might want to check external/vmware or external/vcenter.  I've never
used them, but apparently one is used to fence via the hypervisor (ESXi
itself) and the other through vCenter.



Hello,
thank you very much, this looks interesting... maybe another option. 
I'll give it a closer look...


brgds
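
For reference, a vCenter-based stonith resource would look roughly like
the sketch below; the parameter names should be checked against the
plugin's metadata (e.g. "stonith -t external/vcenter -n") before use:

  crm configure primitive st-vcenter stonith:external/vcenter \
      params VI_SERVER="vcenter.example.com" \
             VI_CREDSTORE="/root/.vmware/credstore/vicredentials.xml" \
             HOSTLIST="node1=vm-node1;node2=vm-node2"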
