[ClusterLabs] (no subject)

2017-07-19 Thread ArekW
Hi, how do I properly unset a value with pcs? Setting it to false or to an empty value still leaves the option in place and produces an error:

# pcs stonith update vbox-fencing verbose=false --force
or
# pcs stonith update vbox-fencing verbose= --force

Jul 20 07:14:11 nfsnode1 stonith-ng[11097]: warning: fence_vbox[3092] stderr: [ WARNING:root:Parse error: Ignoring option 'verbose' because it does not have value ]

To suppress the message I have to delete the resource and recreate it
without the unwanted variable. I'm not sure whether this affects other
variables as well or just this one.



Re: [ClusterLabs] Antw: Re: DRBD or SAN ?

2017-07-19 Thread Dmitri Maziuk

On 7/19/2017 1:29 AM, Ulrich Windl wrote:

> Maybe it's like with the cluster: Once you have set it up correctly,
> it runs quite well, but the way to get there may be painful. I quit my
> experiments with dual-primary DRBD in some early SLES11 (SP1), because
> it fenced a lot and refused to come up automatically after fencing. That
> may have been a configuration problem, but with the docs at hand at that
> time, I preferred to quit and try something else.

I'm losing the point of this thread very fast, it seems. How is a drbd 
cluster that exports images on a floating ip *not* a nas? Why would you 
need active-active storage if you can't run the same vm on two hosts 
at once?


Anyway, +1: if I were building a vm infrastructure, I'd take a closer 
look at ceph myself. And openstack: it was a PITA when I played with it 
a couple of years ago but it sounds like a dual-primary drbd on top of 
lvm with corosync on the side's gonna be at least as bad.


Dima




[ClusterLabs] Antw: Re: DRBD or SAN ?

2017-07-19 Thread Ulrich Windl
>>> Digimer  wrote on 18.07.2017 at 18:15 in message
<06a7a06f-15f4-3194-00a8-ca043dbc7...@alteeve.ca>:

[...]
> 
> As for replication errors, well, you're judging something without using
> it, I have to conclude. In all our years using DRBD, we have never had a
> data corruption issue or any other problem induced by DRBD. We sure have
> been saved by it on several occasions.
[...]

Maybe it's like with the cluster: Once you have set it up correctly, it runs 
quite well, but the way to get there may be painful. I quit my experiments with 
dual-primary DRBD in some early SLES11 (SP1), because it fenced a lot and 
refused to come up automatically after fencing. That may have been a 
configuration problem, but with the docs at hand at that time, I preferred to 
quit and try something else.
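
For reference, a minimal sketch (DRBD 8.4-style syntax, illustrative only) of
the settings that usually govern fencing and automatic recovery in a
Pacemaker-managed dual-primary setup; the resource name r0 is a placeholder,
and the handler scripts are the ones shipped with drbd-utils:

resource r0 {
  net {
    allow-two-primaries yes;           # dual-primary operation
  }
  disk {
    fencing resource-and-stonith;      # freeze I/O and fence the peer when the replication link is lost
  }
  handlers {
    # stock Pacemaker handlers: add a location constraint while the peer is
    # suspect, remove it again after resync so the node can come back up
    fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
    after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
  }
}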

Regards,
Ulrich





[ClusterLabs] GFS2 Errors

2017-07-19 Thread Kristián Feldsam
Hello, I am seeing GFS2 errors in the log today and could find nothing about 
them on the net, so I am writing to this mailing list.

node2   19.07.2017 01:11:55 kernel  kernerr vmscan: shrink_slab: gfs2_glock_shrink_scan+0x0/0x2f0 [gfs2] negative objects to delete nr=-4549568322848002755
node2   19.07.2017 01:10:56 kernel  kernerr vmscan: shrink_slab: gfs2_glock_shrink_scan+0x0/0x2f0 [gfs2] negative objects to delete nr=-8191295421473926116
node2   19.07.2017 01:10:48 kernel  kernerr vmscan: shrink_slab: gfs2_glock_shrink_scan+0x0/0x2f0 [gfs2] negative objects to delete nr=-8225402411152149004
node2   19.07.2017 01:10:47 kernel  kernerr vmscan: shrink_slab: gfs2_glock_shrink_scan+0x0/0x2f0 [gfs2] negative objects to delete nr=-8230186816585019317
node2   19.07.2017 01:10:45 kernel  kernerr vmscan: shrink_slab: gfs2_glock_shrink_scan+0x0/0x2f0 [gfs2] negative objects to delete nr=-8242007238441787628
node2   19.07.2017 01:10:39 kernel  kernerr vmscan: shrink_slab: gfs2_glock_shrink_scan+0x0/0x2f0 [gfs2] negative objects to delete nr=-8250926852732428536
node3   19.07.2017 00:16:02 kernel  kernerr vmscan: shrink_slab: gfs2_glock_shrink_scan+0x0/0x2f0 [gfs2] negative objects to delete nr=-5150933278940354602
node3   19.07.2017 00:16:02 kernel  kernerr vmscan: shrink_slab: gfs2_glock_shrink_scan+0x0/0x2f0 [gfs2] negative objects to delete nr=-64
node3   19.07.2017 00:16:02 kernel  kernerr vmscan: shrink_slab: gfs2_glock_shrink_scan+0x0/0x2f0 [gfs2] negative objects to delete nr=-64
Could somebody explain these errors? The cluster appears to be working normally. 
I have enabled vm.zone_reclaim_mode = 1 on the nodes...
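
As an aside, and only assuming (not established here) that the zone_reclaim
setting is related to these warnings at all, checking and reverting it at
runtime is plain sysctl usage:

# sysctl vm.zone_reclaim_mode          # show the current value
# sysctl -w vm.zone_reclaim_mode=0     # revert to the kernel default until reboot

To make the change permanent, the corresponding line in /etc/sysctl.conf (or a
file under /etc/sysctl.d/) would also need to be removed or set to 0.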

Thank you!

Best regards, Kristián Feldsam


___
Users mailing list: Users@clusterlabs.org
http://lists.clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org