Re: [ClusterLabs] iSCSITarget problems with just add an iqn

2020-04-08 Thread Stefan K
> Yeah, because there is no such logic in the scripts.
can this somehow be implemented?

> Also when using multipath - your clients should not realize that a restart
> has occurred.
that's true, but last time it didn't work, and I have absolutely no clue why
it failed last time.

I can't play with this parameter, because it's a production machine.
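
For what it's worth, the missing logic would essentially be a reconcile step
that diffs the configured initiators against the live ACLs instead of
recreating the target. A rough sketch only - the sync_acls name is
hypothetical, and the configfs path assumes the single-TPG layout the agent
normally creates:

    sync_acls() {
        # reconcile live node ACLs with the allowed_initiators parameter
        acl_dir="/sys/kernel/config/target/iscsi/${OCF_RESKEY_iqn}/tpgt_1/acls"
        for ini in ${OCF_RESKEY_allowed_initiators}; do
            [ -d "${acl_dir}/${ini}" ] ||
                targetcli "/iscsi/${OCF_RESKEY_iqn}/tpg1/acls" create "${ini}"
        done
    }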


On Monday, April 6, 2020 1:16:42 PM CEST Strahil Nikolov wrote:
> On April 6, 2020 11:53:22 AM GMT+03:00, Stefan K wrote:
> > > Note: When changing parameters the cluster will restart the
> > > resources, so keep that in mind.
> > and that's the problem: targetcli supports changes on the fly.. and
> > pacemaker restarts all resources instead of only the one that was changed
> >
> >
> > On Saturday, April 4, 2020 7:51:49 AM CEST Strahil Nikolov wrote:
> > > On April 3, 2020 1:09:15 PM GMT+03:00, Stefan K wrote:
> > > > Sorry, I meant I changed/added an iqn to the allowed_initiators
> > > >
> > > >
> > > > On Friday, April 3, 2020 10:52:06 AM CEST Stefan K wrote:
> > > > > Hello,
> > > > >
> > > > > ok first the versions:
> > > > > corosync: 2.4.2
> > > > > pacemaker: 1.1.16
> > > > > OS: Debian 9.4
> > > > >
> > > > > How I add an IQN:
> > > > > crm conf edit iscsi-server
> > > > > and then I add the iqn
> > > > >
> > > > >
> > > > > On Thursday, April 2, 2020 7:29:46 PM CEST Strahil Nikolov wrote:
> > > > > > On April 2, 2020 3:39:07 PM GMT+03:00, Stefan K wrote:
> > > > > > > Hello,
> > > > > > >
> > > > > > > yesterday I wanted to just add an iqn in my setup, it worked two
> > > > > > > times before, but yesterday it failed and I don't know why (I
> > > > > > > attached the log and config)..
> > > > > > >
> > > > > > > I use targetcli to configure iSCSI, and with targetcli it's
> > > > > > > possible to add an IQN on the fly.. pacemaker / the iSCSITarget
> > > > > > > resource doesn't use that, is it possible to change the script?
> > > > > > > Is it possible in pacemaker that I just get the changes and
> > > > > > > forward them to the iSCSITarget?
> > > > > > >
> > > > > > > And a last question, why does pacemaker stop/start all resources
> > > > > > > and not just the resource which was changed?
> > > > > > >
> > > > > > > thanks in advance
> > > > > > >
> > > > > > > .
> > > > > > > config:
> > > > > > > crm conf sh
> > > > > > > node 1: zfs-serv3 \
> > > > > > > attributes
> > > > > > > node 2: zfs-serv4 \
> > > > > > > attributes
> > > > > > > primitive ha-ip IPaddr2 \
> > > > > > > params ip=192.168.2.10 cidr_netmask=24 nic=bond0 \
> > > > > > > op start interval=0s timeout=20s \
> > > > > > > op stop interval=0s timeout=20s \
> > > > > > > op monitor interval=10s timeout=20s \
> > > > > > > meta target-role=Started
> > > > > > > primitive iscsi-lun00 iSCSILogicalUnit \
> > > > > > > params implementation=lio-t
> > > > > > > target_iqn="iqn.2003-01.org.linux-iscsi.vm-storage.x8664:sn.cf6fa665ec23"
> > > > > > > lun=0 lio_iblock=0 scsi_sn=8a12f029
> > > > > > > path="/dev/zvol/vm_storage/zfs-vol1" \
> > > > > > > meta
> > > > > > > primitive iscsi-lun01 iSCSILogicalUnit \
> > > > > > > params implementation=lio-t
> > > > > > > target_iqn="iqn.2003-01.org.linux-iscsi.vm-storage.x8664:sn.cf6fa665ec23"
> > > > > > > lun=1 lio_iblock=1 scsi_sn=f0e7a755
> > > > > > > path="/dev/zvol/vm_storage/zfs-vol2" \
> > > > > > > meta
> > > > > > > primitive iscsi-lun02 iSCSILogicalUnit \
> > > > > > > params implementation=lio-t
> > > > > > > target_iqn="iqn.2003-01.org.linux-iscsi.vm-storage.

Re: [ClusterLabs] iSCSITarget problems with just add an iqn

2020-04-06 Thread Stefan K
> Note: When changing parameters the cluster will restart the resources, so
> keep that in mind.
and that's the problem: targetcli supports changes on the fly.. and pacemaker
restarts all resources instead of only the one that was changed



On Saturday, April 4, 2020 7:51:49 AM CEST Strahil Nikolov wrote:
> On April 3, 2020 1:09:15 PM GMT+03:00, Stefan K wrote:
> > Sorry, I meant I changed/added an iqn to the allowed_initiators
> >
> >
> > On Friday, April 3, 2020 10:52:06 AM CEST Stefan K wrote:
> > > Hello,
> > >
> > > ok first the versions:
> > > corosync: 2.4.2
> > > pacemaker: 1.1.16
> > > OS: Debian 9.4
> > >
> > > How I add an IQN:
> > > crm conf edit iscsi-server
> > > and then I add the iqn
> > >
> > >
> > > On Thursday, April 2, 2020 7:29:46 PM CEST Strahil Nikolov wrote:
> > > > On April 2, 2020 3:39:07 PM GMT+03:00, Stefan K wrote:
> > > > > Hello,
> > > > >
> > > > > yesterday I wanted to just add an iqn in my setup, it worked two times
> > > > > before, but yesterday it failed and I don't know why (I attached the
> > > > > log and config)..
> > > > >
> > > > > I use targetcli to configure iSCSI, and with targetcli it's possible
> > > > > to add an IQN on the fly.. pacemaker / the iSCSITarget resource
> > > > > doesn't use that, is it possible to change the script? Is it possible
> > > > > in pacemaker that I just get the changes and forward them to the
> > > > > iSCSITarget?
> > > > >
> > > > > And a last question, why does pacemaker stop/start all resources and
> > > > > not just the resource which was changed?
> > > > >
> > > > > thanks in advance
> > > > >
> > > > > .
> > > > > config:
> > > > > crm conf sh
> > > > > node 1: zfs-serv3 \
> > > > > attributes
> > > > > node 2: zfs-serv4 \
> > > > > attributes
> > > > > primitive ha-ip IPaddr2 \
> > > > > params ip=192.168.2.10 cidr_netmask=24 nic=bond0 \
> > > > > op start interval=0s timeout=20s \
> > > > > op stop interval=0s timeout=20s \
> > > > > op monitor interval=10s timeout=20s \
> > > > > meta target-role=Started
> > > > > primitive iscsi-lun00 iSCSILogicalUnit \
> > > > > params implementation=lio-t
> > > > > target_iqn="iqn.2003-01.org.linux-iscsi.vm-storage.x8664:sn.cf6fa665ec23"
> > > > > lun=0 lio_iblock=0 scsi_sn=8a12f029
> > > > > path="/dev/zvol/vm_storage/zfs-vol1" \
> > > > > meta
> > > > > primitive iscsi-lun01 iSCSILogicalUnit \
> > > > > params implementation=lio-t
> > > > > target_iqn="iqn.2003-01.org.linux-iscsi.vm-storage.x8664:sn.cf6fa665ec23"
> > > > > lun=1 lio_iblock=1 scsi_sn=f0e7a755
> > > > > path="/dev/zvol/vm_storage/zfs-vol2" \
> > > > > meta
> > > > > primitive iscsi-lun02 iSCSILogicalUnit \
> > > > > params implementation=lio-t
> > > > > target_iqn="iqn.2003-01.org.linux-iscsi.vm-storage.x8664:sn.cf6fa665ec23"
> > > > > lun=2 lio_iblock=2 scsi_sn=6b45cc5f
> > > > > path="/dev/zvol/vm_storage/zfs-vol3" \
> > > > > meta
> > > > > primitive iscsi-server iSCSITarget \
> > > > > params implementation=lio-t
> > > > > iqn="iqn.2003-01.org.linux-iscsi.vm-storage.x8664:sn.cf6fa665ec23"
> > > > > portals="192.168.2.10:3260"
> > > > > allowed_initiators="iqn.1998-01.com.vmware:brainslug9-75488e35
> > > > > iqn.1998-01.com.vmware:brainslug8-05897f0c
> > > > > iqn.1998-01.com.vmware:brainslug7-592b0e73
> > > > > iqn.1998-01.com.vmware:brainslug10-5564c329
> > > > > iqn.1998-01.com.vmware:brainslug11-0214ef48
> > > > > iqn.1998-01.com.vmware:brainslug12-5d9f42e9"
> > > > > primitive resIPMI-zfs3 stonith:external/ipmi \
> > > > > params hostname=zfs-serv3 ipaddr=172.16.105.16 userid=stonith
> > > > > passwd=stonith_321 interface=lan priv=OPERATOR pcmk_delay_max=20 \
> > > > > op monitor interval=60s \
> > > > > meta
> > > > > primitive resIPMI-zfs4 stonith:external/ipmi \

Re: [ClusterLabs] iSCSITarget problems with just add an iqn

2020-04-03 Thread Stefan K
Sorry, I meant I changed/added an iqn to the allowed_initiators



On Friday, April 3, 2020 10:52:06 AM CEST Stefan K wrote:
> Hello,
>
> ok first the versions:
> corosync: 2.4.2
> pacemaker: 1.1.16
> OS: Debian 9.4
>
> How I add an IQN:
> crm conf edit iscsi-server
> and then I add the iqn
>
>
>
> On Thursday, April 2, 2020 7:29:46 PM CEST Strahil Nikolov wrote:
> > On April 2, 2020 3:39:07 PM GMT+03:00, Stefan K wrote:
> > >Hello,
> > >
> > >yesterday I wanted to just add an iqn in my setup, it worked two times
> > >before, but yesterday it failed and I don't know why (I attached the log
> > >and config)..
> > >
> > >I use targetcli to configure iSCSI, and with targetcli it's possible to add
> > >an IQN on the fly.. pacemaker / the iSCSITarget resource doesn't use that,
> > >is it possible to change the script? Is it possible in pacemaker that I
> > >just get the changes and forward them to the iSCSITarget?
> > >
> > >And a last question, why does pacemaker stop/start all resources and not
> > >just the resource which was changed?
> > >
> > >thanks in advance
> > >
> > >.
> > >config:
> > >crm conf sh
> > >node 1: zfs-serv3 \
> > >attributes
> > >node 2: zfs-serv4 \
> > >attributes
> > >primitive ha-ip IPaddr2 \
> > >params ip=192.168.2.10 cidr_netmask=24 nic=bond0 \
> > >op start interval=0s timeout=20s \
> > >op stop interval=0s timeout=20s \
> > >op monitor interval=10s timeout=20s \
> > >meta target-role=Started
> > >primitive iscsi-lun00 iSCSILogicalUnit \
> > >params implementation=lio-t
> > >target_iqn="iqn.2003-01.org.linux-iscsi.vm-storage.x8664:sn.cf6fa665ec23"
> > >lun=0 lio_iblock=0 scsi_sn=8a12f029
> > >path="/dev/zvol/vm_storage/zfs-vol1" \
> > >meta
> > >primitive iscsi-lun01 iSCSILogicalUnit \
> > >params implementation=lio-t
> > >target_iqn="iqn.2003-01.org.linux-iscsi.vm-storage.x8664:sn.cf6fa665ec23"
> > >lun=1 lio_iblock=1 scsi_sn=f0e7a755
> > >path="/dev/zvol/vm_storage/zfs-vol2" \
> > >meta
> > >primitive iscsi-lun02 iSCSILogicalUnit \
> > >params implementation=lio-t
> > >target_iqn="iqn.2003-01.org.linux-iscsi.vm-storage.x8664:sn.cf6fa665ec23"
> > >lun=2 lio_iblock=2 scsi_sn=6b45cc5f
> > >path="/dev/zvol/vm_storage/zfs-vol3" \
> > >meta
> > >primitive iscsi-server iSCSITarget \
> > >params implementation=lio-t
> > >iqn="iqn.2003-01.org.linux-iscsi.vm-storage.x8664:sn.cf6fa665ec23"
> > >portals="192.168.2.10:3260"
> > >allowed_initiators="iqn.1998-01.com.vmware:brainslug9-75488e35
> > >iqn.1998-01.com.vmware:brainslug8-05897f0c
> > >iqn.1998-01.com.vmware:brainslug7-592b0e73
> > >iqn.1998-01.com.vmware:brainslug10-5564c329
> > >iqn.1998-01.com.vmware:brainslug11-0214ef48
> > >iqn.1998-01.com.vmware:brainslug12-5d9f42e9"
> > >primitive resIPMI-zfs3 stonith:external/ipmi \
> > >params hostname=zfs-serv3 ipaddr=172.16.105.16 userid=stonith
> > >passwd=stonith_321 interface=lan priv=OPERATOR pcmk_delay_max=20 \
> > >op monitor interval=60s \
> > >meta
> > >primitive resIPMI-zfs4 stonith:external/ipmi \
> > >params hostname=zfs-serv4 ipaddr=172.16.105.17 userid=stonith
> > >passwd=stonith_321 interface=lan priv=OPERATOR pcmk_delay_max=20 \
> > >op monitor interval=60s \
> > >meta
> > >primitive vm_storage ZFS \
> > >params pool=vm_storage importargs="-d /dev/disk/by-vdev/" \
> > >op monitor interval=5s timeout=30s \
> > >op start interval=0s timeout=90 \
> > >op stop interval=0s timeout=90 \
> > >meta target-role=Started
> > >location location-resIPMI-zfs3-zfs-serv3--INFINITY resIPMI-zfs3 -inf:
> > >zfs-serv3
> > >location location-resIPMI-zfs4-zfs-serv4--INFINITY resIPMI-zfs4 -inf:
> > >zfs-serv4
> > >colocation pcs_rsc_colocation_set_ha-ip_vm_storage_iscsi-server inf:
> > >ha-ip vm_storage iscsi-server iscsi-lun00 iscsi-lun01 iscsi-lun02
> > >order pcs_rsc_order_set_iscsi-server_vm_storage_ha-ip iscsi-server
> > >iscsi-lun00 iscsi-lun01 iscsi-lun02 ha-ip
> > >property cib-bootstrap-options: \
> > >have-watchdog=false \
> > >dc-version=1.1.16-94ff4df \
> > >cluster-infrastructure=corosync \
> > >cluster-name=zfs-vmstorage \
> > >no-quorum-policy=stop \
> > >stonith-enabled=true \
> > >last-lrm-refresh=1585662940
> > >rsc_defaults rsc_defaults-options: \
> > >resource-stickiness=100
> >
> > Yes, you can add a new initiator live.
> >
> > How did you add it (what command and its options)?
> >
> > What is your OS (distro, version) and corosync/pacemaker versions?
> >
> >
> > Best Regards,
> > Strahil Nikolov
>
>
>
> ___
> Manage your subscription:
> https://lists.clusterlabs.org/mailman/listinfo/users
>
> ClusterLabs home: https://www.clusterlabs.org/
>



___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [ClusterLabs] iSCSITarget problems with just add an iqn

2020-04-03 Thread Stefan K
Hello,

ok first the versions:
corosync: 2.4.2
pacemaker: 1.1.16
OS: Debian 9.4

How I add an IQN:
crm conf edit iscsi-server
and then I add the iqn
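
For the archive: the edit itself just extends the allowed_initiators string in
the resource definition, roughly like this (a sketch; the brainslug13 IQN below
is made up for illustration):

    crm conf edit iscsi-server
    # in the editor, append the new initiator to the params line, e.g.:
    #   allowed_initiators="... iqn.1998-01.com.vmware:brainslug12-5d9f42e9
    #                       iqn.1998-01.com.vmware:brainslug13-0000ab12"
    # on save, pacemaker sees a changed resource definition and restarts it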



On Thursday, April 2, 2020 7:29:46 PM CEST Strahil Nikolov wrote:
> On April 2, 2020 3:39:07 PM GMT+03:00, Stefan K wrote:
> >Hello,
> >
> >yesterday I wanted to just add an iqn in my setup, it worked two times
> >before, but yesterday it failed and I don't know why (I attached the log
> >and config)..
> >
> >I use targetcli to configure iSCSI, and with targetcli it's possible to add
> >an IQN on the fly.. pacemaker / the iSCSITarget resource doesn't use that,
> >is it possible to change the script? Is it possible in pacemaker that I
> >just get the changes and forward them to the iSCSITarget?
> >
> >And a last question, why does pacemaker stop/start all resources and not
> >just the resource which was changed?
> >
> >thanks in advance
> >
> >.
> >config:
> >crm conf sh
> >node 1: zfs-serv3 \
> >attributes
> >node 2: zfs-serv4 \
> >attributes
> >primitive ha-ip IPaddr2 \
> >params ip=192.168.2.10 cidr_netmask=24 nic=bond0 \
> >op start interval=0s timeout=20s \
> >op stop interval=0s timeout=20s \
> >op monitor interval=10s timeout=20s \
> >meta target-role=Started
> >primitive iscsi-lun00 iSCSILogicalUnit \
> >params implementation=lio-t
> >target_iqn="iqn.2003-01.org.linux-iscsi.vm-storage.x8664:sn.cf6fa665ec23"
> >lun=0 lio_iblock=0 scsi_sn=8a12f029
> >path="/dev/zvol/vm_storage/zfs-vol1" \
> >meta
> >primitive iscsi-lun01 iSCSILogicalUnit \
> >params implementation=lio-t
> >target_iqn="iqn.2003-01.org.linux-iscsi.vm-storage.x8664:sn.cf6fa665ec23"
> >lun=1 lio_iblock=1 scsi_sn=f0e7a755
> >path="/dev/zvol/vm_storage/zfs-vol2" \
> >meta
> >primitive iscsi-lun02 iSCSILogicalUnit \
> >params implementation=lio-t
> >target_iqn="iqn.2003-01.org.linux-iscsi.vm-storage.x8664:sn.cf6fa665ec23"
> >lun=2 lio_iblock=2 scsi_sn=6b45cc5f
> >path="/dev/zvol/vm_storage/zfs-vol3" \
> >meta
> >primitive iscsi-server iSCSITarget \
> >params implementation=lio-t
> >iqn="iqn.2003-01.org.linux-iscsi.vm-storage.x8664:sn.cf6fa665ec23"
> >portals="192.168.2.10:3260"
> >allowed_initiators="iqn.1998-01.com.vmware:brainslug9-75488e35
> >iqn.1998-01.com.vmware:brainslug8-05897f0c
> >iqn.1998-01.com.vmware:brainslug7-592b0e73
> >iqn.1998-01.com.vmware:brainslug10-5564c329
> >iqn.1998-01.com.vmware:brainslug11-0214ef48
> >iqn.1998-01.com.vmware:brainslug12-5d9f42e9"
> >primitive resIPMI-zfs3 stonith:external/ipmi \
> >params hostname=zfs-serv3 ipaddr=172.16.105.16 userid=stonith
> >passwd=stonith_321 interface=lan priv=OPERATOR pcmk_delay_max=20 \
> >op monitor interval=60s \
> >meta
> >primitive resIPMI-zfs4 stonith:external/ipmi \
> >params hostname=zfs-serv4 ipaddr=172.16.105.17 userid=stonith
> >passwd=stonith_321 interface=lan priv=OPERATOR pcmk_delay_max=20 \
> >op monitor interval=60s \
> >meta
> >primitive vm_storage ZFS \
> >params pool=vm_storage importargs="-d /dev/disk/by-vdev/" \
> >op monitor interval=5s timeout=30s \
> >op start interval=0s timeout=90 \
> >op stop interval=0s timeout=90 \
> >meta target-role=Started
> >location location-resIPMI-zfs3-zfs-serv3--INFINITY resIPMI-zfs3 -inf:
> >zfs-serv3
> >location location-resIPMI-zfs4-zfs-serv4--INFINITY resIPMI-zfs4 -inf:
> >zfs-serv4
> >colocation pcs_rsc_colocation_set_ha-ip_vm_storage_iscsi-server inf:
> >ha-ip vm_storage iscsi-server iscsi-lun00 iscsi-lun01 iscsi-lun02
> >order pcs_rsc_order_set_iscsi-server_vm_storage_ha-ip iscsi-server
> >iscsi-lun00 iscsi-lun01 iscsi-lun02 ha-ip
> >property cib-bootstrap-options: \
> >have-watchdog=false \
> >dc-version=1.1.16-94ff4df \
> >cluster-infrastructure=corosync \
> >cluster-name=zfs-vmstorage \
> >no-quorum-policy=stop \
> >stonith-enabled=true \
> >last-lrm-refresh=1585662940
> >rsc_defaults rsc_defaults-options: \
> >resource-stickiness=100
>
> Yes, you can add a new initiator live.
>
> How did you add it (what command and its options)?
>
> What is your OS (distro, version) and corosync/pacemaker versions?
>
>
> Best Regards,
> Strahil Nikolov
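
(For completeness, the live targetcli equivalent would be roughly the
following - a sketch reusing the target IQN from the config above, with a
made-up initiator IQN:

    targetcli /iscsi/iqn.2003-01.org.linux-iscsi.vm-storage.x8664:sn.cf6fa665ec23/tpg1/acls \
        create iqn.1998-01.com.vmware:brainslug13-0000ab12

which creates the node ACL without disturbing existing sessions.)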



___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [ClusterLabs] serious problem with iSCSILogicalUnit

2019-12-18 Thread Stefan K
Sorry for this.
I tried to simplify my configuration and delete all unnecessary things, starting
with only one LUN and one allowed initiator, and then adding things back step by step..

now I ended up with the following setup:
node 1: ha-test1 \
attributes \
attributes standby=off maintenance=off
node 2: ha-test2 \
attributes \
attributes standby=off
primitive ha-ip IPaddr2 \
params ip=172.16.101.166 cidr_netmask=24 nic=ens192 \
op start interval=0s timeout=20s \
op stop interval=0s timeout=20s \
op monitor interval=10s timeout=20s \
meta target-role=Started
primitive iscsi-lun00 iSCSILogicalUnit \
params implementation=lio-t 
target_iqn="iqn.2003-01.org.linux-iscsi.vm-storage.x8664:sn.cf6fa66tgyh3" lun=0 
lio_iblock=0 path="/dev/loop1" \
meta
primitive iscsi-lun01 iSCSILogicalUnit \
params implementation=lio-t 
target_iqn="iqn.2003-01.org.linux-iscsi.vm-storage.x8664:sn.cf6fa66tgyh3" lun=1 
lio_iblock=1 path="/dev/loop2"
primitive iscsi-lun02 iSCSILogicalUnit \
params implementation=lio-t 
target_iqn="iqn.2003-01.org.linux-iscsi.vm-storage.x8664:sn.cf6fa66tgyh3" lun=2 
lio_iblock=2 path="/dev/loop3" \
meta
primitive iscsi-server iSCSITarget \
params implementation=lio-t 
iqn="iqn.2003-01.org.linux-iscsi.vm-storage.x8664:sn.cf6fa66tgyh3" 
portals="172.16.101.166:3260" 
allowed_initiators="iqn.1998-01.com.vmware:brainslug99-75u88a95 
iqn.1998-01.com.vmware:brainslug69-5564u4325 
iqn.1993-08.org.debian:01:fee35be01c4d 
iqn.1998-01.com.vmware:brainslug14-34a81jd123 
iqn.1998-01.com.vmware:brainslug66-778jau77" \
meta
colocation pcs_rsc_colocation_set_ha-ip_vm_storage_iscsi-server inf: ha-ip 
iscsi-server iscsi-lun00 iscsi-lun01 iscsi-lun02
order pcs_rsc_order_set_iscsi-server_vm_storage_ha-ip iscsi-server iscsi-lun00 
iscsi-lun01 iscsi-lun02 ha-ip
property cib-bootstrap-options: \
have-watchdog=false \
dc-version=1.1.16-94ff4df \
cluster-infrastructure=corosync \
cluster-name=ha-vmstorage \
no-quorum-policy=stop \
stonith-enabled=false \
last-lrm-refresh=1576502457
rsc_defaults rsc_defaults-options: \
resource-stickiness=100

Before that I had the following orders:
order pcs_rsc_order_set_ha-ip_iscsi-server_vm_storage ha-ip:stop 
iscsi-lun00:stop iscsi-lun01:stop iscsi-lun02:stop iscsi-server:stop 
symmetrical=false
order pcs_rsc_order_set_iscsi-server_vm_storage_ha-ip iscsi-server:start 
iscsi-lun00:start iscsi-lun01:start iscsi-lun02:start ha-ip:start 
symmetrical=false

and I thought that these and my currently working constraint mean the same thing,
but with the two-line configuration it doesn't work, while with the one-liner it
works fine - can somebody explain this to me?
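
(One detail worth noting when comparing the two: the stop set above lists
iscsi-lun00:stop before iscsi-lun01 and iscsi-lun02, while a strict mirror of
the start direction would stop in reverse order, as in the variant quoted in
the 2019-12-16 message below:

    order pcs_rsc_order_set_haip_iscsi-server ha-ip:stop iscsi-lun02:stop \
        iscsi-lun01:stop iscsi-lun00:stop iscsi-server:stop symmetrical=false
)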


On Monday, December 16, 2019 6:38:44 PM CET Andrei Borzenkov wrote:
> 16.12.2019 18:26, Stefan K wrote:
> > I think I got it..
> > 
> > It looks like (A)
> > order pcs_rsc_order_set_iscsi-server_haip iscsi-server:start 
> > iscsi-lun00:start iscsi-lun01:start iscsi-lun02:start ha-ip:start 
> > symmetrical=false
> 
> It is different from the configuration you showed originally.
> 
> > order pcs_rsc_order_set_haip_iscsi-server ha-ip:stop iscsi-lun02:stop 
> > iscsi-lun01:stop iscsi-lun00:stop iscsi-server:stop symmetrical=false
> > 
> > and (B)
> > order pcs_rsc_order_set_iscsi-server_vm_storage_ha-ip iscsi-server 
> > iscsi-lun00 iscsi-lun01 iscsi-lun02 ha-ip
> > 
> > don't have the same meaning?!
> > Because with A it doesn't work, but with B it works as expected - can 
> > somebody explain this behavior to me?
> > 
> 
> Your original configuration was not symmetrical which may explain it.
> You never said anything about changing configuration so it is unclear
> what you tested - original statement or statement you show now.
> 
> > best regards
> > Stefan
> > 
> > 
> > On Thursday, December 12, 2019 4:19:19 PM CET Stefan K wrote:
> >> So it looks like it restarts the iSCSITarget but not the 
> >> iSCSILogicalUnit. That makes sense - more or less - because I changed 
> >> something in the iSCSITarget, but restarting them too would be necessary 
> >> for a working iSCSI.. here is the log output from when I change/add the iqn..
> >>
> >> Dec 12 16:08:21 [7051] ha-test1cib: info: cib_process_request: 
> >>  Forwarding cib_apply_diff operation for section 'all' to all 
> >> (origin=local/cibadmin/2)
> >> Dec 12 16:08:21 [7051] ha-test1cib: info: cib_perform_op:   
> >> Diff: --- 0.58.3 2
> >> Dec 12 16:08:21 [7051] ha-test1cib: info: cib_perform_op:   
> >> Diff: +++ 0

Re: [ClusterLabs] serious problem with iSCSILogicalUnit

2019-12-16 Thread Stefan K
I think I got it..

It looks like (A)
order pcs_rsc_order_set_iscsi-server_haip iscsi-server:start iscsi-lun00:start 
iscsi-lun01:start iscsi-lun02:start ha-ip:start symmetrical=false
order pcs_rsc_order_set_haip_iscsi-server ha-ip:stop iscsi-lun02:stop 
iscsi-lun01:stop iscsi-lun00:stop iscsi-server:stop symmetrical=false

and (B)
order pcs_rsc_order_set_iscsi-server_vm_storage_ha-ip iscsi-server iscsi-lun00 
iscsi-lun01 iscsi-lun02 ha-ip

don't have the same meaning?!
Because with A it doesn't work, but with B it works as expected - can somebody
explain this behavior to me?

best regards
Stefan


On Thursday, December 12, 2019 4:19:19 PM CET Stefan K wrote:
> So it looks like it restarts the iSCSITarget but not the 
> iSCSILogicalUnit. That makes sense - more or less - because I changed something 
> in the iSCSITarget, but restarting them too would be necessary for a working 
> iSCSI.. here is the log output from when I change/add the iqn..
>
> Dec 12 16:08:21 [7051] ha-test1cib: info: cib_process_request:
>   Forwarding cib_apply_diff operation for section 'all' to all 
> (origin=local/cibadmin/2)
> Dec 12 16:08:21 [7051] ha-test1cib: info: cib_perform_op:   Diff: 
> --- 0.58.3 2
> Dec 12 16:08:21 [7051] ha-test1cib: info: cib_perform_op:   Diff: 
> +++ 0.59.0 c5423fcdc276ad43361aeb4c8081f7f4
> Dec 12 16:08:21 [7051] ha-test1cib: info: cib_perform_op:   +  
> /cib:  @epoch=59, @num_updates=0
> Dec 12 16:08:21 [7051] ha-test1cib: info: cib_perform_op:   +  
> /cib/configuration/resources/primitive[@id='iscsi-server']/instance_attributes[@id='iscsi-server-instance_attributes']/nvpair[@id='iscsi-server-instance_attributes-allowed_initiators']:
>   @value=iqn.1998-01.com.vmware:brainslug9-75488e35 
> iqn.1998-01.com.vmware:brainslug10-5564u4325 
> iqn.1993-08.org.debian:01:fee35be01c4d 
> iqn.1998-01.com.vmware:brainslug10-34ad648763 
> iqn.1998-01.com.vmware:brainslug66-75488e12 iqn.1998-01.com.vmware:brai
> Dec 12 16:08:21 [7051] ha-test1cib: info: cib_process_request:
>   Completed cib_apply_diff operation for section 'all': OK (rc=0, 
> origin=ha-test1/cibadmin/2, version=0.59.0)
> Dec 12 16:08:21 [7051] ha-test1cib: info: cib_file_backup:  
> Archived previous version as /var/lib/pacemaker/cib/cib-69.raw
> Dec 12 16:08:21 [7056] ha-test1   crmd: info: do_lrm_rsc_op:
> Performing key=12:116:0:9fd4e826-f0ba-4864-8861-2c585d644d1c 
> op=iscsi-server_stop_0
> Dec 12 16:08:21 [7053] ha-test1   lrmd: info: log_execute:  
> executing - rsc:iscsi-server action:stop call_id:62
> Dec 12 16:08:21 [7051] ha-test1cib: info: 
> cib_file_write_with_digest:   Wrote version 0.59.0 of the CIB to disk 
> (digest: dcbab759c4d0e7f38234434bfbe7ca8e)
> Dec 12 16:08:21 [7051] ha-test1cib: info: 
> cib_file_write_with_digest:   Reading cluster configuration file 
> /var/lib/pacemaker/cib/cib.6YgOXO (digest: /var/lib/pacemaker/cib/cib.gOKlWj)
> iSCSITarget(iscsi-server)[4524]:2019/12/12_16:08:22 INFO: Deleted 
> Target iqn.2003-01.org.linux-iscsi.vm-storage.x8664:sn.cf6fa66tgyh3.
> Dec 12 16:08:22 [7053] ha-test1   lrmd: info: log_finished: 
> finished - rsc:iscsi-server action:stop call_id:62 pid:4524 exit-code:0 
> exec-time:293ms queue-time:0ms
> Dec 12 16:08:22 [7056] ha-test1   crmd:   notice: process_lrm_event:  
>   Result of stop operation for iscsi-server on ha-test1: 0 (ok) | call=62 
> key=iscsi-server_stop_0 confirmed=true cib-update=58
> Dec 12 16:08:22 [7051] ha-test1cib: info: cib_process_request:
>   Forwarding cib_modify operation for section status to all 
> (origin=local/crmd/58)
> Dec 12 16:08:22 [7051] ha-test1cib: info: cib_perform_op:   Diff: 
> --- 0.59.0 2
> Dec 12 16:08:22 [7051] ha-test1cib: info: cib_perform_op:   Diff: 
> +++ 0.59.1 (null)
> Dec 12 16:08:22 [7051] ha-test1cib: info: cib_perform_op:   +  
> /cib:  @num_updates=1
> Dec 12 16:08:22 [7051] ha-test1cib: info: cib_perform_op:   +  
> /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='iscsi-server']/lrm_rsc_op[@id='iscsi-server_last_0']:
>   @operation_key=iscsi-server_stop_0, @operation=stop, 
> @transition-key=12:116:0:9fd4e826-f0ba-4864-8861-2c585d644d1c, 
> @transition-magic=0:0;12:116:0:9fd4e826-f0ba-4864-8861-2c585d644d1c, 
> @call-id=62, @last-run=1576163301, @last-rc-change=1576163301, @exec-time=293
> Dec 12 16:08:22 [7051] ha-test1cib: info: cib_process_request:
>   Completed cib_modify operation for section status: OK (rc=0, 
> origin=ha-test1/crmd/58, version=0.59.1)
> Dec 12 16:08:

Re: [ClusterLabs] serious problem with iSCSILogicalUnit

2019-12-12 Thread Stefan K
/12/12_16:08:23 INFO: Created Node 
ACL for iqn.1993-08.org.debian:01:fee35be01c4d
iSCSITarget(iscsi-server)[4564]:2019/12/12_16:08:23 INFO: Created Node 
ACL for iqn.1998-01.com.vmware:brainslug10-34ad648763
iSCSITarget(iscsi-server)[4564]:2019/12/12_16:08:23 INFO: Created Node 
ACL for iqn.1998-01.com.vmware:brainslug66-75488e12
iSCSITarget(iscsi-server)[4564]:2019/12/12_16:08:23 INFO: Created Node 
ACL for iqn.1998-01.com.vmware:brainslug99-5564u4123
iSCSITarget(iscsi-server)[4564]:2019/12/12_16:08:24 INFO: Parameter 
authentication is now '0'.
Dec 12 16:08:24 [7053] ha-test1   lrmd: info: log_finished: 
finished - rsc:iscsi-server action:start call_id:63 pid:4564 exit-code:0 
exec-time:1781ms queue-time:0ms
Dec 12 16:08:24 [7056] ha-test1   crmd: info: action_synced_wait:   
Managed iSCSITarget_meta-data_0 process 4695 exited with rc=0
Dec 12 16:08:24 [7051] ha-test1cib: info: cib_process_request:  
Forwarding cib_modify operation for section status to all (origin=local/crmd/59)
Dec 12 16:08:24 [7056] ha-test1   crmd:   notice: process_lrm_event:
Result of start operation for iscsi-server on ha-test1: 0 (ok) | call=63 
key=iscsi-server_start_0 confirmed=true cib-update=59
Dec 12 16:08:24 [7051] ha-test1cib: info: cib_perform_op:   Diff: 
--- 0.59.1 2
Dec 12 16:08:24 [7051] ha-test1cib: info: cib_perform_op:   Diff: 
+++ 0.59.2 (null)
Dec 12 16:08:24 [7051] ha-test1cib: info: cib_perform_op:   +  
/cib:  @num_updates=2
Dec 12 16:08:24 [7051] ha-test1cib: info: cib_perform_op:   +  
/cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='iscsi-server']/lrm_rsc_op[@id='iscsi-server_last_0']:
  @operation_key=iscsi-server_start_0, @operation=start, 
@transition-key=3:116:0:9fd4e826-f0ba-4864-8861-2c585d644d1c, 
@transition-magic=0:0;3:116:0:9fd4e826-f0ba-4864-8861-2c585d644d1c, 
@call-id=63, @last-run=1576163302, @last-rc-change=1576163302, @exec-time=1781, 
@op-digest=549b6bf42c2d944da4df2c1d2d675b
Dec 12 16:08:24 [7051] ha-test1cib: info: cib_process_request:  
Completed cib_modify operation for section status: OK (rc=0, 
origin=ha-test1/crmd/59, version=0.59.2)
Dec 12 16:08:29 [7051] ha-test1cib: info: cib_process_ping: 
Reporting our current digest to ha-test2: 8bf64451f91add76d89a608b6f51a214 for 
0.59.2 (0x55998d145df0 0)




On Wednesday, December 11, 2019 3:58:51 PM CET Stefan K wrote:
> Hello,
>
> I have a working HA setup with iSCSI and ZFS, but last week I added an iSCSI 
> allowed initiator, and then it happened - my whole VMware infrastructure failed 
> because iSCSI was not working anymore.. today I have time to take a closer look 
> into this..
>
> I created 2 VMs and put the same (more or less) config into them.
> What I do:
> - I create an iSCSI target with allowed initiators
> - I create iSCSI logical units
>
> but I got this:
>
> targetcli
> targetcli shell version 2.1.fb43
> Copyright 2011-2013 by Datera, Inc and others.
> For help on commands, type 'help'.
>
> /> ls
> o- / ... [...]
>   o- backstores ... [...]
>   | o- block ... [Storage Objects: 3]
>   | | o- iscsi-lun00 ... [/dev/loop1 (1.0GiB) write-thru deactivated]
>   | | o- iscsi-lun01 ... [/dev/loop2 (1.0GiB) write-thru deactivated]
>   | | o- iscsi-lun02 ... [/dev/loop3 (0 bytes) write-thru deactivated]
>   | o- fileio ... [Storage Objects: 0]
>   | o- pscsi ... [Storage Objects: 0]
>   | o- ramdisk ... [Storage Objects: 0]
>   o- iscsi ... [Targets: 1]
>   | o- iqn.2003-01.org.linux-iscsi.vm-storage.x8664:sn.cf6fa66tgyh3 ... [TPGs: 1]
>   |   o- tpg1

[ClusterLabs] serious problem with iSCSILogicalUnit

2019-12-11 Thread Stefan K
Hello,

I have a working HA setup with iSCSI and ZFS, but last week I added an iSCSI allowed 
initiator, and then it happened - my whole VMware infrastructure failed because 
iSCSI was not working anymore.. today I have time to take a closer look into 
this..

I created 2 VMs and put the same (more or less) config into them.
What I do:
- I create an iSCSI target with allowed initiators
- I create iSCSI logical units

but I got this:

targetcli
targetcli shell version 2.1.fb43
Copyright 2011-2013 by Datera, Inc and others.
For help on commands, type 'help'.

/> ls
o- / ... [...]
  o- backstores ... [...]
  | o- block ... [Storage Objects: 3]
  | | o- iscsi-lun00 ... [/dev/loop1 (1.0GiB) write-thru deactivated]
  | | o- iscsi-lun01 ... [/dev/loop2 (1.0GiB) write-thru deactivated]
  | | o- iscsi-lun02 ... [/dev/loop3 (0 bytes) write-thru deactivated]
  | o- fileio ... [Storage Objects: 0]
  | o- pscsi ... [Storage Objects: 0]
  | o- ramdisk ... [Storage Objects: 0]
  o- iscsi ... [Targets: 1]
  | o- iqn.2003-01.org.linux-iscsi.vm-storage.x8664:sn.cf6fa66tgyh3 ... [TPGs: 1]
  |   o- tpg1 ... [no-gen-acls, no-auth]
  |     o- acls ... [ACLs: 4]
  |     | o- iqn.1993-08.org.debian:01:fee35be01c4d ... [Mapped LUNs: 0]
  |     | o- iqn.1998-01.com.vmware:brainslug10-34ad648763 ... [Mapped LUNs: 0]
  |     | o- iqn.1998-01.com.vmware:brainslug10-5564u4325 ... [Mapped LUNs: 0]
  |     | o- iqn.1998-01.com.vmware:brainslug9-75488e35 ... [Mapped LUNs: 0]
  |     o- luns ... [LUNs: 0]
  |     o- portals ... [Portals: 1]
  |       o- 172.16.101.166:3260 ... [OK]
  o- loopback ... [Targets: 0]
  o- vhost ... [Targets: 0]


Here you can see that the LUNs are missing. When I move the resource to the other 
node, the LUNs are shown again; if I then add/remove/change an 
"allowed_initiators" entry it happens again - all LUNs are gone. (That fits the 
log excerpts elsewhere in this thread: the parameter change deletes and recreates 
the iSCSITarget, while the iSCSILogicalUnit resources that own the LUN mappings 
are not restarted.) And that is a very serious problem for us.

So my question is: did I misconfigure something, or is this a bug? My pacemaker 
config looks like the following:

crm conf sh
node 1: ha-test1 \
attributes \
attributes standby=off maintenance=off
node 2: ha-test2 \
attributes \
attributes standby=off
primitive ha-ip IPaddr2 \
params ip=172.16.101.166 cidr_netmask=24 nic=ens192 \
op start interval=0s timeout=20s \
op stop interval=0s timeout=20s \
op monitor interval=10s timeout=20s \
meta target-role=Started
primitive iscsi-lun00 iSCSILogicalUnit \
params implementation=lio-t 
target_iqn="iqn.2003-01.org.linux-iscsi.vm-storage.x8664:sn.cf6fa66tgyh3" lun=0 
lio_iblock=0 path="/dev/loop1" \
op start interval=0 trace_ra=1 \
op stop interval=0 trace_ra=1 \
meta target-role=Started
primitive iscsi-lun01 iSCSILogicalUnit \
params implementation=lio-t 
target_iqn="iqn.2003-01.org.linux-iscsi.vm-storage.x8664:sn.cf6fa66tgyh3" lun=1 
lio_iblock=1 path="/dev/loop2" \
meta
primitive iscsi-lun02 iSCSILogicalUnit \
params implementation=lio-t 
target_iqn="iqn.2003-01.org.linux-iscsi.vm-storage.x8664:sn.cf6fa66tgyh3" lun=2 
lio_iblock=2 path="/de

[ClusterLabs] ressources in an unmanaged status

2018-11-09 Thread Stefan K
Hello,

I have the following setup:

crm conf sh
node 1: zfs-serv3 \
attributes
node 2: zfs-serv4 \
attributes maintenance=on
primitive ha-ip IPaddr2 \
params ip=192.168.2.10 cidr_netmask=24 nic=bond0 \
op start interval=0s timeout=20s \
op stop interval=0s timeout=20s \
op monitor interval=10s timeout=20s \
meta target-role=Started
primitive iscsi-lun00 iSCSILogicalUnit \
params implementation=lio-t 
target_iqn="iqn.2003-01.org.linux-iscsi.vm-storage.x8664:sn.cf6fa665ec23" lun=0 
lio_iblock=0 path="/dev/zvol/vm_storage/zfs-vol1"
primitive iscsi-lun01 iSCSILogicalUnit \
params implementation=lio-t 
target_iqn="iqn.2003-01.org.linux-iscsi.vm-storage.x8664:sn.cf6fa665ec23" lun=1 
lio_iblock=1 path="/dev/zvol/vm_storage/zfs-vol2"
primitive iscsi-lun02 iSCSILogicalUnit \
params implementation=lio-t 
target_iqn="iqn.2003-01.org.linux-iscsi.vm-storage.x8664:sn.cf6fa665ec23" lun=2 
lio_iblock=2 path="/dev/zvol/vm_storage/zfs-vol3"
primitive iscsi-server iSCSITarget \
params implementation=lio-t 
iqn="iqn.2003-01.org.linux-iscsi.vm-storage.x8664:sn.cf6fa665ec23" 
portals="192.168.2.10:3260" 
allowed_initiators="iqn.1998-01.com.vmware:brainslug9-75488000 
iqn.1998-01.com.vmware:brainslug8-05897000 
iqn.1998-01.com.vmware:brainslug7-592b 
iqn.1998-01.com.vmware:brainslug10-5564c000" \
meta
primitive resIPMI-zfs3 stonith:external/ipmi \
params hostname=zfs-serv3 ipaddr=172.xx.xx.xx userid=user passwd=pw 
interface=lan priv=OPERATOR pcmk_delay_max=20 \
op monitor interval=60s \
meta
primitive resIPMI-zfs4 stonith:external/ipmi \
params hostname=zfs-serv4 ipaddr=172.xx.xx.xx userid=user passwd=pw 
interface=lan priv=OPERATOR pcmk_delay_max=20 \
op monitor interval=60s \
meta
primitive vm_storage ZFS \
params pool=vm_storage importargs="-d /dev/disk/by-vdev/" \
op monitor interval=5s timeout=30s \
op start interval=0s timeout=90 \
op stop interval=0s timeout=90 \
meta target-role=Started
location location-resIPMI-zfs3-zfs-serv3--INFINITY resIPMI-zfs3 -inf: zfs-serv3
location location-resIPMI-zfs4-zfs-serv4--INFINITY resIPMI-zfs4 -inf: zfs-serv4
colocation pcs_rsc_colocation_set_ha-ip_vm_storage_iscsi-server inf: ha-ip 
vm_storage iscsi-server iscsi-lun00 iscsi-lun01 iscsi-lun02
order pcs_rsc_order_set_ha-ip_iscsi-server_vm_storage ha-ip:stop 
iscsi-lun00:stop iscsi-lun01:stop iscsi-lun02:stop iscsi-server:stop 
vm_storage:stop symmetrical=false
order pcs_rsc_order_set_iscsi-server_vm_storage_ha-ip vm_storage:start 
iscsi-server:start iscsi-lun00:start iscsi-lun01:start iscsi-lun02:start 
ha-ip:start symmetrical=false
property cib-bootstrap-options: \
have-watchdog=false \
dc-version=1.1.16-94ff4df \
cluster-infrastructure=corosync \
cluster-name=zfs-vmstorage \
no-quorum-policy=stop \
stonith-enabled=true \
last-lrm-refresh=1541768433
rsc_defaults rsc_defaults-options: \
resource-stickiness=100


If I put a node into maintenance, the resources become unmanaged; if I shut down 
a node, the resources migrate correctly. Can somebody please tell me what is 
wrong here? Here are the logs from a node which I set to maintenance:

Nov 09 14:23:08 [30104] zfs-serv4   crmd: info: crm_timer_popped:   
PEngine Recheck Timer (I_PE_CALC) just popped (90ms) 
Nov 09 14:23:36 [30103] zfs-serv4pengine:   notice: process_pe_message: 
Calculated transition 60, saving inputs in 
/var/lib/pacemaker/pengine/pe-input-336.bz2 
at the same time on the other node:
Nov 09 14:08:53 [9630] zfs-serv3cib: info: cib_process_ping:
Reporting our current digest to zfs-serv4: 81c440e08f9ab611967de64ba7b6ce46 for 
0.270.0 (0x556faf4e9220 0) 

thanks for help!
best regards
Stefan
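
(One observation on the config above: node 2 carries the attribute
maintenance=on, and unmanaged is exactly what per-node maintenance mode is
supposed to produce - resources are left running but untouched. Standby is the
mode that migrates resources away. With crmsh that would be roughly:

    crm node standby zfs-serv4   # move resources off the node
    crm node online zfs-serv4    # bring it back into service
)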
___
Users mailing list: Users@clusterlabs.org
https://lists.clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


[ClusterLabs] Error in iSCSILogicalUnit.in

2018-10-25 Thread Stefan K
Hello,

I think there is an error in [1]: at the end there must be a "|| $OCF_ERR_GENERIC", 
otherwise it leads to an error if you have multiple LUNs, because the SCSI serial 
number will be changed[2]. Question: is it possible to check whether lio-t is used 
if we have the parameter 'lio_iblock'? Or is it possible to count this somehow and 
always add lio_iblock+1?

For me it looks like every LUN you create gets lio_iblock+1 - can somebody 
confirm this?
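
(A rough sketch of the counting idea, assuming the usual LIO configfs layout in
which each block backstore gets its own iblock_<n> HBA directory:

    # next free iblock index = number of existing iblock HBAs in configfs
    next_iblock=$(ls -d /sys/kernel/config/target/core/iblock_* 2>/dev/null | wc -l)
)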

best regards 
Stefan


[1] 
https://github.com/ClusterLabs/resource-agents/blob/master/heartbeat/iSCSILogicalUnit.in#L410
[2] https://github.com/ClusterLabs/resource-agents/issues/1256
___
Users mailing list: Users@clusterlabs.org
https://lists.clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


Re: [ClusterLabs] LIO iSCSI target fails to start

2018-10-12 Thread Stefan K
It looks like a bug; I created a pull request[1], and now it works fine for me.

[1]https://github.com/ClusterLabs/resource-agents/pull/1239



On Thursday, October 11, 2018 11:08:47 PM CEST Valentin Vidic wrote:
> On Wed, Oct 10, 2018 at 02:36:21PM +0200, Stefan K wrote:
> > I think my config is correct, but it still fails with "This Target
> > already exists in configFS" but "targetcli ls" shows nothing.
> 
> It seems to find something in /sys/kernel/config/target.  Maybe it
> was setup outside of pacemaker somehow?
> 
> 
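
(For anyone hitting the same message: a quick way to see what the agent found
is to look at configfs directly - a sketch; leftover target directories here
can trigger "already exists in configFS" even when "targetcli ls" prints an
empty tree:

    ls /sys/kernel/config/target/iscsi/
)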

___
Users mailing list: Users@clusterlabs.org
https://lists.clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


[ClusterLabs] LIO iSCSI target fails to start

2018-10-10 Thread Stefan K
Hello,

I think my config is correct, but it still fails with "This Target already 
exists in configFS" while "targetcli ls" shows nothing.

Can somebody please tell me what I'm doing wrong?
dpkg -l |egrep 'pacemaker|corosync|target-cli'  


node 1: zfs-serv3 \
attributes
node 2: zfs-serv4 \
attributes maintenance=on standby=on
primitive ha-ip IPaddr2 \
params ip=192.168.1.10 cidr_netmask=24 nic=bond0 \
op start interval=0s timeout=20s \
op stop interval=0s timeout=20s \
op monitor interval=10s timeout=20s \
meta target-role=Started
primitive iscsi-lun0 iSCSILogicalUnit \
params implementation=lio-t 
target_iqn="iqn.2003-01.org.linux-iscsi.vm-storage.x8664:sn.1decabc0bb80" lun=0 
path="/dev/zvol/vm_storage/zfs-vol1"
primitive iscsi-lun1 iSCSILogicalUnit \
params implementation=lio-t 
target_iqn="iqn.2003-01.org.linux-iscsi.vm-storage.x8664:sn.1decabc0bb80" lun=1 
path="/dev/zvol/vm_storage/zfs-vol2"
primitive iscsi-server iSCSITarget \
params implementation=lio-t 
iqn="iqn.2003-01.org.linux-iscsi.vm-storage.x8664:sn.1decabc0bb80." 
allowed_initiators="iqn.1998-01.com.vmware:brainslug9-abcde 
iqn.1998-01.com.vmware:brainslug8-efgh iqn.1998-01.com.vmware:brainslug7-ertyu 
iqn.1998-01.com.vmware:brainslug10-xcvbnmkj"
primitive resIPMI-zfs3 stonith:external/ipmi \
params hostname=zfs-serv3 ipaddr=172.xx.xx.xx userid= passwd= 
interface=lan priv=OPERATOR pcmk_delay_max=20 \
op monitor interval=60s \
meta
primitive resIPMI-zfs4 stonith:external/ipmi \
params hostname=zfs-serv4 ipaddr=172.xx.xx.xx userid= passwd= 
interface=lan priv=OPERATOR pcmk_delay_max=20 \
op monitor interval=60s \
meta
primitive vm_storage ZFS \
params pool=vm_storage importargs="-d /dev/disk/by-vdev/" \
op monitor interval=5s timeout=30s \
op start interval=0s timeout=90 \
op stop interval=0s timeout=90 \
meta target-role=Started
location cli-prefer-iscsi-server iscsi-server role=Started inf: zfs-serv3
location location-resIPMI-zfs3-zfs-serv3--INFINITY resIPMI-zfs3 -inf: zfs-serv3
location location-resIPMI-zfs4-zfs-serv4--INFINITY resIPMI-zfs4 -inf: zfs-serv4
colocation pcs_rsc_colocation_set_ha-ip_vm_storage_iscsi-server inf: ha-ip 
vm_storage iscsi-server
order pcs_rsc_order_set_ha-ip_iscsi-server_vm_storage ha-ip:stop 
iscsi-server:stop vm_storage:stop symmetrical=false
order pcs_rsc_order_set_iscsi-server_vm_storage_ha-ip vm_storage:start 
iscsi-server ha-ip:start symmetrical=false
property cib-bootstrap-options: \
have-watchdog=false \
dc-version=1.1.16-94ff4df \
cluster-infrastructure=corosync \
cluster-name=zfs-vmstorage \
no-quorum-policy=stop \
stonith-enabled=true \
last-lrm-refresh=1539171551
rsc_defaults rsc_defaults-options: \
resource-stickiness=100

___
Users mailing list: Users@clusterlabs.org
https://lists.clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


[ClusterLabs] 2node cluster question

2018-08-15 Thread Stefan K
Hello,

what is the 'best' 2-node cluster config?
What I want: if everything runs on nodeA and nodeA goes into standby or shuts down, 
everything must start on nodeB; if nodeA comes back, everything must keep running 
on nodeB.

pacemaker looks like:
have-watchdog=false \
dc-version=1.1.16-94ff4df \
cluster-infrastructure=corosync \
cluster-name=zfs-vmstorage \
no-quorum-policy=stop \
stonith-enabled=true \
last-lrm-refresh=1528814481
rsc_defaults rsc_defaults-options: \
resource-stickiness=100
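
(The resource-stickiness=100 default above is what keeps everything on nodeB
once nodeA returns, as long as no location constraint scores higher than the
stickiness; with crmsh it is set roughly like this:

    crm configure rsc_defaults resource-stickiness=100
)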

and the corosync.config:
totem {
version: 2
secauth: off
cluster_name: zfs-vmstorage
transport: udpu
rrp_mode: passive
}

nodelist {
node {
ring0_addr: zfs-serv3
ring1_addr: 192.168.251.1
nodeid: 1
}

node {
ring0_addr: zfs-serv4
ring1_addr: 192.168.251.2
nodeid: 2
}
}

quorum {
provider: corosync_votequorum
two_node: 1
}
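
(Side note: with corosync_votequorum, two_node: 1 implicitly enables
wait_for_all, so after a cold start both nodes must see each other once before
the cluster becomes quorate - roughly equivalent to:

quorum {
provider: corosync_votequorum
two_node: 1
wait_for_all: 1
}
)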

logging {
to_logfile: yes
logfile: /var/log/corosync/corosync.log
to_syslog: yes
}

thanks in advance
and best regards
Stefan
___
Users mailing list: Users@clusterlabs.org
https://lists.clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


[ClusterLabs] cronjobs only on active node

2018-07-10 Thread Stefan K
Hello,

is it somehow possible to have a cronjob active only on the active node?
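
(One common pattern: install the crontab on both nodes and guard the job
itself - a sketch, assuming a cluster resource named ha-ip as in the other
threads and a hypothetical /usr/local/bin/real-job.sh:

    #!/bin/sh
    # run the real job only on the node where pacemaker currently runs ha-ip
    crm_resource --resource ha-ip --locate 2>/dev/null | grep -q "$(uname -n)" \
        && exec /usr/local/bin/real-job.sh

Another option is to manage the cron file itself as a cluster resource, e.g.
with ocf:heartbeat:symlink.)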

best regards
Stefan
___
Users mailing list: Users@clusterlabs.org
https://lists.clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


Re: [ClusterLabs] difference between external/ipmi and fence_ipmilan

2018-06-27 Thread Stefan K
Hi Kristoffer,

ok I see, but why maintain both? Let a coin decide which one, but I think it's 
not very helpful to maintain both.

best regards
Stefan

> Sent: Wednesday, 27 June 2018 at 10:55
> From: "Kristoffer Grönlund" 
> To: "Stefan K" , users@clusterlabs.org
> Subject: Re: [ClusterLabs] difference between external/ipmi and fence_ipmilan
>
> "Stefan K"  writes:
> 
> > OK I see, but it would be good if somebody marked one of these as deprecated 
> > and then deleted it, so that no one gets confused about them.
> >
> 
> The external/* agents are not deprecated, though. Future agents will be
> implemented in the fence-agents framework, but the existing agents are
> still being used (not by RH, but by SUSE at least).
> 
> Cheers,
> Kristoffer
> 
> > best regards
> > Stefan
> >
> >> Sent: Tuesday, 26 June 2018 at 18:26
> >> From: "Ken Gaillot" 
> >> To: "Cluster Labs - All topics related to open-source clustering welcomed" 
> >> 
> >> Subject: Re: [ClusterLabs] difference between external/ipmi and 
> >> fence_ipmilan
> >>
> >> On Tue, 2018-06-26 at 12:00 +0200, Stefan K wrote:
> >> > Hello,
> >> > 
> >> > can somebody tell me the difference between external/ipmi and
> >> > fence_ipmilan? Are there preferences?
> >> > Is one of these more common, or does one have advantages? 
> >> > 
> >> > Thanks in advance!
> >> > best regards
> >> > Stefan
> >> 
> >> The distinction is mostly historical. At one time, there were two
> >> different open-source clustering environments, each with its own set of
> >> fence agents. The community eventually settled on Pacemaker as a sort
> >> of merged evolution of the earlier environments, and so it supports
> >> both styles of fence agents. Thus, you often see an "external/*" agent
> >> and a "fence_*" agent available for the same physical device.
> >> 
> >> However, they are completely different implementations, so there may be
> >> substantive differences as well. I'm not familiar enough with these two
> >> to address that, maybe someone else can.
> >> -- 
> >> Ken Gaillot 
> >> ___
> >> Users mailing list: Users@clusterlabs.org
> >> https://lists.clusterlabs.org/mailman/listinfo/users
> >> 
> >> Project Home: http://www.clusterlabs.org
> >> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> >> Bugs: http://bugs.clusterlabs.org
> >>
> > ___
> > Users mailing list: Users@clusterlabs.org
> > https://lists.clusterlabs.org/mailman/listinfo/users
> >
> > Project Home: http://www.clusterlabs.org
> > Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> > Bugs: http://bugs.clusterlabs.org
> 
> -- 
> // Kristoffer Grönlund
> // kgronl...@suse.com
>
___
Users mailing list: Users@clusterlabs.org
https://lists.clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


Re: [ClusterLabs] difference between external/ipmi and fence_ipmilan

2018-06-27 Thread Stefan K
OK I see, but it would be good if somebody marked one of these as deprecated and 
then deleted it, so that no one gets confused about them.

best regards
Stefan

> Sent: Tuesday, 26 June 2018 at 18:26
> From: "Ken Gaillot" 
> To: "Cluster Labs - All topics related to open-source clustering welcomed" 
> 
> Subject: Re: [ClusterLabs] difference between external/ipmi and fence_ipmilan
>
> On Tue, 2018-06-26 at 12:00 +0200, Stefan K wrote:
> > Hello,
> > 
> > can somebody tell me the difference between external/ipmi and
> > fence_ipmilan? Are there preferences?
> > Is one of these more common, or does one have advantages? 
> > 
> > Thanks in advance!
> > best regards
> > Stefan
> 
> The distinction is mostly historical. At one time, there were two
> different open-source clustering environments, each with its own set of
> fence agents. The community eventually settled on Pacemaker as a sort
> of merged evolution of the earlier environments, and so it supports
> both styles of fence agents. Thus, you often see an "external/*" agent
> and a "fence_*" agent available for the same physical device.
> 
> However, they are completely different implementations, so there may be
> substantive differences as well. I'm not familiar enough with these two
> to address that, maybe someone else can.
> -- 
> Ken Gaillot 
> ___
> Users mailing list: Users@clusterlabs.org
> https://lists.clusterlabs.org/mailman/listinfo/users
> 
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: http://bugs.clusterlabs.org
>
___
Users mailing list: Users@clusterlabs.org
https://lists.clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


[ClusterLabs] difference between external/ipmi and fence_ipmilan

2018-06-26 Thread Stefan K
Hello,

can somebody tell me the difference between external/ipmi and fence_ipmilan? 
Are there preferences?
Is one of these more common, or does one have advantages? 

Thanks in advance!
best regards
Stefan
___
Users mailing list: Users@clusterlabs.org
https://lists.clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org