Re: [Users] ovirt / 2 iscsi storage domains / same LUN IDs

2013-03-06 Thread Alex Leonhardt
For those still interested, the timeout issue doesn't occur on the
multipath side, nor in ovirt, but on the iscsid config side of things -

To shorten the timeout and fail a path faster, edit

/etc/iscsi/iscsid.conf


Change the value

node.session.timeo.replacement_timeout = 120

to something more useful, like I changed it to 10.


Reloading iscsid's config alone does nothing; you'll have to restart the host
for the new timeout to take effect.
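
For reference, a minimal sketch of the change (the 10-second value is just
what I settled on here; pick whatever suits your failover needs):

    # /etc/iscsi/iscsid.conf
    # fail a dead path after 10s instead of the default 120s
    node.session.timeo.replacement_timeout = 10

    # then restart the host - reloading iscsid's config alone doesn't pick this up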

Alex



On 4 March 2013 17:11, Alex Leonhardt alex.t...@gmail.com wrote:

 Ok, so I finally got this working - anyone know how to change the timeout for
 multipathd from, say, 120 seconds to ~10 seconds?

 == /var/log/messages ==
 Mar  4 17:09:12 TESTHV01 kernel: session5: session recovery timed out
 after 120 secs
 Mar  4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: rejecting I/O to offline
 device
 Mar  4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: [sdd] killing request
 Mar  4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: rejecting I/O to offline
 device
 Mar  4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: [sdd] killing request
 Mar  4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: rejecting I/O to offline
 device
 Mar  4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: [sdd] killing request
 Mar  4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: rejecting I/O to offline
 device
 Mar  4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: [sdd] killing request
 Mar  4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: rejecting I/O to offline
 device
 Mar  4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: [sdd] killing request
 Mar  4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: rejecting I/O to offline
 device
 Mar  4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: [sdd] killing request
 Mar  4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: rejecting I/O to offline
 device
 Mar  4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: [sdd] Unhandled error code
 Mar  4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: [sdd] Result:
 hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
 Mar  4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: [sdd] CDB: Read(10): 28 00
 00 04 08 00 00 00 08 00
 Mar  4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: [sdd] killing request
 Mar  4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: rejecting I/O to offline
 device
 Mar  4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: [sdd] Unhandled error code
 Mar  4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: [sdd] Result:
 hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
 Mar  4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: [sdd] CDB: Read(10): 28 00
 00 00 00 00 00 00 08 00


 Alex




 On 4 March 2013 15:35, Alex Leonhardt alex.t...@gmail.com wrote:

 Hi,

 I just tested this with this config :


 <target iqn.2013-02.local.vm:iscsi.lun1>
     <backing-store /vol/scsi.img>
         vendor_id ISCSI-MULTIPATH
         scsi_id MULTIPATHTEST
         scsi_sn 9911
         lun 1
     </backing-store>
 </target>


 However, upon discovery / login, the LUN ID was again :

 1IET_11


 Alex




 On 3 March 2013 18:34, Ayal Baron aba...@redhat.com wrote:



 - Original Message -
 
 
 
 
 
  Hi there,
 
  I was doing some testing around ovirt and iscsi and found an issue
  where, when you use dd to create backing-stores for iscsi and
  you point ovirt to it to discover & login, it thinks the LUN ID is
  the same although the target is different and adds additional paths
  to the config (automagically?), bringing down the iSCSI storage
  domain.

 There is no question about the behaviour, it's not a bug, that is the
 way multipathing works (has nothing to do with oVirt).  The GUID of a LUN
 has to be unique.  multipathd seeing the same LUN ID across multiple
 targets assumes that it's the same LUN with multiple paths and that's how
 you get redundancy and load balancing.
 Why tgtd doesn't take care of this built in I could never grok, but what
 you need to do is edit your targets.conf and add the scsi_id and scsi_sn
 fields.

 Example:
 <target MasterBackup>
     allow-in-use yes
     <backing-store /dev/vg0/MasterBackup>
         lun 1
         scsi_id MasterBackup
         scsi_sn 4401
     </backing-store>
 </target>

 
   See attached screenshot of what I got when trying to add a new iscsi san
   storage domain to ovirt. The Storage Domain is now down and I
   cannot get rid of the config (???) - how do I force it to log out of
   the targets ??
 
 
  Also, anyone know how to deal with the duplicate LUN ID issue ?
 
 
  Thanks
  Alex
 
 
 
 
 
  --
 
 
 
  | RHCE | Senior Systems Engineer | www.vcore.co |
  | www.vsearchcloud.com |
 
  ___
  Users mailing list
  Users@ovirt.org
  http://lists.ovirt.org/mailman/listinfo/users
 




 --

 | RHCE | Senior Systems Engineer | www.vcore.co | www.vsearchcloud.com |




 --

 | RHCE | Senior Systems Engineer | www.vcore.co | www.vsearchcloud.com |




-- 
| RHCE | Senior Systems Engineer | www.vcore.co | www.vsearchcloud.com |
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] ovirt / 2 iscsi storage domains / same LUN IDs

2013-03-04 Thread Alex Leonhardt
Hi,

I just tested this with this config :


<target iqn.2013-02.local.vm:iscsi.lun1>
    <backing-store /vol/scsi.img>
        vendor_id ISCSI-MULTIPATH
        scsi_id MULTIPATHTEST
        scsi_sn 9911
        lun 1
    </backing-store>
</target>


However, upon discovery / login, the LUN ID was again :

1IET_11
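
A quick way to check what tgtd is actually exporting after editing
targets.conf (a sketch, assuming the tgt-admin/tgtadm tools from
scsi-target-utils; a target that is already in use may need its initiators
logged out, or tgtd restarted, before updated LUN parameters show up):

    tgt-admin --update ALL
    tgtadm --lld iscsi --mode target --op show    # check the SCSI ID / SCSI SN lines per LUN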


Alex




On 3 March 2013 18:34, Ayal Baron aba...@redhat.com wrote:



 - Original Message -
 
 
 
 
 
  Hi there,
 
  I was doing some testing around ovirt and iscsi and found an issue
  where, when you use dd to create backing-stores for iscsi and
  you point ovirt to it to discover & login, it thinks the LUN ID is
  the same although the target is different and adds additional paths
  to the config (automagically?), bringing down the iSCSI storage
  domain.

 There is no question about the behaviour, it's not a bug, that is the way
 multipathing works (has nothing to do with oVirt).  The GUID of a LUN has
 to be unique.  multipathd seeing the same LUN ID across multiple targets
 assumes that it's the same LUN with multiple paths and that's how you get
 redundancy and load balancing.
 Why tgtd doesn't take care of this built in I could never grok, but what
 you need to do is edit your targets.conf and add the scsi_id and scsi_sn
 fields.

 Example:
 <target MasterBackup>
     allow-in-use yes
     <backing-store /dev/vg0/MasterBackup>
         lun 1
         scsi_id MasterBackup
         scsi_sn 4401
     </backing-store>
 </target>

 
   See attached screenshot of what I got when trying to add a new iscsi san
   storage domain to ovirt. The Storage Domain is now down and I
   cannot get rid of the config (???) - how do I force it to log out of
   the targets ??
 
 
  Also, anyone know how to deal with the duplicate LUN ID issue ?
 
 
  Thanks
  Alex
 
 
 
 
 
  --
 
 
 
  | RHCE | Senior Systems Engineer | www.vcore.co |
  | www.vsearchcloud.com |
 
  ___
  Users mailing list
  Users@ovirt.org
  http://lists.ovirt.org/mailman/listinfo/users
 




-- 

| RHCE | Senior Systems Engineer | www.vcore.co | www.vsearchcloud.com |
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] ovirt / 2 iscsi storage domains / same LUN IDs

2013-03-04 Thread Alex Leonhardt
Ok, so I finally got this working - anyone know how to change the timeout for
multipathd from, say, 120 seconds to ~10 seconds?

== /var/log/messages ==
Mar  4 17:09:12 TESTHV01 kernel: session5: session recovery timed out after
120 secs
Mar  4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: rejecting I/O to offline
device
Mar  4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: [sdd] killing request
Mar  4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: rejecting I/O to offline
device
Mar  4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: [sdd] killing request
Mar  4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: rejecting I/O to offline
device
Mar  4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: [sdd] killing request
Mar  4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: rejecting I/O to offline
device
Mar  4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: [sdd] killing request
Mar  4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: rejecting I/O to offline
device
Mar  4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: [sdd] killing request
Mar  4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: rejecting I/O to offline
device
Mar  4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: [sdd] killing request
Mar  4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: rejecting I/O to offline
device
Mar  4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: [sdd] Unhandled error code
Mar  4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: [sdd] Result:
hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
Mar  4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: [sdd] CDB: Read(10): 28 00 00
04 08 00 00 00 08 00
Mar  4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: [sdd] killing request
Mar  4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: rejecting I/O to offline
device
Mar  4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: [sdd] Unhandled error code
Mar  4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: [sdd] Result:
hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
Mar  4 17:09:12 TESTHV01 kernel: sd 7:0:0:99: [sdd] CDB: Read(10): 28 00 00
00 00 00 00 00 08 00


Alex




On 4 March 2013 15:35, Alex Leonhardt alex.t...@gmail.com wrote:

 Hi,

 I just tested this with this config :


 <target iqn.2013-02.local.vm:iscsi.lun1>
     <backing-store /vol/scsi.img>
         vendor_id ISCSI-MULTIPATH
         scsi_id MULTIPATHTEST
         scsi_sn 9911
         lun 1
     </backing-store>
 </target>


 However, upon discovery / login, the LUN ID was again :

 1IET_11


 Alex




 On 3 March 2013 18:34, Ayal Baron aba...@redhat.com wrote:



 - Original Message -
 
 
 
 
 
  Hi there,
 
  I was doing some testing around ovirt and iscsi and found an issue
  where, when you use dd to create backing-stores for iscsi and
  you point ovirt to it to discover & login, it thinks the LUN ID is
  the same although the target is different and adds additional paths
  to the config (automagically?), bringing down the iSCSI storage
  domain.

 There is no question about the behaviour, it's not a bug, that is the way
 multipathing works (has nothing to do with oVirt).  The GUID of a LUN has
 to be unique.  multipathd seeing the same LUN ID across multiple targets
 assumes that it's the same LUN with multiple paths and that's how you get
 redundancy and load balancing.
 Why tgtd doesn't take care of this built in I could never grok, but what
 you need to do is edit your targets.conf and add the scsi_id and scsi_sn
 fields.

 Example:
 <target MasterBackup>
     allow-in-use yes
     <backing-store /dev/vg0/MasterBackup>
         lun 1
         scsi_id MasterBackup
         scsi_sn 4401
     </backing-store>
 </target>

 
   See attached screenshot of what I got when trying to add a new iscsi san
   storage domain to ovirt. The Storage Domain is now down and I
   cannot get rid of the config (???) - how do I force it to log out of
   the targets ??
 
 
  Also, anyone know how to deal with the duplicate LUN ID issue ?
 
 
  Thanks
  Alex
 
 
 
 
 
  --
 
 
 
  | RHCE | Senior Systems Engineer | www.vcore.co |
  | www.vsearchcloud.com |
 
  ___
  Users mailing list
  Users@ovirt.org
  http://lists.ovirt.org/mailman/listinfo/users
 




 --

 | RHCE | Senior Systems Engineer | www.vcore.co | www.vsearchcloud.com |




-- 

| RHCE | Senior Systems Engineer | www.vcore.co | www.vsearchcloud.com |
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] ovirt / 2 iscsi storage domains / same LUN IDs

2013-03-03 Thread Ayal Baron


- Original Message -
 
 
 
 
 
 Hi there,
 
 I was doing some testing around ovirt and iscsi and found an issue
 where, when you use dd to create backing-stores for iscsi and
 you point ovirt to it to discover & login, it thinks the LUN ID is
 the same although the target is different and adds additional paths
 to the config (automagically?), bringing down the iSCSI storage
 domain.

There is no question about the behaviour, it's not a bug, that is the way 
multipathing works (has nothing to do with oVirt).  The GUID of a LUN has to be 
unique.  multipathd seeing the same LUN ID across multiple targets assumes that 
it's the same LUN with multiple paths and that's how you get redundancy and 
load balancing.
Why tgtd doesn't take care of this built in I could never grok, but what you 
need to do is edit your targets.conf and add the scsi_id and scsi_sn fields.

Example:
<target MasterBackup>
    allow-in-use yes
    <backing-store /dev/vg0/MasterBackup>
        lun 1
        scsi_id MasterBackup
        scsi_sn 4401
    </backing-store>
</target>
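
A quick way to confirm the initiator then sees distinct IDs after re-logging
in (a sketch; the device name below is only an example, and on EL6 scsi_id
lives under /lib/udev):

    /lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/sdd
    multipath -ll    # each target should now get its own mpath device instead of piling up as extra paths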

 
 See attached screenshot of what I got when trying to add a new iscsi san
 storage domain to ovirt. The Storage Domain is now down and I
 cannot get rid of the config (???) - how do I force it to log out of
 the targets ??
 
 
 Also, anyone know how to deal with the duplicate LUN ID issue ?
 
 
 Thanks
 Alex
 
 
 
 
 
 --
 
 
 
 | RHCE | Senior Systems Engineer | www.vcore.co |
 | www.vsearchcloud.com |
 
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] ovirt / 2 iscsi storage domains / same LUN IDs

2013-03-03 Thread Alex Leonhardt

Hi Ayal,

Thanks for that - I was thinking the same after I did some more testing
last week - e.g. I shared an image file from a different iscsi target and
from the same iscsi target; the LUN on the same target changed, while the
one on the other target again came out the same - so I figured it had
something to do with tgtd.


Thanks for below, I was looking into those options to try next week :) ...

Thanks
Alex


On 03/03/2013 06:34 PM, Ayal Baron wrote:


- Original Message -





Hi there,

I was doing some testing around ovirt and iscsi and found an issue
where, when you use dd to create backing-stores for iscsi and
you point ovirt to it to discover & login, it thinks the LUN ID is
the same although the target is different and adds additional paths
to the config (automagically?), bringing down the iSCSI storage
domain.

There is no question about the behaviour, it's not a bug, that is the way 
multipathing works (has nothing to do with oVirt).  The GUID of a LUN has to be 
unique.  multipathd seeing the same LUN ID across multiple targets assumes that 
it's the same LUN with multiple paths and that's how you get redundancy and 
load balancing.
Why tgtd doesn't take care of this built in I could never grok, but what you 
need to do is edit your targets.conf and add the scsi_id and scsi_sn fields.

Example:
<target MasterBackup>
    allow-in-use yes
    <backing-store /dev/vg0/MasterBackup>
        lun 1
        scsi_id MasterBackup
        scsi_sn 4401
    </backing-store>
</target>


See attached screenshot of what I got when trying to add a new iscsi san
storage domain to ovirt. The Storage Domain is now down and I
cannot get rid of the config (???) - how do I force it to log out of
the targets ??


Also, anyone know how to deal with the duplicate LUN ID issue ?


Thanks
Alex





--



| RHCE | Senior Systems Engineer | www.vcore.co |
| www.vsearchcloud.com |

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] ovirt / 2 iscsi storage domains / same LUN IDs

2013-03-01 Thread Alex Leonhardt
ok, so it looks like ovirt is creating PVs, VGs, and LVs associated with an
iSCSI disk ... now, when I try to add the other storage with the same LUN
ID, this is how it looks:

[root@TESTHV01 ~]# pvs
  /dev/mapper/1IET_00010001: lseek 73266102272 failed: Invalid argument
  /dev/mapper/1IET_00010001: lseek 73266102272 failed: Invalid argument
  WARNING: Volume Group 7d0f78ff-aa25-4b64-a2ea-c8a65beda616 is not
consistent
  PV                        VG                                   Fmt  Attr PSize   PFree
  /dev/mapper/1IET_00010001 7d0f78ff-aa25-4b64-a2ea-c8a65beda616 lvm2 a--   68.00g 54.12g
  /dev/sda2                 vg_root                              lvm2 a--   78.12g      0
  /dev/sda3                 vg_vol                               lvm2 a--  759.26g      0

[root@TESTHV01 ~]# vgs
  /dev/mapper/1IET_00010001: lseek 73266102272 failed: Invalid argument
  /dev/mapper/1IET_00010001: lseek 73266102272 failed: Invalid argument
  /dev/mapper/1IET_00010001: lseek 73266102272 failed: Invalid argument
  /dev/mapper/1IET_00010001: lseek 73266102272 failed: Invalid argument
  /dev/mapper/1IET_00010001: lseek 73266102272 failed: Invalid argument
  WARNING: Inconsistent metadata found for VG
7d0f78ff-aa25-4b64-a2ea-c8a65beda616 - updating to use version 20
  /dev/mapper/1IET_00010001: lseek 73266102272 failed: Invalid argument
  Automatic metadata correction failed
  Recovery of volume group 7d0f78ff-aa25-4b64-a2ea-c8a65beda616 failed.
  /dev/mapper/1IET_00010001: lseek 73266102272 failed: Invalid argument
  VG      #PV #LV #SN Attr   VSize   VFree
  vg_root   1   2   0 wz--n-  78.12g     0
  vg_vol    1   1   0 wz--n- 759.26g     0

[root@TESTHV01 ~]# lvs
  /dev/mapper/1IET_00010001: lseek 73266102272 failed: Invalid argument
  /dev/mapper/1IET_00010001: lseek 73266102272 failed: Invalid argument
  /dev/mapper/1IET_00010001: lseek 73266102272 failed: Invalid argument
  /dev/mapper/1IET_00010001: lseek 73266102272 failed: Invalid argument
  /dev/mapper/1IET_00010001: lseek 73266102272 failed: Invalid argument
  WARNING: Inconsistent metadata found for VG
7d0f78ff-aa25-4b64-a2ea-c8a65beda616 - updating to use version 20
  /dev/mapper/1IET_00010001: lseek 73266102272 failed: Invalid argument
  Automatic metadata correction failed
  Recovery of volume group 7d0f78ff-aa25-4b64-a2ea-c8a65beda616 failed.
  Skipping volume group 7d0f78ff-aa25-4b64-a2ea-c8a65beda616
  /dev/mapper/1IET_00010001: lseek 73266102272 failed: Invalid argument
  LV      VG      Attr     LSize   Pool Origin Data%  Move Log Copy%  Convert
  lv_root vg_root -wi-ao--  68.36g
  lv_swap vg_root -wi-ao--   9.77g
  lv_vol  vg_vol  -wi-ao-- 759.26g

VS

[root@TESTHV01 ~]# pvs
  PV                        VG                                   Fmt  Attr PSize   PFree
  /dev/mapper/1IET_00010001 7d0f78ff-aa25-4b64-a2ea-c8a65beda616 lvm2 a--   68.00g 54.12g
  /dev/sda2                 vg_root                              lvm2 a--   78.12g      0
  /dev/sda3                 vg_vol                               lvm2 a--  759.26g      0
[root@TESTHV01 ~]# vgs
  VG                                    #PV #LV #SN Attr   VSize   VFree
  7d0f78ff-aa25-4b64-a2ea-c8a65beda616    1   7   0 wz--n-  68.00g 54.12g
  vg_root                                 1   2   0 wz--n-  78.12g      0
  vg_vol                                  1   1   0 wz--n- 759.26g      0
[root@TESTHV01 ~]# lvs
  LV                                   VG                                   Attr     LSize   Pool Origin Data%  Move Log Copy%  Convert
  fe6f8584-b6da-4ef0-8879-bf23022827d7 7d0f78ff-aa25-4b64-a2ea-c8a65beda616 -wi-      10.00g
  ids                                  7d0f78ff-aa25-4b64-a2ea-c8a65beda616 -wi-a--- 128.00m
  inbox                                7d0f78ff-aa25-4b64-a2ea-c8a65beda616 -wi-a--- 128.00m
  leases                               7d0f78ff-aa25-4b64-a2ea-c8a65beda616 -wi-a---   2.00g
  master                               7d0f78ff-aa25-4b64-a2ea-c8a65beda616 -wi-ao--   1.00g
  metadata                             7d0f78ff-aa25-4b64-a2ea-c8a65beda616 -wi-a--- 512.00m
  outbox                               7d0f78ff-aa25-4b64-a2ea-c8a65beda616 -wi-a--- 128.00m
  lv_root                              vg_root                              -wi-ao--  68.36g
  lv_swap                              vg_root                              -wi-ao--   9.77g
  lv_vol                               vg_vol                               -wi-ao-- 759.26g
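
For anyone poking at this directly, the storage-domain VG (its name is the
domain UUID) can be inspected with plain LVM tools - a sketch using the UUID
from the output above:

    lvs -o lv_name,lv_size,lv_attr 7d0f78ff-aa25-4b64-a2ea-c8a65beda616

The lseek ... Invalid argument errors further up are presumably LVM I/O being
sent down the extra path that multipathd merged in under the same LUN ID,
i.e. the duplicate-LUN-ID problem discussed elsewhere in this thread.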


The oVirt node (HV) seems to use an LV created within the storage domain's VG
as the disk for the VM. And this is how it looks - various LVs are created
for the management of the storage domain (I guess):

Disk /dev/sdb: 73.4 GB, 7340032 bytes
255 heads, 63 sectors/track, 8923 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x


Disk /dev/mapper/1IET_00010001: 73.4 GB, 7340032 bytes
255 heads, 63 sectors/track, 8923 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 

Re: [Users] ovirt / 2 iscsi storage domains / same LUN IDs

2013-03-01 Thread Alex Leonhardt
The problem persists despite my trying to manually change the LUN ID on the
target (scsi-target-utils installed on CentOS 6.3) .. does anyone know how
to make ovirt (or vdsm?) identify LUNs based on iqn + lun id? I'd think it'd
resolve the issue I'm having.

Anyone ?

Thanks,
Alex


On 28 February 2013 16:30, Alex Leonhardt alex.t...@gmail.com wrote:

 FWIW, restarting the HV resolves the issue and brings the storage domain
 back up; but I don't know whether that is because that's where the target
 (and iscsi initiator) runs, or whether vdsmd then clears its cache / routes
 to a/the target(s) ?

 Alex



 On 28 February 2013 16:25, Alex Leonhardt alex.t...@gmail.com wrote:

 another screenshot of how confused it can get

 alex


 On 28 February 2013 15:36, Alex Leonhardt alex.t...@gmail.com wrote:

 Hi there,

  I was doing some testing around ovirt and iscsi and found an issue where,
  when you use dd to create backing-stores for iscsi and you point
  ovirt to it to discover & login, it thinks the LUN ID is the same although
  the target is different and adds additional paths to the config
  (automagically?), bringing down the iSCSI storage domain.

  See attached screenshot of what I got when trying to add a new iscsi san
  storage domain to ovirt. The Storage Domain is now down and I cannot get
  rid of the config (???) - how do I force it to log out of the targets ??

 Also, anyone know how to deal with the duplicate LUN ID issue ?

 Thanks
 Alex

 --

 | RHCE | Senior Systems Engineer | www.vcore.co | www.vsearchcloud.com|




 --

 | RHCE | Senior Systems Engineer | www.vcore.co | www.vsearchcloud.com |




 --

 | RHCE | Senior Systems Engineer | www.vcore.co | www.vsearchcloud.com |




-- 

| RHCE | Senior Systems Engineer | www.vcore.co | www.vsearchcloud.com |
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users