Re: umounting an iSCSI device with no connection

2008-06-02 Thread Tomasz Chmielewski

An Oneironaut wrote:
 Hey all,
 
   So I'm having some problems using umount on my iSCSI device.  I
 have my timeout period set to a really long time and the no-op timers
 have been switched off.  If my system loses its connection to the
 iSCSI device and I try to umount it, the command hangs.  I've tried
 umount with the -l flag and the -f flag, but neither has worked.
 And when I do this I am not able to kill the process, even with kill
 -9.  I've tried killing all the processes associated with the mount
 before umounting it, but nothing seems to work.
 Even when I try to reboot, my system will hang, requiring a hard
 reset.  I've tried 'reboot -f' and 'reboot -n', but neither works.  Has
 anyone run into this before, or any ideas how I can achieve either
 a umount or a successful reboot?

Well, if you have a really long timeout and your connection is lost, 
umount and everything else will, not surprisingly, time out after the 
period you specified.

Your options are:

- restore the connection
- if possible, decrease the timeout just before the connection is lost
- if the connection is lost already, you may also try to decrease the 
timeout, though I'm not sure that will work
- don't shut your network interfaces down when rebooting or halting the 
system (this is distro-specific; for Debian, look into /etc/init.d/halt and 
NETDOWN / $netdown; others may use the $HALTARGS variable in 
/etc/init.d/halt, etc.)


An example of setting the timeout to 120 seconds:

iscsiadm -m node -T iqn.2007-05.net.my:store.backup -o update \
  -n node.session.timeo.replacement_timeout -v 120
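
To confirm the new value actually took effect, you can read the node record back and filter for the setting (a sketch; the target name is the example one from above):

```shell
# Print the node record for the target and pick out the replacement timeout
iscsiadm -m node -T iqn.2007-05.net.my:store.backup \
    | grep replacement_timeout
```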



-- 
Tomasz Chmielewski
http://wpkg.org

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
open-iscsi group.
To post to this group, send email to open-iscsi@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/open-iscsi
-~--~~~~--~~--~--~---



Re: [PATCH] change iscsi discovery on suse's initd

2008-06-02 Thread Hannes Reinecke

Eli Dorfman wrote:
 Hi Amir,

 Mike Christie wrote:
 Amir Mehler wrote:
 Amir Mehler wrote:

 Hello,

 On open-iscsi, from April 2008 and on, when performing:

 (on suse)
/etc/init.d/open-iscsi start
 a discovery is done in line 126:
iscsiadm -m discovery -t st -p $TARGET_ADDR > /dev/null 2>&1
 This line, and others, were added by Hannes Reinecke on Apr 09 2008.

 The problem with this discovery is that it modifies old session
 parameters of the nodes.
 Yes, it does with the stock open-iscsi distribution.
 For SUSE I've added a patch to change the default.
 
 Why do we have to run discovery on every open-iscsi start?
 The discovery process should be initiated by the user on demand.
 
Because.

This is (was?) required for static IP addresses, as some switches
might take quite some time to update their internal routing tables,
during which time any connect() attempt will return -EHOSTUNREACH.
And as the kernel code has no concept of an initial connection
establishment, it will fail the connection.
So we're doing a discovery first to set up the connection, thus
ensuring that the kernel will be able to log into the server properly.

Cheers,

Hannes
-- 
Dr. Hannes Reinecke   zSeries  Storage
[EMAIL PROTECTED] +49 911 74053 688
SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Markus Rex, HRB 16746 (AG Nürnberg)




Re: Feature: Support sysfs/firmware/ibft parsing.

2008-06-02 Thread Konrad Rzeszutek

 
 Konrad, could you convert this to the sysfs api in the open-iscsi git tree?

Sure. But it will take a bit of time (a few other things on my plate right 
now).




Re: Install XEN on CentOS iscsi Root

2008-06-02 Thread Konrad Rzeszutek

On Fri, May 30, 2008 at 05:12:03PM -0700, a s p a s i a wrote:
 
 Hey Konrad 
 
 my iscsiroot image did not boot ... i'm wondering if i had a corrupt
 installation ...
 
 it goes through the pxelinux config stage and then stops and says
 corrupt boot image, and the boot: prompt appears ...
 
 i will ponder further on this if you have any thoughts would be
 greatly appreciated 

Try running 'mkinitrd -v /boot/initrd-2.6.18-53.1.21.el5xen.img 
2.6.18-53.1.21.el5xen'
and see if it completes without any failures. You should see:
Found iscsi component
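
If you want to double-check that the image really contains the iSCSI bits, you can also list its contents (a sketch; RHEL/CentOS 5 initrds are gzip-compressed cpio archives, and the path is the one assumed above):

```shell
# List the files packed into the initrd and look for the iscsi pieces
zcat /boot/initrd-2.6.18-53.1.21.el5xen.img | cpio -it | grep -i iscsi
```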





Re: Design Questions

2008-06-02 Thread Konrad Rzeszutek

On Mon, Jun 02, 2008 at 01:32:15PM -0300, Arturo 'Buanzo' Busleiman wrote:
 
 On May 30, 10:29 am, Konrad Rzeszutek [EMAIL PROTECTED] wrote:
   If you run iSCSI on each guests you end up with overhead.
 
 I decided the same. So I'm running iscsi on the host, and via 
 /dev/disk/by-path/whatever provide the guests with SAN storage.
 
 Bad thing there, for each virtual disk of the san, I get two /dev 
 entries, so I'm wondering how to setup the multipath over those two 
 /dev/disk/by-path entries (one over each controller).
 
 I also noticed the IPv6 thing, so I ended up specifying the IP by hand, 
 and using two iscsiadm commands for each discovery / session initiation. 

That works, though it's a bit of a hack. Why not just use IPv4 on both interfaces?

 Also, as you said, link aggregation makes no sense over crossover and 
 two different interfaces.

Well, you could use load balancing, or play with ifaces on your two NICs
and take advantage of the two Ethernet cables from your target. 

What this means is that you can set up /dev/sdc to go over
one of your NICs, and /dev/sdf over the other. For that, look in
the README file and read up on ifaces. This is the poor man's fine-grained
NIC configuration.

Or you can use load balancing, where you bond both interfaces into one, but
for that you need a switch. The same goes for link aggregation or
link failover.

But as said before, you are using crossover cables, so go with ifaces to
take advantage of setting up two sessions, one on each NIC.
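
For reference, a minimal ifaces setup might look like the sketch below; eth0/eth1 and the portal address are assumptions, so treat the README as authoritative:

```shell
# Create one iface record per NIC and bind each to a network device
iscsiadm -m iface -I iface0 -o new
iscsiadm -m iface -I iface0 -o update -n iface.net_ifacename -v eth0
iscsiadm -m iface -I iface1 -o new
iscsiadm -m iface -I iface1 -o update -n iface.net_ifacename -v eth1

# Discover and log in through both ifaces, giving two sessions per target
iscsiadm -m discovery -t st -p 192.168.130.101 -I iface0 -I iface1
iscsiadm -m node -l
```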

 
 What I'm NOT doing, is LVM. I wanted to go one layer at a time, and 
 adding LVM was too much for my limited time in here.
 
 So, currently, I have two remaining issues:
 
 1) setup multipath

That is pretty easy. Just install the package, and the two block devices
(or four if you are using a dual-controller) will show up as
/dev/mapper/<some really long UUID>, on which you can use LVM.
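
Once multipath-tools is installed and the daemon is running, a quick way to check that both paths got grouped under one map (a sketch):

```shell
multipath -ll       # show each map, its WWID, and the paths it groups
ls /dev/mapper/     # the long WWID-named device appears here
```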

 2) **URGENT**: I've added a second virtual disk and mapped it to my host 
 (SAN is an MD3000i, host is ubuntu server with 2.6.24-17, i'm waiting 
 for 2.6.25 which fixes the skb broadcast bug it seems).

Huh? What skb broadcast bug?

 If I use hwinfo, I can see the virtual disk (over /dev/sdc and the 2nd 
 entry, /dev/sdf). Over fdisk, I get NOTHING.

fdisk -l doesn't give you data?

 
 Here's the dmesg output, hwinfo output, and fdisk output:
 
 HWINFO
 ==
 40: SCSI 300.1: 10600 Disk
   [Created at block.222]
   Unique ID: uBVf.EABbh0DH0_1
   SysFS ID: /block/sdc
   SysFS BusID: 3:0:0:1
   SysFS Device Link: /devices/platform/host3/session1/target3:0:0/3:0:0:1
   Hardware Class: disk
   Model: DELL MD3000i
   Vendor: DELL
   Device: MD3000i
   Revision: 0670
   Serial ID: 84L000I
   Driver: sd
   Device File: /dev/sdc (/dev/sg4)
   Device Files: /dev/sdc, 
 /dev/disk/by-id/scsi-36001e4f00043a3da04dc4843982f, 
 /dev/disk/by-path/ip-192.168.131.101:3260-iscsi-iqn.1984-05.com.dell:powervault.6001e4f0004326c1482127e3-lun-1
   Device Number: block 8:32-8:47 (char 21:4)
   Geometry (Logical): CHS 102400/64/32
   Size: 209715200 sectors a 512 bytes
   Drive status: no medium
   Config Status: cfg=new, avail=yes, need=no, active=unknown
 
 That "Drive status: no medium" drives me crazy. For comparison, this is 

Uh.. don't go crazy. Just install multipath and make sure you have this
configuration entry in the multipath.conf file:

 device {
        vendor                  "DELL"
        product                 "MD3000i"
        product_blacklist       "Universal Xport"
        features                "1 queue_if_no_path"
        path_checker            rdac
        hardware_handler        "1 rdac"
        path_grouping_policy    group_by_prio
        prio                    rdac
        failback                immediate
 }

Keep in mind that depending on what version of multipath you install
you might not have the 'rdac' path checker or that the path priority
program is called differently. Get the latest one and see what config
options you need.

(The above works with SLES10 SP2).

 the output for the first virtual disk I created, the one I can access:
 
 41: SCSI 300.0: 10600 Disk
   [Created at block.222]
   Unique ID: R0Fb.EABbh0DH0_1
   SysFS ID: /block/sdb
   SysFS BusID: 3:0:0:0
   SysFS Device Link: /devices/platform/host3/session1/target3:0:0/3:0:0:0
   Hardware Class: disk
   Model: DELL MD3000i
   Vendor: DELL
   Device: MD3000i
   Revision: 0670
   Serial ID: 84L000I
   Driver: sd
   Device File: /dev/sdb (/dev/sg3)
   Device Files: /dev/sdb, 
 /dev/disk/by-id/scsi-36001e4f0004326c105b3483e9c7a, 
 /dev/disk/by-path/ip-192.168.131.101:3260-iscsi-iqn.1984-05.com.dell:powervault.6001e4f0004326c1482127e3-lun-0
   Device Number: block 8:16-8:31 (char 21:3)
   Geometry (Logical): CHS 261/255/63
   Size: 4194304 sectors a 512 bytes
   Config Status: cfg=new, avail=yes, need=no, active=unknown
 

That looks wrong. How many controllers do you have?  I wonder if this is
related to the other session that didn't log in. Have you set the LUN 1 disk
to have a preferred controller, namely the one using IPv6?

Re: Design Questions

2008-06-02 Thread Arturo 'Buanzo' Busleiman

On May 30, 10:29 am, Konrad Rzeszutek [EMAIL PROTECTED] wrote:
  If you run iSCSI on each guests you end up with overhead.

I decided the same. So I'm running iscsi on the host, and via 
/dev/disk/by-path/whatever provide the guests with SAN storage.

Bad thing there, for each virtual disk of the san, I get two /dev 
entries, so I'm wondering how to setup the multipath over those two 
/dev/disk/by-path entries (one over each controller).

I also noticed the IPv6 thing, so I ended up specifying the IP by hand, 
and using two iscsiadm commands for each discovery / session initiation. 
Also, as you said, link aggregation makes no sense over crossover and 
two different interfaces.

What I'm NOT doing, is LVM. I wanted to go one layer at a time, and 
adding LVM was too much for my limited time in here.

So, currently, I have two remaining issues:

1) setup multipath
2) **URGENT**: I've added a second virtual disk and mapped it to my host 
(SAN is an MD3000i, host is ubuntu server with 2.6.24-17, i'm waiting 
for 2.6.25 which fixes the skb broadcast bug it seems).
If I use hwinfo, I can see the virtual disk (over /dev/sdc and the 2nd 
entry, /dev/sdf). Over fdisk, I get NOTHING.

Here's the dmesg output, hwinfo output, and fdisk output:

HWINFO
==
40: SCSI 300.1: 10600 Disk
  [Created at block.222]
  Unique ID: uBVf.EABbh0DH0_1
  SysFS ID: /block/sdc
  SysFS BusID: 3:0:0:1
  SysFS Device Link: /devices/platform/host3/session1/target3:0:0/3:0:0:1
  Hardware Class: disk
  Model: DELL MD3000i
  Vendor: DELL
  Device: MD3000i
  Revision: 0670
  Serial ID: 84L000I
  Driver: sd
  Device File: /dev/sdc (/dev/sg4)
  Device Files: /dev/sdc, 
/dev/disk/by-id/scsi-36001e4f00043a3da04dc4843982f, 
/dev/disk/by-path/ip-192.168.131.101:3260-iscsi-iqn.1984-05.com.dell:powervault.6001e4f0004326c1482127e3-lun-1
  Device Number: block 8:32-8:47 (char 21:4)
  Geometry (Logical): CHS 102400/64/32
  Size: 209715200 sectors a 512 bytes
  Drive status: no medium
  Config Status: cfg=new, avail=yes, need=no, active=unknown

That "Drive status: no medium" drives me crazy. For comparison, this is 
the output for the first virtual disk I created, the one I can access:

41: SCSI 300.0: 10600 Disk
  [Created at block.222]
  Unique ID: R0Fb.EABbh0DH0_1
  SysFS ID: /block/sdb
  SysFS BusID: 3:0:0:0
  SysFS Device Link: /devices/platform/host3/session1/target3:0:0/3:0:0:0
  Hardware Class: disk
  Model: DELL MD3000i
  Vendor: DELL
  Device: MD3000i
  Revision: 0670
  Serial ID: 84L000I
  Driver: sd
  Device File: /dev/sdb (/dev/sg3)
  Device Files: /dev/sdb, 
/dev/disk/by-id/scsi-36001e4f0004326c105b3483e9c7a, 
/dev/disk/by-path/ip-192.168.131.101:3260-iscsi-iqn.1984-05.com.dell:powervault.6001e4f0004326c1482127e3-lun-0
  Device Number: block 8:16-8:31 (char 21:3)
  Geometry (Logical): CHS 261/255/63
  Size: 4194304 sectors a 512 bytes
  Config Status: cfg=new, avail=yes, need=no, active=unknown

DMESG
=

end_request: I/O error, dev sdc, sector 209715192
end_request: I/O error, dev sdc, sector 0
Buffer I/O error on device sdc, logical block 0
end_request: I/O error, dev sdc, sector 0
end_request: I/O error, dev sdc, sector 209715192
end_request: I/O error, dev sdc, sector 0
Buffer I/O error on device sdc, logical block 0
end_request: I/O error, dev sdc, sector 0
Buffer I/O error on device sdc, logical block 0
end_request: I/O error, dev sdc, sector 0
end_request: I/O error, dev sdc, sector 209715192
end_request: I/O error, dev sdc, sector 0
Buffer I/O error on device sdc, logical block 0
end_request: I/O error, dev sdc, sector 0
end_request: I/O error, dev sdc, sector 209715192
Buffer I/O error on device sdc, logical block 26214399
end_request: I/O error, dev sdc, sector 209715192
end_request: I/O error, dev sdc, sector 0
Buffer I/O error on device sdc, logical block 0
end_request: I/O error, dev sdc, sector 0
end_request: I/O error, dev sdc, sector 209715192
end_request: I/O error, dev sdc, sector 0

The attach-time messages got lost, but I remember this line:
Dev sdc: unable to read RDB block 0

FDISK
=
[EMAIL PROTECTED]:~# fdisk -l /dev/sdc
[EMAIL PROTECTED]:~# fdisk /dev/sdc

Unable to read /dev/sdc

Might the disk still be initializing? The Dell client says it's finished...

Thanks!!





Re: Design questions

2008-06-02 Thread Arturo 'Buanzo' Busleiman

Here is the output you asked for, thanks!

[EMAIL PROTECTED]:~# iscsiadm -m session -P 3
iSCSI Transport Class version 2.0-724
iscsiadm version 2.0-865
Target: iqn.1984-05.com.dell:powervault.6001e4f0004326c1482127e3
Current Portal: 192.168.131.101:3260,1
Persistent Portal: 192.168.131.101:3260,1
**
Interface:
**
Iface Name: default
Iface Transport: tcp
Iface IPaddress: 192.168.131.1
Iface HWaddress: default
Iface Netdev: default
SID: 1
iSCSI Connection State: LOGGED IN
Internal iscsid Session State: NO CHANGE

Negotiated iSCSI params:

HeaderDigest: None
DataDigest: None
MaxRecvDataSegmentLength: 131072
MaxXmitDataSegmentLength: 65536
FirstBurstLength: 8192
MaxBurstLength: 262144
ImmediateData: Yes
InitialR2T: Yes
MaxOutstandingR2T: 1

Attached SCSI devices:

Host Number: 3  State: running
scsi3 Channel 00 Id 0 Lun: 0
Attached scsi disk sdb  State: running
scsi3 Channel 00 Id 0 Lun: 1
Attached scsi disk sdc  State: running
scsi3 Channel 00 Id 0 Lun: 31
Attached scsi disk sdd  State: running
Current Portal: 192.168.130.101:3260,1
Persistent Portal: 192.168.130.101:3260,1
**
Interface:
**
Iface Name: default
Iface Transport: tcp
Iface IPaddress: 192.168.130.2
Iface HWaddress: default
Iface Netdev: default
SID: 2
iSCSI Connection State: LOGGED IN
Internal iscsid Session State: NO CHANGE

Negotiated iSCSI params:

HeaderDigest: None
DataDigest: None
MaxRecvDataSegmentLength: 131072
MaxXmitDataSegmentLength: 65536
FirstBurstLength: 8192
MaxBurstLength: 262144
ImmediateData: Yes
InitialR2T: Yes
MaxOutstandingR2T: 1

Attached SCSI devices:

Host Number: 4  State: running
scsi4 Channel 00 Id 0 Lun: 0
Attached scsi disk sde  State: running
scsi4 Channel 00 Id 0 Lun: 1
Attached scsi disk sdf  State: running
scsi4 Channel 00 Id 0 Lun: 31
Attached scsi disk sdg  State: running


/dev/sdc == /dev/sdf - the one I can't use

[EMAIL PROTECTED]:~# iscsiadm -m discovery -t st -p 192.168.131.101
192.168.130.101:3260,1 
iqn.1984-05.com.dell:powervault.6001e4f0004326c1482127e3
192.168.130.102:3260,2 
iqn.1984-05.com.dell:powervault.6001e4f0004326c1482127e3
192.168.131.101:3260,1 
iqn.1984-05.com.dell:powervault.6001e4f0004326c1482127e3
192.168.131.102:3260,2 
iqn.1984-05.com.dell:powervault.6001e4f0004326c1482127e3
[fe80::::021e:4fff:fe43:26c3]:3260,1 
iqn.1984-05.com.dell:powervault.6001e4f0004326c1482127e3
[fe80::::021e:4fff:fe43:26c5]:3260,1 
iqn.1984-05.com.dell:powervault.6001e4f0004326c1482127e3
[fe80::::021e:4fff:fe43:a3dc]:3260,2 
iqn.1984-05.com.dell:powervault.6001e4f0004326c1482127e3
[fe80::::021e:4fff:fe43:a3de]:3260,2 
iqn.1984-05.com.dell:powervault.6001e4f0004326c1482127e3


Also, what about this:
Dev sdc: unable to read RDB block 0




RE: Problems accessing virtual disks on MD3000i, was: RE: Design Questions

2008-06-02 Thread Bryan Mclellan

You mean the preferred controller module on the MD3000i SAN, I assume.

I'd make sure you can ping all of the nodes (four if you have two controllers).
Discover them all via sendtargets and log in to all of them.

 iscsiadm -m discovery -t st -p 192.168.130.101
 iscsiadm -m node -l
 fdisk -l

You should end up with two or four devices (depending on the number of 
controllers): one for each virtual disk mapped to that host, for each node 
you've logged in to. Removing the Access mapping should be just fine. 
fdisk -l should print a partition table, or lack thereof, for all 
the disks it can read (which should be half the number of nodes).

It took me a while to figure out that I couldn't access the disks via the 
second controller; playing around with iscsiadm a lot is what finally clued 
me in to it. It helped that I already had a test partition on the virtual disk, 
created elsewhere, so 'cat /proc/partitions' revealed that the partition was 
only visible on two of the disk devices, not all four.

Bryan

As a side note, I'd double check your subnet configurations on the controllers. 
Each controller should only have one interface on a specific subnet. I don't 
think this is related to your current problem though.

-Original Message-
From: open-iscsi@googlegroups.com [mailto:[EMAIL PROTECTED] On Behalf Of Arturo 
'Buanzo' Busleiman
Sent: Monday, June 02, 2008 12:08 PM
To: open-iscsi@googlegroups.com
Subject: Re: Problems accessing virtual disks on MD3000i, was: RE: Design 
Questions


Hi Bryan,

I changed my setup to only initiate sessions to the primary domain
controller. This is my dmesg output now:

snip

Apparently, no changes, except that there are no partitions sde through sdg.
hwinfo and fdisk still report the same.

Btw, I also removed the Access DellUtility partition. No difference
either, except that /dev/sdd disappeared :)

Any other ideas?







Re: Problems accessing virtual disks on MD3000i, was: RE: Design Questions

2008-06-02 Thread Arturo 'Buanzo' Busleiman

OK, don't ask me why, but I just added a 20 GB virtual disk on LUN 3, and 
BANG, the host SAW it with no problems whatsoever.

Maybe 100 GB and/or LUN 1 makes a difference? I hope not!

Buanzo,
a very puzzled man.





Re: Connection Errors

2008-06-02 Thread swejis

Anything else I can do/collect/test ?

Brgds Jonas





Re: Problems accessing virtual disks on MD3000i, was: RE: Design Questions

2008-06-02 Thread Konrad Rzeszutek

 Apparently, no changes, except that there are no partitions sde through sdg.
 hwinfo and fdisk still report the same.
 
 Btw, I also removed the Access DellUtility partition. No difference 
 either, except that /dev/sdd disappeared :)
 
 Any other ideas?

multipath-tools? Did you install it?
