On Mon, Jun 02, 2008 at 01:32:15PM -0300, Arturo 'Buanzo' Busleiman wrote:
> 
> On May 30, 10:29 am, Konrad Rzeszutek <[EMAIL PROTECTED]> wrote:
>  > If you run iSCSI on each guests you end up with overhead.
> 
> I decided the same. So I'm running iscsi on the host, and via 
> /dev/disk/by-path/whatever provide the guests with SAN storage.
> 
> Bad thing there, for each virtual disk of the san, I get two /dev 
> entries, so I'm wondering how to setup the multipath over those two 
> /dev/disk/by-path entries (one over each controller).
> 
> I also noticed the IPv6 thing, so I ended up specifying the IP by hand, 
> and using two iscsiadm commands for each discovery / session initiation. 

That works, but it is a bit of a hack. Why not just use the IPv4 addresses on both interfaces?
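
For reference, the manual per-portal approach usually boils down to one discovery plus
one login per controller portal; the second portal address and the target IQN below are
just placeholders, so substitute your own:

  # discover the targets behind each controller portal
  iscsiadm -m discovery -t sendtargets -p 192.168.131.101:3260
  iscsiadm -m discovery -t sendtargets -p <second-portal-ip>:3260

  # then log in to the discovered node records
  iscsiadm -m node -T <target-iqn> -p 192.168.131.101:3260 --login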

> Also, as you said, link aggregation makes no sense over crossover and 
> two different interfaces.

Well, you could use load balancing, or play with ifaces on your two NICs
and take advantage of the two Ethernet cables from your target.

What this means is that you can set up /dev/sdc to go over
one of your NICs, and /dev/sdf over the other. For that, look in
the README file and read up on ifaces. This is the poor man's fine-grained
NIC configuration.

Or you can use load balancing, where you bond both interfaces into one, but
for that you need a switch. The same goes for link aggregation or
link failover.

But as mentioned before, you are using crossover cables, so go with ifaces
and take advantage of setting up a session over each of the two NICs.
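
If it helps, the iface setup from the README goes roughly like this; the interface
names (eth1/eth2) and the portal address are assumptions, so adjust them to your box:

  # create one iface record per NIC and bind it to the network interface
  iscsiadm -m iface -I iface0 --op=new
  iscsiadm -m iface -I iface0 --op=update -n iface.net_ifacename -v eth1
  iscsiadm -m iface -I iface1 --op=new
  iscsiadm -m iface -I iface1 --op=update -n iface.net_ifacename -v eth2

  # run discovery through both ifaces, then log in; this gives you
  # one session (and one /dev/sdX) per NIC
  iscsiadm -m discovery -t sendtargets -p 192.168.131.101:3260 -I iface0 -I iface1
  iscsiadm -m node --loginall=all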

> 
> What I'm NOT doing, is LVM. I wanted to go one layer at a time, and 
> adding LVM was too much for my limited time in here.
> 
> So, currently, I have two remaining issues:
> 
> 1) setup multipath

That is pretty easy. Just install the package, and the two block devices
(or four if you are using a dual-controller setup) will be combined into a single
/dev/mapper/<some really long UUID> device on which you can use LVM.
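
On Ubuntu that is roughly the following (the package and command names are what
I'd expect on that distribution, so double-check them):

  apt-get install multipath-tools
  # once the daemon has picked up the paths, list the maps
  multipath -ll
  ls -l /dev/mapper/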

> 2) **URGENT**: I've added a second virtual disk and mapped it to my host 
> (SAN is an MD3000i, host is ubuntu server with 2.6.24-17, i'm waiting 
> for 2.6.25 which fixes the skb broadcast bug it seems).

Huh? What skb broadcast bug?

> If I use hwinfo, I can see the virtual disk (over /dev/sdc and the 2nd 
> entry, /dev/sdf). Over fdisk, I get NOTHING.

fdisk -l doesn't give you data?

> 
> Here's the dmesg output, hwinfo output, and fdisk output:
> 
> HWINFO
> ======
> 40: SCSI 300.1: 10600 Disk
>   [Created at block.222]
>   Unique ID: uBVf.EABbh0DH0_1
>   SysFS ID: /block/sdc
>   SysFS BusID: 3:0:0:1
>   SysFS Device Link: /devices/platform/host3/session1/target3:0:0/3:0:0:1
>   Hardware Class: disk
>   Model: "DELL MD3000i"
>   Vendor: "DELL"
>   Device: "MD3000i"
>   Revision: "0670"
>   Serial ID: "84L000I"
>   Driver: "sd"
>   Device File: /dev/sdc (/dev/sg4)
>   Device Files: /dev/sdc, 
> /dev/disk/by-id/scsi-36001e4f00043a3da000004dc4843982f, 
> /dev/disk/by-path/ip-192.168.131.101:3260-iscsi-iqn.1984-05.com.dell:powervault.6001e4f0004326c100000000482127e3-lun-1
>   Device Number: block 8:32-8:47 (char 21:4)
>   Geometry (Logical): CHS 102400/64/32
>   Size: 209715200 sectors a 512 bytes
>   Drive status: no medium
>   Config Status: cfg=new, avail=yes, need=no, active=unknown
> 
> That "drive status: no medium" drives me crazy. For comparison, this is 

Uh, don't go crazy. Just install multipath and make sure you have this
device entry in the multipath.conf file:

 devices {
     # MD3000i is an RDAC (active/passive) array
     device {
         vendor                  "DELL"
         product                 "MD3000i"
         product_blacklist       "Universal Xport"
         features                "1 queue_if_no_path"
         path_checker            rdac
         hardware_handler        "1 rdac"
         path_grouping_policy    group_by_prio
         prio                    "rdac"
         failback                immediate
     }
 }

Keep in mind that depending on which version of multipath-tools you install,
you might not have the 'rdac' path checker, or the path priority
program might be called differently. Get the latest version and see which config
options you need.

(The above works with SLES10 SP2).
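
After editing the config you would flush and reload multipath and check that the
MD3000i paths got grouped; the init script name below is what I'd expect on Ubuntu:

  # flush stale maps, restart the daemon, and rebuild the maps
  multipath -F
  /etc/init.d/multipath-tools restart
  multipath -v2
  multipath -ll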

> the output for the first virtual disk I created, the one I can access:
> 
> 41: SCSI 300.0: 10600 Disk
>   [Created at block.222]
>   Unique ID: R0Fb.EABbh0DH0_1
>   SysFS ID: /block/sdb
>   SysFS BusID: 3:0:0:0
>   SysFS Device Link: /devices/platform/host3/session1/target3:0:0/3:0:0:0
>   Hardware Class: disk
>   Model: "DELL MD3000i"
>   Vendor: "DELL"
>   Device: "MD3000i"
>   Revision: "0670"
>   Serial ID: "84L000I"
>   Driver: "sd"
>   Device File: /dev/sdb (/dev/sg3)
>   Device Files: /dev/sdb, 
> /dev/disk/by-id/scsi-36001e4f0004326c1000005b3483e9c7a, 
> /dev/disk/by-path/ip-192.168.131.101:3260-iscsi-iqn.1984-05.com.dell:powervault.6001e4f0004326c100000000482127e3-lun-0
>   Device Number: block 8:16-8:31 (char 21:3)
>   Geometry (Logical): CHS 261/255/63
>   Size: 4194304 sectors a 512 bytes
>   Config Status: cfg=new, avail=yes, need=no, active=unknown
> 

That looks wrong. How many controllers do you have? I wonder if this is
related to the other session that didn't log in. Have you set LUN1 to have
a preferred controller that is the one using IPv6, and LUN0 to use the
controller that is using IPv4?

Can you attach the output of 'iscsiadm -m session -P 3'? And also the output of
'iscsiadm -m discovery -t st -p 192.168.131.101'?


> DMESG
> =====
> 
> end_request: I/O error, dev sdc, sector 209715192
> end_request: I/O error, dev sdc, sector 0
> Buffer I/O error on device sdc, logical block 0
> end_request: I/O error, dev sdc, sector 0
> end_request: I/O error, dev sdc, sector 209715192
> end_request: I/O error, dev sdc, sector 0
> Buffer I/O error on device sdc, logical block 0
> end_request: I/O error, dev sdc, sector 0
> Buffer I/O error on device sdc, logical block 0
> end_request: I/O error, dev sdc, sector 0
> end_request: I/O error, dev sdc, sector 209715192
> end_request: I/O error, dev sdc, sector 0
> Buffer I/O error on device sdc, logical block 0
> end_request: I/O error, dev sdc, sector 0
> end_request: I/O error, dev sdc, sector 209715192
> Buffer I/O error on device sdc, logical block 26214399
> end_request: I/O error, dev sdc, sector 209715192
> end_request: I/O error, dev sdc, sector 0
> Buffer I/O error on device sdc, logical block 0
> end_request: I/O error, dev sdc, sector 0
> end_request: I/O error, dev sdc, sector 209715192
> end_request: I/O error, dev sdc, sector 0
> 
> The attach-time messages got lost, but I remember this line:
> Dev sdc: unable to read RDB block 0
> 
> FDISK
> =====
> [EMAIL PROTECTED]:~# fdisk -l /dev/sdc
> [EMAIL PROTECTED]:~# fdisk /dev/sdc
> 
> Unable to read /dev/sdc
> 
> Might the disk still be initializing? The Dell client says it's finished...

Nope. That just means you need to get the latest multipath code
to take advantage of your RDAC controller. The other block device
isn't activated, and if you use multipath it will be accessible (well, kind of:
it will be a "passive" path, meaning that if the connection to the
active one dies, the I/O will fail over to the passive one).
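
Once multipath is running you can see the active and passive (ghost) path groups
for yourself by querying the daemon; the exact output wording depends on your
multipath-tools version:

  # ask the running daemon for its maps and the state of each path
  multipathd -k"show maps"
  multipathd -k"show paths"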

> 
> Thanks!!
> 
> 
> 
