Re: Design Questions

2008-06-02 Thread Konrad Rzeszutek

On Mon, Jun 02, 2008 at 01:32:15PM -0300, Arturo 'Buanzo' Busleiman wrote:
 
 On May 30, 10:29 am, Konrad Rzeszutek [EMAIL PROTECTED] wrote:
   If you run iSCSI on each guest you end up with overhead.
 
 I decided the same. So I'm running iscsi on the host, and via 
 /dev/disk/by-path/whatever provide the guests with SAN storage.
 
 Bad thing there, for each virtual disk of the san, I get two /dev 
 entries, so I'm wondering how to setup the multipath over those two 
 /dev/disk/by-path entries (one over each controller).
 
 I also noticed the IPv6 thing, so I ended up specifying the IP by hand, 
 and using two iscsiadm commands for each discovery / session initiation. 

That works, but it's a bit of a hack. Why not just use IPv4 on both interfaces?

 Also, as you said, link aggregation makes no sense over crossover and 
 two different interfaces.

Well, you could use load balancing or play with ifaces with your two NICs
and take advantage of the two Ethernet cables from your target. 

What this means is that you can set up /dev/sdc to go over
one of your NICs, and /dev/sdf over the other. For that, look in
the README file and read up on ifaces. This is the poor man's fine-grained
NIC configuration.
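
Roughly something like this (iface0/iface1 and eth0/eth1 are just placeholders
for your two NICs; double-check the exact syntax against the README for the
open-iscsi version you have):

  iscsiadm -m iface -I iface0 -o new
  iscsiadm -m iface -I iface0 -o update -n iface.net_ifacename -v eth0
  iscsiadm -m iface -I iface1 -o new
  iscsiadm -m iface -I iface1 -o update -n iface.net_ifacename -v eth1
  iscsiadm -m discovery -t st -p 192.168.130.101 -I iface0 -I iface1
  iscsiadm -m node -L all

That should leave you with one session per NIC to the same target.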

Or you can use load balancing, where you bond both interfaces into one, but
for that you need a switch. The same goes for link aggregation or
link failover.

But as said before, you are using crossover cables, so go with ifaces and
take advantage of running one session over each NIC.

 
 What I'm NOT doing, is LVM. I wanted to go one layer at a time, and 
 adding LVM was too much for my limited time in here.
 
 So, currently, I have two remaining issues:
 
 1) setup multipath

That is pretty easy. Just install the package and the two block devices
(or four if you are using a dual-controller setup) will show up as
/dev/mapper/<some really long UUID>, on which you can use LVM.
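
Once it is installed, something like this should show the paths collapsing
into one map (the WWID below is just a placeholder):

  multipath -ll
  fdisk -l /dev/mapper/<wwid>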

 2) **URGENT**: I've added a second virtual disk and mapped it to my host 
 (SAN is an MD3000i, host is Ubuntu Server with 2.6.24-17, I'm waiting 
 for 2.6.25 which fixes the skb broadcast bug, it seems).

Huh? What skb broadcast bug?

 If I use hwinfo, I can see the virtual disk (over /dev/sdc and the 2nd 
 entry, /dev/sdf). Over fdisk, I get NOTHING.

fdisk -l doesn't give you data?
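
If it prints nothing at all, a couple of quick sanity checks might help narrow
it down (sg_inq comes from sg3_utils; whether that is installed is my
assumption):

  sg_inq /dev/sdc                               # does the LUN answer INQUIRY?
  blockdev --getsize64 /dev/sdc                 # does the kernel see a non-zero size?
  dd if=/dev/sdc of=/dev/null bs=512 count=1    # can you read sector 0 at all?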

 
 Here's the dmesg output, hwinfo output, and fdisk output:
 
 HWINFO
 ==
 40: SCSI 300.1: 10600 Disk
   [Created at block.222]
   Unique ID: uBVf.EABbh0DH0_1
   SysFS ID: /block/sdc
   SysFS BusID: 3:0:0:1
   SysFS Device Link: /devices/platform/host3/session1/target3:0:0/3:0:0:1
   Hardware Class: disk
   Model: DELL MD3000i
   Vendor: DELL
   Device: MD3000i
   Revision: 0670
   Serial ID: 84L000I
   Driver: sd
   Device File: /dev/sdc (/dev/sg4)
   Device Files: /dev/sdc, 
 /dev/disk/by-id/scsi-36001e4f00043a3da04dc4843982f, 
 /dev/disk/by-path/ip-192.168.131.101:3260-iscsi-iqn.1984-05.com.dell:powervault.6001e4f0004326c1482127e3-lun-1
   Device Number: block 8:32-8:47 (char 21:4)
   Geometry (Logical): CHS 102400/64/32
   Size: 209715200 sectors a 512 bytes
   Drive status: no medium
   Config Status: cfg=new, avail=yes, need=no, active=unknown
 
 That drive status: no medium drives me crazy. For comparison, this is 

Uh.. don't go crazy. Just install multipath and make sure you have this
configuration entry in the multipath.conf file:

 device {
         vendor                  "DELL"
         product                 "MD3000i"
         product_blacklist       "Universal Xport"
         features                "1 queue_if_no_path"
         path_checker            rdac
         hardware_handler        "1 rdac"
         path_grouping_policy    group_by_prio
         prio                    rdac
         failback                immediate
 }

Keep in mind that depending on which version of multipath you install,
you might not have the 'rdac' path checker, or the path priority
program might be called differently. Get the latest one and see which config
options you need.

(The above works with SLES10 SP2).
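
After editing the config, something along these lines should pick it up (the
init script name is my guess for your distro; the multipath flags are the
generic way to do it):

  /etc/init.d/multipath-tools restart
  multipath -F      # flush any stale maps
  multipath -v2     # rebuild them with the new config
  multipath -ll     # both paths should now sit under one WWID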

 the output for the first virtual disk I created, the one I can access:
 
 41: SCSI 300.0: 10600 Disk
   [Created at block.222]
   Unique ID: R0Fb.EABbh0DH0_1
   SysFS ID: /block/sdb
   SysFS BusID: 3:0:0:0
   SysFS Device Link: /devices/platform/host3/session1/target3:0:0/3:0:0:0
   Hardware Class: disk
   Model: DELL MD3000i
   Vendor: DELL
   Device: MD3000i
   Revision: 0670
   Serial ID: 84L000I
   Driver: sd
   Device File: /dev/sdb (/dev/sg3)
   Device Files: /dev/sdb, 
 /dev/disk/by-id/scsi-36001e4f0004326c105b3483e9c7a, 
 /dev/disk/by-path/ip-192.168.131.101:3260-iscsi-iqn.1984-05.com.dell:powervault.6001e4f0004326c1482127e3-lun-0
   Device Number: block 8:16-8:31 (char 21:3)
   Geometry (Logical): CHS 261/255/63
   Size: 4194304 sectors a 512 bytes
   Config Status: cfg=new, avail=yes, need=no, active=unknown
 

That looks wrong. How many controllers do you have?  I wonder if this is
related to the other session that didn't log in. Have you set the LUN 1 disk's
preferred controller to be the one using IPv6?

Re: Design Questions

2008-06-02 Thread Arturo 'Buanzo' Busleiman

On May 30, 10:29 am, Konrad Rzeszutek [EMAIL PROTECTED] wrote:
  If you run iSCSI on each guest you end up with overhead.

I decided the same. So I'm running iscsi on the host, and via 
/dev/disk/by-path/whatever provide the guests with SAN storage.

Bad thing there, for each virtual disk of the san, I get two /dev 
entries, so I'm wondering how to setup the multipath over those two 
/dev/disk/by-path entries (one over each controller).

I also noticed the IPv6 thing, so I ended up specifying the IP by hand, 
and using two iscsiadm commands for each discovery / session initiation. 
Also, as you said, link aggregation makes no sense over crossover and 
two different interfaces.

What I'm NOT doing, is LVM. I wanted to go one layer at a time, and 
adding LVM was too much for my limited time in here.

So, currently, I have two remaining issues:

1) setup multipath
2) **URGENT**: I've added a second virtual disk and mapped it to my host 
(SAN is an MD3000i, host is Ubuntu Server with 2.6.24-17, I'm waiting 
for 2.6.25 which fixes the skb broadcast bug, it seems).
If I use hwinfo, I can see the virtual disk (over /dev/sdc and the 2nd 
entry, /dev/sdf). Over fdisk, I get NOTHING.

Here's the dmesg output, hwinfo output, and fdisk output:

HWINFO
==
40: SCSI 300.1: 10600 Disk
  [Created at block.222]
  Unique ID: uBVf.EABbh0DH0_1
  SysFS ID: /block/sdc
  SysFS BusID: 3:0:0:1
  SysFS Device Link: /devices/platform/host3/session1/target3:0:0/3:0:0:1
  Hardware Class: disk
  Model: DELL MD3000i
  Vendor: DELL
  Device: MD3000i
  Revision: 0670
  Serial ID: 84L000I
  Driver: sd
  Device File: /dev/sdc (/dev/sg4)
  Device Files: /dev/sdc, 
/dev/disk/by-id/scsi-36001e4f00043a3da04dc4843982f, 
/dev/disk/by-path/ip-192.168.131.101:3260-iscsi-iqn.1984-05.com.dell:powervault.6001e4f0004326c1482127e3-lun-1
  Device Number: block 8:32-8:47 (char 21:4)
  Geometry (Logical): CHS 102400/64/32
  Size: 209715200 sectors a 512 bytes
  Drive status: no medium
  Config Status: cfg=new, avail=yes, need=no, active=unknown

That drive status: no medium drives me crazy. For comparison, this is 
the output for the first virtual disk I created, the one I can access:

41: SCSI 300.0: 10600 Disk
  [Created at block.222]
  Unique ID: R0Fb.EABbh0DH0_1
  SysFS ID: /block/sdb
  SysFS BusID: 3:0:0:0
  SysFS Device Link: /devices/platform/host3/session1/target3:0:0/3:0:0:0
  Hardware Class: disk
  Model: DELL MD3000i
  Vendor: DELL
  Device: MD3000i
  Revision: 0670
  Serial ID: 84L000I
  Driver: sd
  Device File: /dev/sdb (/dev/sg3)
  Device Files: /dev/sdb, 
/dev/disk/by-id/scsi-36001e4f0004326c105b3483e9c7a, 
/dev/disk/by-path/ip-192.168.131.101:3260-iscsi-iqn.1984-05.com.dell:powervault.6001e4f0004326c1482127e3-lun-0
  Device Number: block 8:16-8:31 (char 21:3)
  Geometry (Logical): CHS 261/255/63
  Size: 4194304 sectors a 512 bytes
  Config Status: cfg=new, avail=yes, need=no, active=unknown

DMESG
=

end_request: I/O error, dev sdc, sector 209715192
end_request: I/O error, dev sdc, sector 0
Buffer I/O error on device sdc, logical block 0
end_request: I/O error, dev sdc, sector 0
end_request: I/O error, dev sdc, sector 209715192
end_request: I/O error, dev sdc, sector 0
Buffer I/O error on device sdc, logical block 0
end_request: I/O error, dev sdc, sector 0
Buffer I/O error on device sdc, logical block 0
end_request: I/O error, dev sdc, sector 0
end_request: I/O error, dev sdc, sector 209715192
end_request: I/O error, dev sdc, sector 0
Buffer I/O error on device sdc, logical block 0
end_request: I/O error, dev sdc, sector 0
end_request: I/O error, dev sdc, sector 209715192
Buffer I/O error on device sdc, logical block 26214399
end_request: I/O error, dev sdc, sector 209715192
end_request: I/O error, dev sdc, sector 0
Buffer I/O error on device sdc, logical block 0
end_request: I/O error, dev sdc, sector 0
end_request: I/O error, dev sdc, sector 209715192
end_request: I/O error, dev sdc, sector 0

The attach-time messages got lost, but I remember this line:
Dev sdc: unable to read RDB block 0

FDISK
=
[EMAIL PROTECTED]:~# fdisk -l /dev/sdc
[EMAIL PROTECTED]:~# fdisk /dev/sdc

Unable to read /dev/sdc

Might the disk still be initializing? The Dell client says it's finished...

Thanks!!





Re: Design questions

2008-06-02 Thread Arturo 'Buanzo' Busleiman

Here is the output you ask for, thanks!

[EMAIL PROTECTED]:~# iscsiadm -m session -P 3
iSCSI Transport Class version 2.0-724
iscsiadm version 2.0-865
Target: iqn.1984-05.com.dell:powervault.6001e4f0004326c1482127e3
Current Portal: 192.168.131.101:3260,1
Persistent Portal: 192.168.131.101:3260,1
**
Interface:
**
Iface Name: default
Iface Transport: tcp
Iface IPaddress: 192.168.131.1
Iface HWaddress: default
Iface Netdev: default
SID: 1
iSCSI Connection State: LOGGED IN
Internal iscsid Session State: NO CHANGE

Negotiated iSCSI params:

HeaderDigest: None
DataDigest: None
MaxRecvDataSegmentLength: 131072
MaxXmitDataSegmentLength: 65536
FirstBurstLength: 8192
MaxBurstLength: 262144
ImmediateData: Yes
InitialR2T: Yes
MaxOutstandingR2T: 1

Attached SCSI devices:

Host Number: 3  State: running
scsi3 Channel 00 Id 0 Lun: 0
Attached scsi disk sdb  State: running
scsi3 Channel 00 Id 0 Lun: 1
Attached scsi disk sdc  State: running
scsi3 Channel 00 Id 0 Lun: 31
Attached scsi disk sdd  State: running
Current Portal: 192.168.130.101:3260,1
Persistent Portal: 192.168.130.101:3260,1
**
Interface:
**
Iface Name: default
Iface Transport: tcp
Iface IPaddress: 192.168.130.2
Iface HWaddress: default
Iface Netdev: default
SID: 2
iSCSI Connection State: LOGGED IN
Internal iscsid Session State: NO CHANGE

Negotiated iSCSI params:

HeaderDigest: None
DataDigest: None
MaxRecvDataSegmentLength: 131072
MaxXmitDataSegmentLength: 65536
FirstBurstLength: 8192
MaxBurstLength: 262144
ImmediateData: Yes
InitialR2T: Yes
MaxOutstandingR2T: 1

Attached SCSI devices:

Host Number: 4  State: running
scsi4 Channel 00 Id 0 Lun: 0
Attached scsi disk sde  State: running
scsi4 Channel 00 Id 0 Lun: 1
Attached scsi disk sdf  State: running
scsi4 Channel 00 Id 0 Lun: 31
Attached scsi disk sdg  State: running


/dev/sdc == /dev/sdf - the one I can't use

[EMAIL PROTECTED]:~# iscsiadm -m discovery -t st -p 192.168.131.101
192.168.130.101:3260,1 
iqn.1984-05.com.dell:powervault.6001e4f0004326c1482127e3
192.168.130.102:3260,2 
iqn.1984-05.com.dell:powervault.6001e4f0004326c1482127e3
192.168.131.101:3260,1 
iqn.1984-05.com.dell:powervault.6001e4f0004326c1482127e3
192.168.131.102:3260,2 
iqn.1984-05.com.dell:powervault.6001e4f0004326c1482127e3
[fe80::::021e:4fff:fe43:26c3]:3260,1 
iqn.1984-05.com.dell:powervault.6001e4f0004326c1482127e3
[fe80::::021e:4fff:fe43:26c5]:3260,1 
iqn.1984-05.com.dell:powervault.6001e4f0004326c1482127e3
[fe80::::021e:4fff:fe43:a3dc]:3260,2 
iqn.1984-05.com.dell:powervault.6001e4f0004326c1482127e3
[fe80::::021e:4fff:fe43:a3de]:3260,2 
iqn.1984-05.com.dell:powervault.6001e4f0004326c1482127e3


Also, what about this:
Dev sdc: unable to read RDB block 0




RE: Problems accessing virtual disks on MD3000i, was: RE: Design Questions

2008-06-02 Thread Bryan Mclellan

You mean the preferred controller module on the MD3000i SAN, I assume.

I'd make sure you can ping all of the nodes (four if you have two controllers).
Discover them all via sendtargets and log in to all of them.

 iscsiadm -m discovery -t st -p 192.168.130.101
 iscsiadm -m node -l
 fdisk -l

You should end up with two or four devices (depending on the number of 
controllers): one for each virtual disk mapped to that host, for each node 
you've logged in to, provided you've removed the Access mapping, which should 
be just fine. fdisk -l should print a partition table, or lack thereof, for all 
the disks it can read (which should be half the number of nodes).

It took me a while to figure out that I couldn't access the disks via the 
second controller, and playing around with iscsiadm a lot is what finally clued 
me in. It helped that I already had a test partition on the virtual disk, 
created elsewhere, so cat /proc/partitions revealed that the partition was 
only visible on two of the disk devices, not all four.
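
For what it's worth, querying the SCSI ID of each device node is another quick
way to see which sdX entries are really the same virtual disk (the scsi_id
path and exact flags differ between udev versions, so treat this as a sketch):

  scsi_id -g -u -s /block/sdc
  scsi_id -g -u -s /block/sdf

If those print the same identifier, sdc and sdf are two paths to one LUN.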

Bryan

As a side note, I'd double check your subnet configurations on the controllers. 
Each controller should only have one interface on a specific subnet. I don't 
think this is related to your current problem though.

-Original Message-
From: open-iscsi@googlegroups.com [mailto:[EMAIL PROTECTED] On Behalf Of Arturo 
'Buanzo' Busleiman
Sent: Monday, June 02, 2008 12:08 PM
To: open-iscsi@googlegroups.com
Subject: Re: Problems accessing virtual disks on MD3000i, was: RE: Design 
Questions


Hi Bryan,

I changed my setup to only initiate sessions to the primary domain
controller. This is my dmesg output now:

snip

Apparently, no changes, except that there are no partitions sde through sdg.
hwinfo and fdisk still report the same.

Btw, I also removed the Access DellUtility partition. No difference
either, except that /dev/sdd disappeared :)

Any other ideas?







Re: Problems accessing virtual disks on MD3000i, was: RE: Design Questions

2008-06-02 Thread Arturo 'Buanzo' Busleiman

OK, don't ask me why, but I just added a 20 GB virtual disk on LUN 3, and 
BANG, the host SAW it with no problems whatsoever.

Maybe 100 GB and/or LUN 1 makes a difference? I hope not!

Buanzo,
a very puzzled man.





Re: Problems accessing virtual disks on MD3000i, was: RE: Design Questions

2008-06-02 Thread Konrad Rzeszutek

 Apparently, no changes, except that there are no partitions sde through sdg.
 hwinfo and fdisk still report the same.
 
 Btw, I also removed the Access DellUtility partition. No difference 
 either, except that /dev/sdd disappeared :)
 
 Any other ideas?

multipath-tools? Did you install it?
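
On Ubuntu it should be something like this (the package name is my guess for
your release):

  apt-get install multipath-tools

and then check that the LUN shows up under /dev/mapper.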




Re: Design Questions

2008-05-30 Thread Konrad Rzeszutek

On Thu, May 29, 2008 at 02:35:28PM -0300, Arturo 'Buanzo' Busleiman wrote:
 
 On May 28, 2:45 pm, Konrad Rzeszutek [EMAIL PROTECTED] wrote:
   I am not sure how you are partitioning your space. Does each guest
   have an iSCSI target (or LUN) assigned to it? Or is it one big
   drive that they run from? Also are you envisioning using this
   with LiveMigration (or whatever it is called with your virtualization
   system)?
 
 I'm using Vmware-Server (not ESX, just the free one).
 
 The guests themselves (the disk where the OS is installed) are stored as 
 vmdk's on a local folder.
 
 I want to provide application storage for each virtual machine, no 
 shared storage. I have 1.6TB total capacity, and plan on giving each 
 guest as much raid-5 storage space as they need.
 
 The iscsiadm discovery on my Host reports all available targets, over 
 both interfaces (broadcom and intel).
 
 So, basically, I have these doubts / options:
 
 1) Login to each target on the host, and add raw disk access to the 
 guests to those host-devices.
 2) Don't use open-iscsi on the host, but use it on each guest to connect 
 to the targets.
 

If you run iSCSI on each guest you end up with overhead. Each guest will
have to do its own iSCSI packet assembly/disassembly, along with its own
socket operations (TCP/IP assembly), and your target will have X guest
connections. Each guest would also need to run the multipath suite, which puts
I/O on the connection every 40 seconds (or less if a failure has occurred).

If on the other hand you make the connection on your host, set up
multipath there, create LVMs and assign them to each of your guests (a rough
sketch follows the list), you have:
 - less overhead (one OS doing the iSCSI packet assembly/disassembly and
   TCP/IP assembly).
 - one connection to the target. You can even purchase two extra NICs and
   create your own subnet for them and the target so that there is no
   traffic there except iSCSI.
 - one machine running multipath, and you can make it queue I/O in one
   place if the network goes down. This will block the guests (you might
   need to raise the SCSI timeout in the guests - no idea what registry
   key you need to change for this in Windows).
 - one place to zone out your huge capacity, and you can resize the
   guests' volumes as you see fit from one place (using LVMs for the
   guests makes re-sizing them easy).
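
A minimal sketch of that last bit, with made-up names (vg_san, the sizes, and
the <wwid> placeholder are just for illustration):

  pvcreate /dev/mapper/<wwid>
  vgcreate vg_san /dev/mapper/<wwid>
  lvcreate -L 200G -n guest1_data vg_san
  lvcreate -L 100G -n guest2_data vg_san

Each /dev/vg_san/guestN_data can then be handed to a guest as a raw disk and
grown later with lvextend if it needs more space.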

 And the main doubt: how does link aggregation / dualpath fit into those 
 options?

I can't give you an opinion about link aggregation as I don't have that
much experience in this field.

But in regards to multipath, you are better off doing it on the host
than on the guests.
  
 Also, i find this error:
 
 [EMAIL PROTECTED]:~# iscsiadm -m node -L all
 Login session [iface: default, target: 
 iqn.1984-05.com.dell:powervault.6001e4f0004326c1482127e3, 
 portal: 192.168.130.102,3260]
 Login session [iface: default, target: 
 iqn.1984-05.com.dell:powervault.6001e4f0004326c1482127e3, 
 portal: fe80::::021e:4fff:fe43:26c3,3260]
 iscsiadm: initiator reported error (4 - encountered connection failure)
 iscsiadm: Could not log into all portals. Err 107.

Did you configure your ethX to use IPv6? The second target IP 
is in IPv6 format.

 
 I'm using crossover cables.

No switch? Then link-aggregation wouldn't matter I would think (since
the ARP requests aren't going to a switch).
 
 
  




Re: Design Questions

2008-05-30 Thread Mike Christie

Arturo 'Buanzo' Busleiman wrote:
 On May 28, 2:45 pm, Konrad Rzeszutek [EMAIL PROTECTED] wrote:
   I am not sure how you are partitioning your space. Does each guest
   have an iSCSI target (or LUN) assigned to it? Or is it one big
   drive that they run from? Also are you envisioning using this
   with LiveMigration (or whatever it is called with your virtualization
   system)?
 
 I'm using Vmware-Server (not ESX, just the free one).
 
 The guests themselves (the disk where the OS is installed) are stored as 
 vmdk's on a local folder.
 
 I want to provide application storage for each virtual machine, no 
 shared storage. I have 1.6TB total capacity, and plan on giving each 
 guest as much raid-5 storage space as they need.
 
 The iscsiadm discovery on my Host reports all available targets, over 
 both interfaces (broadcom and intel).
 
 So, basically, I have these doubts / options:
 
 1) Login to each target on the host, and add raw disk access to the 
 guests to those host-devices.
 2) Don't use open-iscsi on the host, but use it on each guest to connect 
 to the targets.
 
 And the main doubt: how does link aggregation / dualpath fit into those 
 options?
  
 Also, i find this error:
 
 [EMAIL PROTECTED]:~# iscsiadm -m node -L all
 Login session [iface: default, target: 
 iqn.1984-05.com.dell:powervault.6001e4f0004326c1482127e3, 
 portal: 192.168.130.102,3260]
 Login session [iface: default, target: 
 iqn.1984-05.com.dell:powervault.6001e4f0004326c1482127e3, 
 portal: fe80::::021e:4fff:fe43:26c3,3260]
 iscsiadm: initiator reported error (4 - encountered connection failure)
 iscsiadm: Could not log into all portals. Err 107.
 

open-iscsi does not really support link-local IPv6 addresses yet. There is 
a way around this by setting up an iface for the net interface (ethX) that 
the address is local to and then binding to it, but it is a little complicated 
to do.




Re: Design Questions

2008-05-29 Thread Arturo 'Buanzo' Busleiman

On May 28, 2:45 pm, Konrad Rzeszutek [EMAIL PROTECTED] wrote:
  I am not sure how you are partitioning your space. Does each guest
  have an iSCSI target (or LUN) assigned to it? Or is it one big
  drive that they run from? Also are you envisioning using this
  with LiveMigration (or whatever it is called with your virtualization
  system)?

I'm using Vmware-Server (not ESX, just the free one).

The guests themselves (the disk where the OS is installed) are stored as 
vmdk's on a local folder.

I want to provide application storage for each virtual machine, no 
shared storage. I have 1.6TB total capacity, and plan on giving each 
guest as much raid-5 storage space as they need.

The iscsiadm discovery on my Host reports all available targets, over 
both interfaces (broadcom and intel).

So, basically, I have these doubts / options:

1) Login to each target on the host, and add raw disk access to the 
guests to those host-devices.
2) Don't use open-iscsi on the host, but use it on each guest to connect 
to the targets.

And the main doubt: how does link aggregation / dualpath fit into those 
options?
 
Also, i find this error:

[EMAIL PROTECTED]:~# iscsiadm -m node -L all
Login session [iface: default, target: 
iqn.1984-05.com.dell:powervault.6001e4f0004326c1482127e3, 
portal: 192.168.130.102,3260]
Login session [iface: default, target: 
iqn.1984-05.com.dell:powervault.6001e4f0004326c1482127e3, 
portal: fe80::::021e:4fff:fe43:26c3,3260]
iscsiadm: initiator reported error (4 - encountered connection failure)
iscsiadm: Could not log into all portals. Err 107.

I'm using crossover cables.





Re: Design Questions

2008-05-28 Thread Arturo 'Buanzo' Busleiman

Arturo 'Buanzo' Busleiman wrote:
 So, the obvious question here: I want to store the data in the SAN. 
 Should I get my sessions running in the host, or inside each virtual 
 machine?
If this is not the correct group to ask this question, I'd gladly accept 
suggestions for other groups! :)





Re: Design Questions

2008-05-28 Thread Konrad Rzeszutek

On Wed, May 28, 2008 at 01:15:36PM -0300, Arturo 'Buanzo' Busleiman wrote:
 
 Arturo 'Buanzo' Busleiman wrote:
  So, the obvious question here: I want to store the data in the SAN. 
  Should I get my sessions running in the host, or inside each virtual 
  machine?
 If this is not the correct group to ask this question, I'd gladly accept 
 suggestions for other groups! :)

I am not sure how you are partitioning your space. Does each guest
have an iSCSI target (or LUN) assigned to it? Or is it one big
drive that they run from? Also are you envisioning using this
with LiveMigration (or whatever it is called with your virtualization
system)?




Design Questions

2008-05-27 Thread Arturo 'Buanzo' Busleiman

Hi Group!

First of all: I've done my research in the eMail Archives before 
deciding to disturb in here. Maybe I'm not googling for the perfect 
keyword set. Anyway, here's my question:

I have:
* a PowerEdge 2950 with two NICs, 4 ports total (e1000 and Broadcom).
* an MD3000i PowerVault
* VMware Server running on the PowerEdge.

I'm storing virtual machines in /vmstore (a big partition on the PowerEdge's 
own RAID-5 virtual disk). The local storage is for the OS and applications, 
not the data.

So, the obvious question here: I want to store the data in the SAN. 
Should I get my sessions running in the host, or inside each virtual 
machine?

Also, is it OK to use multipath (dual path) with one Broadcom port 
and one e1000 port (instead of using two ports of the same NIC)?

Any hints? Suggestions?


Thanks a LOT. I'm very new to this, but I'm doing my best.

