need some clarity, if anyone has a minute

2010-06-23 Thread Christopher Barry
Hello,

I'm implementing some code to automagically configure iSCSI connections
to a proprietary array. This array has its own specific MPIO drivers,
and does not support DM-Multipath. I'm trying to get a handle on the
differences in redundancy provided by the various layers involved in the
connection from host to array, in a generic sense.

The array has two iSCSI ports per controller, and two controllers. The
targets can be seen through any of the ports. For simplicity, all ports
are on the same subnet.

I'll describe a series of scenarios, and maybe someone can speak to their
level of usefulness, redundancy, gotchas, nuances, etc. (a rough iscsiadm
sketch for scenarios #2/#4 follows the list):

scenario #1
Single NIC, default iface, login to all controller portals.

scenario #2
Dual NIC, iface per NIC, login to all controller portals from each iface

scenario #3
Two bonded NICs in mode balance-alb
Single NIC, default iface, login to all controller portals.

scenario #4
Dual NIC, iface per NIC, MPIO driver, login to all controller portals
from each iface
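
To make scenarios #2 and #4 concrete, this is roughly what I plan to run
(untested sketch; the iface names, portal IP and target IQN below are just
placeholders for the real values):

# one iface record per NIC
iscsiadm -m iface -I iface0 -o new
iscsiadm -m iface -I iface0 -o update -n iface.net_ifacename -v eth0
iscsiadm -m iface -I iface1 -o new
iscsiadm -m iface -I iface1 -o update -n iface.net_ifacename -v eth1
# discovery bound to both ifaces creates a node record per iface per portal
iscsiadm -m discovery -t sendtargets -p 10.0.0.10 -I iface0 -I iface1
# log in to every discovered portal through every iface
iscsiadm -m node -l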


Appreciate any advice,
Thanks,
-C




Re: need some clarity, if anyone has a minute (correction)

2010-06-23 Thread Christopher Barry
correction inline:

On Wed, 2010-06-23 at 10:28 -0400, Christopher Barry wrote:
 Hello,
 
 I'm implementing some code to automagically configure iSCSI connections
 to a proprietary array. This array has its own specific MPIO drivers,
 and does not support DM-Multipath. I'm trying to get a handle on the
 differences in redundancy provided by the various layers involved in the
 connection from host to array, in a generic sense.
 
 The array has two iSCSI ports per controller, and two controllers. The
 targets can be seen through any of the ports. For simplicity, all ports
 are on the same subnet.
 
 I'll describe a series of scenarios, and maybe someone can speak to their
 level of usefulness, redundancy, gotchas, nuances, etc:
 
 scenario #1
 Single NIC, default iface, login to all controller portals.
 
 scenario #2
 Dual NIC, iface per NIC, login to all controller portals from each iface
 
 scenario #3
 Two bonded NICs in mode balance-alb
 Single NIC, default iface, login to all controller portals.
single bonded interface, not single NIC.
 
 scenario #4
 Dual NIC, iface per NIC, MPIO driver, login to all controller portals
 from each iface
 
 
 Appreciate any advice,
 Thanks,
 -C
 






Re: need some clarity, if anyone has a minute

2010-06-23 Thread Patrick
On Jun 23, 7:28 am, Christopher Barry
christopher.ba...@rackwareinc.com wrote:
 This array has its own specific MPIO drivers,
 and does not support DM-Multipath. I'm trying to get a handle on the
 differences in redundancy provided by the various layers involved in the
 connection from host to array, in a generic sense.

What kind of array is it?  Are you certain it does not support
multipath I/O?  Multipath I/O is pretty generic...

 For simplicity, all ports are on the same subnet.

I actually would not do that.  The design is cleaner and easier to
visualize (IMO) if you put the ports onto different subnets/VLANs.
Even better is to put each one on a different physical switch so you
can tolerate the failure of a switch.

 scenario #1
 Single (bonded) NIC, default iface, login to all controller portals.

Here you are at the mercy of the load balancing performed by the
bonding, which is probably worse than the load-balancing performed at
higher levels.  But I admit I have not tried it, so if you decide to
do some performance comparisons, please let me know what you
find.  :-)
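
For what it's worth, the bonding half of that scenario is just the usual
recipe, roughly like this (sketch only; module options and addresses are
placeholders, and I have not benchmarked it):

modprobe bonding mode=balance-alb miimon=100
ip addr add 192.168.10.5/24 dev bond0
ip link set bond0 up
ifenslave bond0 eth0 eth1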

I will skip right down to...

 scenario #4
 Dual NIC, iface per NIC, MPIO driver, login to all controller portals
 from each iface

Why log into all portals from each interface?  It buys you nothing and
makes the setup more complex.  Just log into one target portal from
each interface and do multi-pathing among them.  This will also make
your automation (much) simpler.
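
With one iface bound to each NIC, that boils down to something like this
(sketch; the IQN and portal addresses are made up):

iscsiadm -m node -T iqn.2000-01.com.example:array -p 10.0.0.10 -I iface0 -l
iscsiadm -m node -T iqn.2000-01.com.example:array -p 10.0.1.10 -I iface1 -l
# then let the multipath layer manage the two resulting paths/sessions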

Again, I would recommend assigning one subnet to each interface.  It
is hard to convince Linux to behave sanely when you have multiple
interfaces connected to the same subnet.  (Linux will tend to send all
traffic for that subnet via the same interface.  Yes, you can hack
around this.  But why?)
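
For completeness, the usual hack is ARP/route tuning roughly along these
lines; this is only a sketch, and I would still just use separate subnets:

sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2
sysctl -w net.ipv4.conf.eth0.rp_filter=0
sysctl -w net.ipv4.conf.eth1.rp_filter=0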

In other words, I would do eth0 -> subnet 0 -> portal 0, eth1 ->
subnet 1 -> portal 1, eth2 -> subnet 2 -> portal 2, etc.  This is very
easy to draw, explain, and reason about.  Then set up multipath I/O
and you are done.

In fact, this is exactly what I am doing myself.  I have multiple
clients and multiple hardware iSCSI RAID units (Infortrend); each
interface on each client and RAID connects to a single subnet.  Then I
am using cLVM to stripe among the hardware RAIDs.  I am obtaining
sustained read speeds of ~1200 megabytes/second (yes, sustained; no
cache).  Plus I have the redundancy of multipath I/O.
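
The striping itself is plain LVM on top of the multipath devices; roughly
like this, with made-up device names and sizes, and with clvmd running for
the clustered part:

pvcreate /dev/mapper/mpatha /dev/mapper/mpathb
vgcreate -c y vg_san /dev/mapper/mpatha /dev/mapper/mpathb
lvcreate -i 2 -I 64 -L 500G -n lv_data vg_san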

Trying the port bonding approach is on my to do list, but this setup
is working so well I have not bothered yet.

 - Pat




iscsiadm: no records found

2010-06-23 Thread samba
Hi,
1. I created five logical volumes in a volume group (rac1).

2. Created iSCSI targets for each of these five volumes.
3. Created a new target IQN for the 5 logical volumes, with 5 names.
4. Did the LUN mapping and network ACL.

5. With the iSCSI service started, I used the iscsiadm command-line
interface to discover all available targets on the network storage server:
iscsiadm -m discovery -t sendtargets -p openfiler1-priv
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.asm1
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.asm2
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.asm3
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.asm4
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.crs

6. When I manually log in to the iSCSI targets I get a message like:

iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.asm1 -p
192.168.2.195 -l
 
.asm2
 
.asm3
 
.asm4

iscsiadm: no records found!

When I run
iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.crs -p
192.168.2.195

it shows nothing.

Please tell me what mistake I made.






invalid session in ARM Marvell Kirkwood board

2010-06-23 Thread EK Shen
Sorry! I am not sure whether my post from yesterday was received, so I am
posting it again.
You may ignore this post if you already received it.
===
Hi!
I am porting open-iscsi to a Marvell Kirkwood (6281) board and
encountered an invalid session problem. The Linux kernel version is
2.6.22.18. I have tried open-iscsi 2.0-871, 870, and 869 and got the same
error.
I tried the same code on my Linux desktop running Fedora 11. It is
working fine.
Has anyone met a similar problem before?

I traced the code and found that the session is created successfully,
but the connection fails to be created because the session lookup
fails.
It seems that sid 1 is created but sid 2 is provided for the
session lookup.
I suspect some data transmission problem occurs between user space and
kernel space.
This problem is very weird to me.
Could anyone give me some feedback or share experience?
Thanks a lot!


# iscsid -d 8 -f 
# iscsiadm -m discovery --type sendtargets --portal 192.168.107.150 -P
1
...
Target: iqn.2001-04.com.example:storage.disk2.sys1.xyz
Portal: 192.168.107.150:3260,1
Iface Name: default
Portal: 192.168.122.1:3260,1
Iface Name: default
# iscsiadm -d 8 -m node -l
iscsiadm: Max file limits 1024 1024
...
iscsiadm: to [iqn.2001-04.com.example:storage.disk2.sys1.xyz,
192.168.122.1,3260][d]
Logging in to [iface: default, target: iqn.
2001-04.com.example:storage.disk2.sys1.]
iscsid: poll result 1
...
iscsid: Allocted session 0x3bbf0
iscsid: no authentication configured...
iscsid: resolved 192.168.107.150 to 192.168.107.150
...
iscsid: connecting to 192.168.107.150:3260
iscsid: sched conn context 0x43048 evescsi2 : iSCSI Initiator over TCP/
IP
nt 2, tmo 0
iscsid: thread 0x43048 schedule: delay 0 state 3
iiscsi: invalid session 2.
scsid: Setting login timer 0x40fb0 timeout 15
iscsid: thread 0x40fb0 schedule: delay 60 state 3
Logging in to [iface: default, target: iqn.
2001-04.com.example:storage.disk2.sys1.]
iscsid: exec thread 00043048 callback
iscsid: put conn context 0x43048
iscsid: connected local port 59347 to 192.168.107.150:3260
iscsid: in kcreate_session
iscsid: in __kipc_call
iscsid: in kwritev
iscsid: in nlpayload_read
iscsid: expecting event 11, got 106, handling...
iscsid: in ctldev_handle
iscsid: in nl_read
iscsid: ctldev_handle got event type 106

iscsid: in nlpayload_read
iscsid: in nlpayload_read
iscsid: Could not set session2 priority. READ/WRITE throughout and
latency could b.

iscsid: created new iSCSI session sid 2 host no 0
iscsid: in kcreate_conn
iscsid: in __kipc_call
iscsid: in kwritev
iscsid: in nlpayload_read
iscsid: expecting event 13, got 103, handling...
iscsid: in nlpayload_read
iscsid: Received iferror -22
iscsid: returned -22
iscsid: can't create connection (0)
iscsid: disconnect conn
...




Re: Linux IO scalability and pushing over million IOPS over software iSCSI?

2010-06-23 Thread Jiahua
Maybe a naive question, but why are 50 targets needed? Can each target
only serve about 25K IOPS? A single ramdisk should be able to handle this.
Where is the bottleneck?

We had a similar experiment, but with InfiniBand and Lustre. It turned
out Lustre has a rate limit in the RPC handling layer. Is it the same
problem here?
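
If someone does try it on Linux, a fio job along these lines should
approximate the 512-byte, 20-outstanding-IO random read test (sketch only;
the device path is a placeholder for one of the iSCSI LUNs):

fio --name=iscsi-iops --filename=/dev/sdb --ioengine=libaio --direct=1 \
    --rw=randread --bs=512 --iodepth=20 --numjobs=4 --runtime=60 \
    --group_reporting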

Jiahua



On Tue, Jun 22, 2010 at 6:44 AM, Pasi Kärkkäinen pa...@iki.fi wrote:
 Hello,

 Recently Intel and Microsoft demonstrated pushing over 1.25 million IOPS 
 using software iSCSI and a single 10 Gbit NIC:
 http://communities.intel.com/community/wired/blog/2010/04/22/1-million-iops-how-about-125-million

 Earlier they achieved one (1.0) million IOPS:
 http://blog.fosketts.net/2010/01/14/microsoft-intel-push-million-iscsi-iops/
 http://communities.intel.com/community/openportit/server/blog/2010/01/19/100-iops-with-iscsi--thats-not-a-typo

 The benchmark setup explained:
 http://communities.intel.com/community/wired/blog/2010/04/20/1-million-iop-article-explained
 http://dlbmodigital.microsoft.com/ppt/TN-100114-JSchwartz_SMorgan_JPlawner-1032432956-FINAL.pdf


 So the question is.. does someone have enough new hardware to try this with 
 Linux?
 Can Linux scale to over 1 million IO operations per second?


 Intel and Microsoft used the following for the benchmark:

        - Single Windows 2008 R2 system with Intel Xeon 5600 series CPU,
          single-port Intel 82599 10 Gbit NIC and MS software-iSCSI initiator
          connecting to 50x iSCSI LUNs.
        - IOmeter to benchmark all the 50x iSCSI LUNs concurrently.

        - 10 servers as iSCSI targets, each having 5x ramdisk LUNs, total of 
 50x ramdisk LUNs.
        - iSCSI target server also used 10 Gbit NICs, and StarWind iSCSI 
 target software.
        - Cisco 10 Gbit switch (Nexus) connecting the servers.

        - For the 1.25 million IOPS result they used 512 bytes/IO benchmark, 
 outstanding IOs=20.
        - No jumbo frames, just the standard MTU=1500.

 They used many LUNs so they can scale the iSCSI connections to multiple CPU 
 cores
 using RSS (Receive Side Scaling) and MSI-X interrupts.

 So.. who wants to try this? :) Unfortunately I don't have 11 extra computers 
 with 10 Gbit NICs at the moment to try it myself.

 This test covers networking, the block layer, and the software iSCSI initiator,
 so it would be nice to see if we find any bottlenecks in the current Linux 
 kernel.

 Comments please!

 -- Pasi






Re: need some clarity, if anyone has a minute (correction)

2010-06-23 Thread Mike Christie

On 06/23/2010 09:34 AM, Christopher Barry wrote:

correction inline:

On Wed, 2010-06-23 at 10:28 -0400, Christopher Barry wrote:

Hello,

I'm implementing some code to automagically configure iSCSI connections
to a proprietary array. This array has its own specific MPIO drivers,
and does not support DM-Multipath. I'm trying to get a handle on the
differences in redundancy provided by the various layers involved in the
connection from host to array, in a generic sense.

The array has two iSCSI ports per controller, and two controllers. The
targets can be seen through any of the ports. For simplicity, all ports
are on the same subnet.

I'll describe a series of scenarios, and maybe someone can speak to their
level of usefulness, redundancy, gotchas, nuances, etc:

scenario #1
Single NIC, default iface, login to all controller portals.


Of course with this there is no redundancy on the initiator side. If the 
NIC on the initiator side dies, you are in trouble. And if you are not 
using multipath software in the block/SCSI or net layer, then logging 
into all the portals is of no use.


Using dm-multipath across the target ports works well for most targets, 
and would allow you to take advantage of redundancy there. I can't say 
anything about the MPIO code you are using or your target since I do 
not know what they are.
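
For most arrays a fairly minimal /etc/multipath.conf is enough to get
started. Something like the sketch below, where the vendor/product strings
are placeholders you would take from your array's inquiry data; check the
result with multipath -ll:

defaults {
        user_friendly_names yes
}
devices {
        device {
                vendor                  "VENDOR"
                product                 "PRODUCT"
                path_grouping_policy    multibus
                path_checker            tur
                no_path_retry           5
        }
}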





scenario #2
Dual NIC, iface per NIC, login to all controller portals from each iface


Without some multipath software this is pretty useless too. If you 
use something like dm-multipath, then it can round-robin or fail over 
across all the paths that will get created.





scenario #3
Two bonded NICs in mode balance-alb
Single NIC, default iface, login to all controller portals.

single bonded interface, not single NIC.


I do not have a lot of experience with this.




scenario #4
Dual NIC, iface per NIC, MPIO driver, login to all controller portals
from each iface



The iSCSI/SCSI/block layers provide what dm-multipath needs for this, 
and for most targets it should work. Again, I have no idea what your 
MPIO driver does and needs, so I cannot say if it will work well for you.





Re: iscsiadm: no records found

2010-06-23 Thread Mike Christie

On 06/21/2010 11:01 PM, samba wrote:

Hi,
1. I created five logical volumes in a volume group (rac1).

2. Created iSCSI targets for each of these five volumes.
3. Created a new target IQN for the 5 logical volumes, with 5 names.
4. Did the LUN mapping and network ACL.

5. With the iSCSI service started, I used the iscsiadm command-line
interface to discover all available targets on the network storage server:
iscsiadm -m discovery -t sendtargets -p openfiler1-priv
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.asm1
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.asm2
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.asm3
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.asm4
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.crs

6. When I manually log in to the iSCSI targets I get a message like:

iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.asm1 -p
192.168.2.195 -l
  
.asm2
  
.asm3
  
.asm4


So does iscsiadm actually print out that .asm2 there?

What version of open-iscsi are you using?

Do you have multiple versions installed? If you run whereis iscsiadm, do 
you see it installed in multiple places?



If after running the discovery command you do

iscsiadm -m node -P 1

Do you see the targets and portals?


If you do, then if you run

iscsiadm -m node -l

Does that log into all the targets ok?




Re: invalid session in ARM Marvell Kirkwood board

2010-06-23 Thread Mike Christie

On 06/21/2010 01:27 AM, EK Shen wrote:

Hi!
I am porting open-iscsi to a Marvell Kirkwood (6281) board and
encountered an invalid session problem. The Linux kernel version is
2.6.22.18. I have tried open-iscsi 2.0-871, 870, and 869 and got the same
error.
I tried the same code on my Linux desktop running Fedora 11. It is
working fine.
Has anyone met a similar problem before?

I traced the code and found that the session is created successfully,
but the connection fails to be created because the session lookup
fails.
It seems that sid 1 is created but sid 2 is provided for the
session lookup.


What arch is this? Is it 64-bit? Are you running a 64-bit kernel with 32-bit 
userspace? There is a bug where, if you use 32-bit userspace with a 64-bit 
kernel, data transmitted between the kernel and userspace gets 
messed up and you end up with lots of weird bugs like this.
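
A quick way to check is to compare the kernel and the binaries, for example
(adjust the paths if your tools are installed elsewhere):

uname -m
file /sbin/iscsid /sbin/iscsiadm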


Are you using the tools and kernel modules from the open-iscsi.org tarball, 
or are you using the tools from open-iscsi.org with the kernel modules from 
2.6.22?





I suspect some data transmission problem occurs between user space and
kernel space.
This problem is very weird to me.
Could anyone give me some feedback or share experience?
Thanks a lot!


# iscsid -d 8 -f
# iscsiadm -m discovery --type sendtargets --portal 192.168.107.150 -P
1
...
Target: iqn.2001-04.com.example:storage.disk2.sys1.xyz
 Portal: 192.168.107.150:3260,1
 Iface Name: default
 Portal: 192.168.122.1:3260,1
 Iface Name: default
# iscsiadm -d 8 -m node -l
iscsiadm: Max file limits 1024 1024
...
iscsiadm: to [iqn.2001-04.com.example:storage.disk2.sys1.xyz,
192.168.122.1,3260][d]
Logging in to [iface: default, target: iqn.
2001-04.com.example:storage.disk2.sys1.]
iscsid: poll result 1
...
iscsid: Allocted session 0x3bbf0
iscsid: no authentication configured...
iscsid: resolved 192.168.107.150 to 192.168.107.150
...
iscsid: connecting to 192.168.107.150:3260
iscsid: sched conn context 0x43048 evescsi2 : iSCSI Initiator over TCP/
IP
nt 2, tmo 0
iscsid: thread 0x43048 schedule: delay 0 state 3
iiscsi: invalid session 2.
scsid: Setting login timer 0x40fb0 timeout 15
iscsid: thread 0x40fb0 schedule: delay 60 state 3
Logging in to [iface: default, target: iqn.
2001-04.com.example:storage.disk2.sys1.]
iscsid: exec thread 00043048 callback
iscsid: put conn context 0x43048
iscsid: connected local port 59347 to 192.168.107.150:3260
iscsid: in kcreate_session
iscsid: in __kipc_call
iscsid: in kwritev
iscsid: in nlpayload_read
iscsid: expecting event 11, got 106, handling...
iscsid: in ctldev_handle
iscsid: in nl_read
iscsid: ctldev_handle got event type 106

iscsid: in nlpayload_read
iscsid: in nlpayload_read
iscsid: Could not set session2 priority. READ/WRITE throughout and
latency could b.

iscsid: created new iSCSI session sid 2 host no 0
iscsid: in kcreate_conn
iscsid: in __kipc_call
iscsid: in kwritev
iscsid: in nlpayload_read
iscsid: expecting event 13, got 103, handling...
iscsid: in nlpayload_read
iscsid: Received iferror -22
iscsid: returned -22
iscsid: can't create connection (0)
iscsid: disconnect conn
...







Re: need some clarity, if anyone has a minute

2010-06-23 Thread Christopher Barry
Thanks, Patrick. Please see inline.

On Wed, 2010-06-23 at 08:04 -0700, Patrick wrote: 
 On Jun 23, 7:28 am, Christopher Barry
 christopher.ba...@rackwareinc.com wrote:
  This array has its own specific MPIO drivers,
  and does not support DM-Multipath. I'm trying to get a handle on the
  differences in redundancy provided by the various layers involved in the
  connection from host to array, in a generic sense.
 
 What kind of array is it?  Are you certain it does not support
 multipath I/O?  Multipath I/O is pretty generic...
 
  For simplicity, all ports are on the same subnet.
 
 I actually would not do that.  The design is cleaner and easier to
 visualize (IMO) if you put the ports onto different subnets/VLANs.
 Even better is to put each one on a different physical switch so you
 can tolerate the failure of a switch.

Absolutely correct. What I was looking for were comparisons of the
methods below, and wanted subnet stuff out of the way while discussing
that.

 
  scenario #1
  Single (bonded) NIC, default iface, login to all controller portals.
 
 Here you are at the mercy of the load balancing performed by the
 bonding, which is probably worse than the load-balancing performed at
 higher levels.  But I admit I have not tried it, so if you decide to
 do some performance comparisons, please let me know what you
 find.  :-)
 
 I will skip right down to...
 
  scenario #4
  Dual NIC, iface per NIC, MPIO driver, login to all controller portals
  from each iface
 
 Why log into all portals from each interface?  It buys you nothing and
 makes the setup more complex.  Just log into one target portal from
 each interface and do multi-pathing among them.  This will also make
 your automation (much) simpler.

Here I do not understand your reasoning. My understanding was that I would
need a session from each iface to each portal to survive a controller port
failure. If this assumption is wrong, please explain.

 
 Again, I would recommend assigning one subnet to each interface.  It
 is hard to convince Linux to behave sanely when you have multiple
 interfaces connected to the same subnet.  (Linux will tend to send all
 traffic for that subnet via the same interface.  Yes, you can hack
 around this.  But why?)
 
 In other words, I would do eth0 -> subnet 0 -> portal 0, eth1 ->
 subnet 1 -> portal 1, eth2 -> subnet 2 -> portal 2, etc.  This is very
 easy to draw, explain, and reason about.  Then set up multipath I/O
 and you are done.
 
 In fact, this is exactly what I am doing myself.  I have multiple
 clients and multiple hardware iSCSI RAID units (Infortrend); each
 interface on each client and RAID connects to a single subnet.  Then I
 am using cLVM to stripe among the hardware RAIDs.  I am obtaining
 sustained read speeds of ~1200 megabytes/second (yes, sustained; no
 cache).  Plus I have the redundancy of multipath I/O.
 
 Trying the port bonding approach is on my to do list, but this setup
 is working so well I have not bothered yet.

This is also something I am uncertain about. For instance, in
balance-alb mode, each slave will communicate with a given remote IP
consistently. In the case of two slaves and two portals, how would the
traffic be apportioned? Would it write to both simultaneously? Could
this corrupt the disk in any way? Would it always use only a single
slave/portal?

 
  - Pat
 







Re: PROBLEM: Error Adding Filesystem to iSCSI Volume

2010-06-23 Thread Shyam Iyer

On 06/21/2010 02:27 PM, Mike Christie wrote:

On 06/17/2010 01:02 PM, pat_dig...@dell.com wrote:

Hi,

Sorry if this is the wrong list, but open-iscsi is listed as the 
maintainer for the file this occurred in...


When trying to format an iSCSI volume, the kernel throws a 
null pointer exception and the thread goes into an infinite loop or 
is otherwise locked up.  The thread doing the filesystem creation 
(for example: mke2fs -j) cannot be killed, even with kill -9.


I received a kernel oops, which is at the bottom with some other 
useful information


Kernel version: Tested with Fedora's 2.6.33.5-124 (Fedora 13) as well 
as linux-next, as well as a modified version of linus' tree.




What modifications were in your Linus tree? I tried upstream 2.6.33 
and Linus's tree and I could not replicate this.




Mike - Looks like we have a few variables here, with a *not yet 
released* array firmware and the possibility of this not being an 
open-iscsi problem. Sorry for the churn.



-Shyam
