Re: PowerVault MD3200i/MD3220i

2011-08-28 Thread Pasi Kärkkäinen
On Fri, Aug 26, 2011 at 03:46:40AM -0700, swejis wrote:
> Greetings.
> 
> Considering buying a PowerVault MD3200i/MD3220i. Anyone here using
> that target with open-iscsi ? And if so what is your experience ?
> 

If you can afford Dell Equallogic, go for those instead.
They are *much* better than the MD3200/MD3220.

-- Pasi




Re: multiple initiator instances on same machine.

2011-03-19 Thread Pasi Kärkkäinen
On Sat, Mar 19, 2011 at 06:59:55PM +0530, rahul gupta wrote:
>Yes, I require two different IQN names for the initiator.
>

Can you please explain in what kind of scenario two different
IQN names are needed for a single initiator? I haven't personally
had that requirement so far, so I'm trying to figure out when it's needed.


>I know, if I login into different targets or same target multiple times
>(on different portals) then separate sessions and thus multiple
>connections are created.
>

Yeah, and even if you have just a single target/portal you can create
multiple sessions to it from a single initiator by using different SID values.
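(For reference, 'iscsiadm -m session' lists the sessions with their SIDs in
brackets, and a single session can be inspected with -r <sid>. Roughly, with
made-up addresses and target names:

  # iscsiadm -m session
  tcp: [1] 192.0.2.10:3260,1 iqn.2001-05.com.example:vol0
  tcp: [2] 192.0.2.10:3260,1 iqn.2001-05.com.example:vol0
  # iscsiadm -m session -r 2 -P 3

The exact output varies a bit between open-iscsi versions.)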

-- Pasi

>Best Regards,
>Rahul.
>On Sat, Mar 19, 2011 at 6:40 PM, Pasi Kärkkäinen <[1]pa...@iki.fi> wrote:
> 
>  On Fri, Mar 18, 2011 at 03:41:32PM +0530, rahul gupta wrote:
>  >Hi,
>  >
>  >I would like to know, how to configure multiple iqn names
>  (initiator
>  >instances running) on same machine using open-iSCSI (iscsi_tcp as
>  >transport).
>  >
>  >Under a test scenario as follows, I am able to create multiple
>  initiator
>  >instances on the same machine, (but may be its not a cool way to do
>  so):-
>  >
>  >1. Start initiator with a iqn name and login into target.
>  >2. On initiator, mount lun and run some IO.
>  >3. Now let the IO to continue but kill iscsid.
>  >4. Now change the initiator iqn name and again start the daemon.
>  >5. Login into the target and start IO.
>  >6. On running iscsiadm -m session -P 3
>  >It's observed that both sessions are established with different
>  >initiator iqn names and IO is going fine on both sessions.
>  >
>  >Is it really what's expected?
>  >
> 
>  Do you absolutely require different IQN names for the initiator?
> 
>  You should be able to specify a separate session ID (IIRC, SID) to create
>  multiple sessions from a single initiator.
> 
>  -- Pasi
>  --
>  You received this message because you are subscribed to the Google
>  Groups "open-iscsi" group.
>  To post to this group, send email to [2]open-iscsi@googlegroups.com.
>  To unsubscribe from this group, send email to
>  [3]open-iscsi+unsubscr...@googlegroups.com.
>  For more options, visit this group at
>  [4]http://groups.google.com/group/open-iscsi?hl=en.
> 
> References
> 
>Visible links
>1. mailto:pa...@iki.fi
>2. mailto:open-iscsi@googlegroups.com
>3. mailto:open-iscsi%2bunsubscr...@googlegroups.com
>4. http://groups.google.com/group/open-iscsi?hl=en




Re: multiple initiator instances on same machine.

2011-03-19 Thread Pasi Kärkkäinen
On Fri, Mar 18, 2011 at 03:41:32PM +0530, rahul gupta wrote:
>Hi,
> 
>I would like to know, how to configure multiple iqn names (initiator
>instances running) on same machine using open-iSCSI (iscsi_tcp as
>transport).
> 
>Under a test scenario as follows, I am able to create multiple initiator
>instances on the same machine (but maybe it's not a cool way to do so):
>
>1. Start the initiator with an iqn name and log in to the target.
>2. On the initiator, mount the lun and run some IO.
>3. Now let the IO continue, but kill iscsid.
>4. Now change the initiator iqn name and start the daemon again.
>5. Log in to the target and start IO.
>6. On running iscsiadm -m session -P 3,
>it's observed that both sessions are established with different
>initiator iqn names and IO is going fine on both sessions.
>
>Is it really what's expected?
> 

Do you absolutely require different IQN names for the initiator?

You should be able to specify a separate session ID (IIRC, SID) to create
multiple sessions from a single initiator.

-- Pasi




Re: [PATCH 2.6.24.4 0/4] I/OAT patches for open-iscsi

2011-03-03 Thread Pasi Kärkkäinen
On Thu, Mar 03, 2011 at 06:22:36AM -0600, Mike Christie wrote:
> On 03/03/2011 06:20 AM, Mike Christie wrote:
>> If you have a fast system with ioatdma please try it out.
>>
>
> Oh yeah, starting vacation .. Now :)

Thanks for the heads up!

Actually I have some new hardware with 10 Gbit NICs,
so I can probably try this stuff..

Enjoy your vacation!

-- Pasi




Re: Antw: [PATCH 0/3] Add initial DCB support

2011-02-01 Thread Pasi Kärkkäinen
On Tue, Feb 01, 2011 at 08:51:25AM +0100, Ulrich Windl wrote:
> >>> Mark Rustad wrote on 31.01.2011 at 18:31 in
> message <20110131172745.28218.25755.stgit@localhost6.localdomain6>:
> > This patch series adds initial DCB support to open-iscsi. In this
> [...]
> 
> Hi!
> 
> I may out myself as uneducated, but I was following the thread on "DCB" for a 
> while, even trying to find out what that might be using Wikipedia, but I 
> failed to succeed.
> 
> Would anybody care to explain the acronym?
> 

DCB = Data Center Bridging.

http://en.wikipedia.org/wiki/Data_center_bridging

-- Pasi




Re: iscsi and virtualization

2011-01-21 Thread Pasi Kärkkäinen
On Thu, Jan 20, 2011 at 07:15:52AM -0800, jbygden wrote:
> Hi!
> 
> Is there any best practice on how to use iscsi with virtualization?
> 
> I have a CentOS 5.5 server running as a KVM host for a couple of
> guests. I have iscsi-initiator-utils (iscsi-initiator-
> utils-6.2.0.871-0.16.el5) installed on this host.
> 
> We have bought an Equallogic PS6000 iscsi array which is not yet fully
> utilized.
> 
> I thought that I'd create volumes in the PS6000 when I needed them,
> populate them to the CentOS KVM host and attach the disks to the
> virtual guests as needed.
> 
> This is apparently not a good idea - since, as far as I can tell at
> least, I have to restart the iscsi subsystem every time I present a
> new disk from the PS6000. A "service iscsi restart" effectively
> destroys all KVM guests using existing, already logged-in iscsi disks:
> they hang indefinitely wondering where their disks disappeared to.
> 
> So, is there any best practice regarding using iscsi disks with
> virtualization? Or do I have to make up my mind immediately and create
> all iscsi disks beforehand and not ever do any changes?
> 

No, you don't need to restart iscsi services.

Just rescan.
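For example, after creating a new volume on the array, something along these
lines should be enough (the group IP and target name below are placeholders):

  # discover the new target (each EQL volume shows up as its own target)
  iscsiadm -m discovery -t sendtargets -p <group-ip>:3260
  # log in to just the new target
  iscsiadm -m node -T <new-target-iqn> -p <group-ip>:3260 --login
  # rescan existing sessions for new or resized LUNs
  iscsiadm -m session --rescan

None of that touches the sessions your running guests are already using.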

Also: did you consider using LVM on the iSCSI LUNs?
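(Roughly: pvcreate the iSCSI/multipath device, vgcreate a volume group on it,
and then lvcreate one logical volume per guest, e.g.

  pvcreate /dev/mapper/<mpath-device>
  vgcreate sanvg /dev/mapper/<mpath-device>
  lvcreate -L 20G -n guest1 sanvg

Device and VG names are placeholders. New guest disks then only need an
lvcreate, not a new iSCSI volume and login.)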

-- Pasi




Re: bnx2 (not offloaded) - multiple ifaces - iscsi connection timeouts - dell m610 - equallogic ps6000

2011-01-20 Thread Pasi Kärkkäinen
On Thu, Jan 20, 2011 at 02:22:24PM -0600, Mike Christie wrote:
> On 01/19/2011 01:13 PM, Joe Hoot wrote:
>> # To control how many commands the session will queue set
>> # node.session.cmds_max to an integer between 2 and 2048 that is also
>> # a power of 2. The default is 128.
>> node.session.cmds_max = 128
>> # To control the device's queue depth set node.session.queue_depth
>> # to a value between 1 and 1024. The default is 32.
>> node.session.queue_depth = 32
>>
>
> Hey, I am not sure if we are hitting a bug in the kernel but some other  
> user reported that if they increase cmds_max and queue_depth to  
> something like cmds_max=1024 and queue_depth=128, then they can use the  
> default noop settings and not see any timeouts.
>
> If that helps you too, then we might have a bug in the kernel fifo code  
> or our use of it.
>
>
>>
>> 2) *cmds_max and queue_depth* - I haven't adjusted those settings yet.  What
>> is the advantage and disadvantage of raising those?  I am using dm-multipath
>> with rr_min_io currently set to something like 200.  So every 200 i/o's are
>> going to the other path at the dm-multipath layer.  I am also using 9000 mtu
>> size.  So I'm not sure how that plays into this -- specific to these queue
>> depths and max_cmds, that is.  Also, how do the cmds_max and queue_depth
>> relate?  From what I'm reading, it seems like the queue_depth is each
>> iface's buffer and that the cmds_max is specific to each session?
>>
>
> cmds_max is the max number of commands the initiator will send down in  
> each session. queue_depth is the max number of commands it will send  
> down to a device/LU.
>
> So if you had the settings above and 5 devices/LUs on a target then the  
> initiator could end up sending 32 cmds to 4 devices (because 32 * 4 =  
> 128 and that hits the cmds_max setting), and 1 device would have to wait  
> for some commands to finish before the initiator would send it some.
>
> The target also has its own limit that it tells us about, and we will  
> not send more commands than it says it can take. So if cmds_max is  
> larger then the target's limit we will obey the target limit.
>
> For EQL boxes, you always get one device/LU per target, and you end up  
> with lots of targets. So your instinct might be to just set them to the  
> same value. However, you would still want to set the cmds_max a little  
> higher than queue_depth because cmds_max covers scsi/block commands and  
> also internal iSCSI commands and scsi eh tasks like nops or task  
> management commands like aborts.
>
> I am not sure what to set rr_min_io to. It depends on if you are using  
> bio based or request based multipath. For bio based if you are sending  
> lots of small IO then you would want to set rr_min_io higher to make  
> sure lots of small bios are sent to the same path so that they get  
> merged into one nice big command/request. If you are sending lots of  
> large IOs then you could set rr_min_io closer to queue_depth. For  
> request based multipath you could set rr_min_io closer to queue_depth  
> because the requests should be merged already and so the request is  
> going to go out as that command.
>

For VMware ESX/ESXi, EQL recommends an rr_min_io value of 3,
to utilize all the paths simultaneously..
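As for the cmds_max/queue_depth question: if you want to experiment with the
values Mike mentions above, they can be updated per node record and take
effect on the next login. A rough sketch (the target name is a placeholder,
values taken from Mike's example):

  iscsiadm -m node -T <target-iqn> -o update -n node.session.cmds_max -v 1024
  iscsiadm -m node -T <target-iqn> -o update -n node.session.queue_depth -v 128
  iscsiadm -m node -T <target-iqn> -u
  iscsiadm -m node -T <target-iqn> -l

or change the same defaults in /etc/iscsi/iscsid.conf before discovery.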

-- Pasi




Re: qlogic hba: Could not offload sendtargets

2011-01-03 Thread Pasi Kärkkäinen
On Tue, Dec 28, 2010 at 03:39:18PM +0530, rahul wrote:
> On Monday 27 December 2010 07:31 PM, Pasi Kärkkäinen wrote:
>> On Sun, Dec 26, 2010 at 10:14:21AM -0800, Rudy Gevaert wrote:
>>> Hello,
>>>
>>> I'm trying to use a qlogic hba to do offloading.  I don't fully 
>>> understand
>>> anything.  I am following the readme file.  The following things aren't
>>> clear to me:
>>>
>>> 1) I get an extra interface (eth1).  Should I give that interface an ip
>>> address?  If I don't, I can't ping the target.  What is this interface
>>> needed for?
>>> 2) I set an ip address on the iface reported by iscsiadm:
>>> r...@isis:/usr/share/doc/open-iscsi# iscsiadm -m iface
>>> default tcp
>>> iser iser
>>> qla4xxx.00:c0:dd:0e:e3:ad
>>> qla4xxx,00:c0:dd:0e:e3:ad,192.168.201.4,,
>>>
>>> r...@isis:/usr/share/doc/open-iscsi# iscsiadm -m iface -I
>>> qla4xxx.00:c0:dd:0e:e3:ad
>>> # BEGIN RECORD 2.0-871
>>> iface.iscsi_ifacename = qla4xxx.00:c0:dd:0e:e3:ad
>>> iface.net_ifacename =
>>> iface.ipaddress = 192.168.201.4
>>> iface.hwaddress = 00:c0:dd:0e:e3:ad
>>> iface.transport_name = qla4xxx
>>> iface.initiatorname =
>>> # END RECORD
>>>
>>> However, then discovery doesn't work:
>>>
>>> r...@isis:/usr/share/doc/open-iscsi#  iscsiadm -m discovery -t st -p
>>> 192.168.201.200:3260 -I qla4xxx.00:c0:dd:0e:e3:ad
>>> iscsiadm: Could not offload sendtargets to 192.168.201.200.
>>>
>>> iscsiadm: initiator reported error (1 - unknown error)
>>>
>>> I'm running 2.6.32-5-xen-amd64 (Debian Squeeze) and 2.0.871.3-2squeeze1 
>>> of
>>> open-iscsi..
>>>
>>> Any help in pointing me in the right direction is greatly appreciated.
>>>
>> Did you configure the qla4xxx HBA using the qlogic tools?
>> I don't think you can configure it using open-iscsi/iscsiadm ..
>>
>> -- Pasi
>>
> ip address can be configured using bios.
>

Yeah, configuring qla4xxx HBAs is usually a combination
of doing the initial configuration from the HBA BIOS during system power-on,
and then doing additional configuration using QLogic iSCSI SANsurfer
or the QLogic command-line tool, after installing the QLogic drivers (and the
agent) for the HBA.

-- Pasi




Re: qlogic hba: Could not offload sendtargets

2010-12-27 Thread Pasi Kärkkäinen
On Sun, Dec 26, 2010 at 10:14:21AM -0800, Rudy Gevaert wrote:
>Hello,
> 
>I'm trying to use a qlogic hba to do offloading.  I don't fully understand
>anything.  I am following the readme file.  The following things aren't
>clear to me:
> 
>1) I get an extra interface (eth1).  Should I give that interface an ip
>address?  If I don't, I can't ping the target.  What is this interface
>needed for?
>2) I set an ip address on the iface reported by iscsiadm:
>r...@isis:/usr/share/doc/open-iscsi# iscsiadm -m iface
>default tcp
>iser iser
>qla4xxx.00:c0:dd:0e:e3:ad
>qla4xxx,00:c0:dd:0e:e3:ad,192.168.201.4,,
> 
>r...@isis:/usr/share/doc/open-iscsi# iscsiadm -m iface -I
>qla4xxx.00:c0:dd:0e:e3:ad
># BEGIN RECORD 2.0-871
>iface.iscsi_ifacename = qla4xxx.00:c0:dd:0e:e3:ad
>iface.net_ifacename = 
>iface.ipaddress = 192.168.201.4
>iface.hwaddress = 00:c0:dd:0e:e3:ad
>iface.transport_name = qla4xxx
>iface.initiatorname = 
># END RECORD
> 
>However, then discovery doesn't work:
> 
>r...@isis:/usr/share/doc/open-iscsi#  iscsiadm -m discovery -t st -p
>192.168.201.200:3260 -I qla4xxx.00:c0:dd:0e:e3:ad
>iscsiadm: Could not offload sendtargets to 192.168.201.200.
> 
>iscsiadm: initiator reported error (1 - unknown error)
> 
>I'm running 2.6.32-5-xen-amd64 (Debian Squeeze) and 2.0.871.3-2squeeze1 of
>open-iscsi..
> 
>Any help in pointing me in the right direction is greatly appreciated.
> 

Did you configure the qla4xxx HBA using the qlogic tools? 
I don't think you can configure it using open-iscsi/iscsiadm ..

-- Pasi




Re: equallogic and double connections for multipathing with Debian 5

2010-08-11 Thread Pasi Kärkkäinen
On Wed, Aug 04, 2010 at 10:54:48AM -0700, Mike Vallaly wrote:
> Sorry for the lateness in my reply. Just stumbled across this
> thread.. ;)
> 
> Part of the problem with MPIO in linux with two (or more) interfaces
> connected to the same Ethernet segment is "arp flux". Essentially all
> traffic will by default only exit out one path (mac address) on a
> multi-homed network. The fix for this is to explicitly tie the
> interface to a route rule which ensures traffic leaves via the
> interface the application intended.
> 
> Here is the script we use in Debian to properly set the interface
> routes for MPIO with equallogic. (IE: /etc/scripts/san-interface)
> 

Uhm, why not just use the open-iscsi 'ifaces' feature to bind each
iscsi iface/session to a specific ethernet interface?
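Roughly (the interface names and group IP are placeholders; see the
open-iscsi README for the details):

  iscsiadm -m iface -I iface0 -o new
  iscsiadm -m iface -I iface0 -o update -n iface.net_ifacename -v eth2
  iscsiadm -m iface -I iface1 -o new
  iscsiadm -m iface -I iface1 -o update -n iface.net_ifacename -v eth3
  iscsiadm -m discovery -t st -p <group-ip>:3260 -I iface0 -I iface1
  iscsiadm -m node -p <group-ip>:3260 --login

That gives one session per iface, and dm-multipath can then round-robin over
them without any custom routing rules.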

-- Pasi

> 
> #!/bin/bash
> 
> # Michael Vallaly (Feb 2008)
> 
> # This script configures local interfaces for use CNU's MP-IO iSCSI
> SAN Network
> # NOTE: This script may be called from /etc/network/interfaces without
> parameters
> 
> SAN_NETWORKS="10.99.99.0/24"
> SAN_MTU="9000"
> 
> IP_BIN="/bin/ip"
> ETHTOOL_BIN="/usr/sbin/ethtool"
> 
> 
> 
> # Check for required binaries
> for req_bin in $IP_BIN $ETHTOOL_BIN; do
>   if [ ! -x "$req_bin" ]; then
> echo "Can't execute ${req_bin}! Aborting.."
> exit 1
>   fi
> done
> 
> usage="Usage: $0 -i  -m "
> 
> while getopts "i:m:" options; do
>   case $options in
> i ) interfaces+=" $OPTARG";;
> m ) action=$OPTARG;;
> \? ) echo $usage
>  exit 1;;
>  * ) echo $usage
>   exit 1;;
>   esac
> done
> 
> # Check for ifup/down enviornment variables
> if [[ -n $MODE && -n $IFACE ]]; then
>interfaces=$IFACE
>action=$MODE
> fi
> 
> # Figure out what we are doing
> case $action in
>start ) action="add";;
>  add ) action="add";;
> stop ) action="del";;
>  del ) action="del";;
>* ) echo $usage
>exit 1;;
> esac
> 
> for interface in $interfaces; do
> 
>   # Check that the interface exists before we go playing with it
>   if ! ($IP_BIN addr |egrep -nqe "inet.*$interface" && $IP_BIN link |
> egrep -nqe "$interface.*,UP"); then
> continue
>   fi
>   table_num=$((`echo ${interface} |tr -d [[:alpha:]]` + 10))
>   interface_ip=`$IP_BIN route show scope link proto kernel dev $
> {interface} |awk '{print $3}'`
> 
>   if [ $table_num -gt 252 ]; then
> echo "Invalid SAN interface (${table_num}) specified!"
> exit 1
>   fi
> 
>   for network in $SAN_NETWORKS; do
> 
> # Configure our remote SAN networks
> 
> localnet=`$IP_BIN route |grep "${interface}  proto kernel" |cut -
> d" " -f1`
> existing_san_iface=`$IP_BIN route show ${network} |grep -we "via" |
> awk '{print $5}'`
> 
> # Don't add networks if they are locally connected
> if [[ "$localnet" == "$network" ]]; then
>   continue
> else
> 
>   # Set our default gateway for remote networks
>   local=`echo $localnet |cut -d. -f1-3`
> 
>   if [[ "$action" == "add" ]]; then
> 
> # Create a unique route table
> $IP_BIN route add ${localnet} dev ${interface} table $
> {table_num}
> $IP_BIN route add ${network} via ${local}.1 dev ${interface}
> table ${table_num}
> $IP_BIN rule add from ${interface_ip}/32 lookup ${table_num}
> route_match=`echo $existing_san_iface $interface |tr -t ' '
> '\n'|sort -u`
> 
>   else
> 
> # Delete the route table
> $IP_BIN rule del from ${interface_ip}/32 lookup ${table_num}
> 2> /dev/null
> route_match=`echo $existing_san_iface $interface |tr -t ' '
> '\n'|sort -u |grep -v ${interface}`
> 
>   fi
> 
>   # Generate required next hops
>   route_opt=""
>   for dev in $route_match; do
> route_opt="$route_opt nexthop via ${local}.1 dev ${dev}"
>   done
> 
>   # Cleanup default route
>   $IP_BIN route del ${network} via ${local}.1 2> /dev/null
> 
>   # Add/ReAdd the default route
>   if [ "${route_opt}" != "" ]; then
> eval $IP_BIN route add ${network} scope global ${route_opt}
>   fi
> 
> fi
> 
>   done
> 
>   # Flush the routing cache
>   $IP_BIN route flush cache
> 
>   # Configure our local network interfaces
>   # Configure our local network interfaces
> 
>   if [[ "$action" == "add" ]]; then
> 
> # Set the proper MTU for the network interface (note this may take
> the interface offline!)
> if [ "$($IP_BIN link show $interface |grep "mtu" |cut -d" " -f
> 5)" != "9000" ]; then
>   $IP_BIN link set $interface mtu $SAN_MTU
> fi
> 
> # Force flowcontrol on
> $ETHTOOL_BIN --pause $interface autoneg off rx on tx on
> 
> # Only ARP for local interface
> echo "1" > /proc/sys/net/ipv4/conf/${interface}/arp_ignore
> 
> else
> 
> # Set the proper MTU for the network interface (note this may take
> the interface offline!)
> if [ "$($IP_BIN link show $interface |grep "mtu" 

Re: Antw: Re: iscsi_tcp: datalen error while doing IO on sles 10 sp2

2010-08-02 Thread Pasi Kärkkäinen
On Thu, Jul 29, 2010 at 12:54:22AM -0700, Anil wrote:
> Hi,
> 
> I've tested few things and noticed the following:
> 
> 1)Written a simple module which does nothing but get the bdev and call
> generic_make_request. Result was that, I still got iscsi connection
> 1011 state 3 errors, continually.
> 
> 2) configured a system with ietadm iscsitarget. dd to a file, exported
> it as a iscsi disk to the initiator. Now, the open-iscsi recognized
> the disk and when I do IO, still got the errors:
> 

Are you doing these tests from a guest VM? 

Have you verified your network is OK from the VM? 
Try using iperf, in both udp and tcp modes..
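Something like this, with iperf in server mode on the target box and as a
client in the VM (the address is a placeholder):

  iperf -s                              # on the target
  iperf -c <target-ip> -t 30            # TCP test from the VM
  iperf -c <target-ip> -u -b 900M -t 30 # UDP test from the VM

and check for packet loss / retransmits and whether the throughput looks sane.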

-- Pasi

> iscsi: received itt 0 expected session age (0)
> connection1:0: iscsi: detected conn error (1011)
> connection1:0 is operational after recovery (1 attempts)
> Kernel reported iSCSI connection 1:0 error (1011) state (3)
> iscsi: received itt 0 expected session age (0)
>  so on
> 
> 3) put both the target and initiator in the same subnet , still go the
> same errors.
> 
> 4) DS-1-29-SLES10-SP2-64bit-Xen:~ # cat /etc/sysconfig/network/ifcfg-
> eth-id-5a\:18\:2b\:6f\:83\:60
> BOOTPROTO='static'
> BROADCAST=''
> ETHTOOL_OPTIONS=''
> IPADDR='192.168.1.29'
> MTU=''
> NAME='Xen Virtual Ethernet Card 0'
> NETMASK='255.255.254.0'
> NETWORK=''
> REMOTE_IPADDR=''
> STARTMODE='auto'
> UNIQUE='+jsg.Sd+ykfyvlK4'
> USERCONTROL='no'
> _nm_name='bus-none-vif-0'
> 
> 
> Changed the MTU to 9000 and did service network restart and got the
> errors.
> 
> 5) I see this in my log messages "iscsid: transport class version
> 2.0-724. iscsid version 2.0-868"
> 
> sometimes I see the errors when no IO happens.
> 
> DS-1-29-SLES10-SP2-64bit-Xen:~/open-iscsi-2.0-871 # uname -a
> Linux DS-1-29-SLES10-SP2-64bit-Xen 2.6.16.60-0.39.3-xen #1 SMP Mon May
> 11 11:46:34 UTC 2009 x86_64 x86_64 x86_64 GNU/Linux
> 
> I tried compiling the latest open-iscsi as I didnt have the debug
> options for open-iscsi but ended up getting errors like below while
> compiling the kernel part of the package:
> 
> DS-1-29-SLES10-SP2-64bit-Xen:~/open-iscsi-2.0-871 # make KSRC=/usr/src/
> linux-2.6.16.60-0.39.3 KBUILD_OUTPUT=/usr/src/linux-2.6.16.60-0.39.3-
> obj/x86_64/xen
> make -C utils/sysdeps
> make[1]: Entering directory `/root/open-iscsi-2.0-871/utils/sysdeps'
> cc   -O2 -fno-inline -Wall -Wstrict-prototypes -g   -c -o sysdeps.o
> sysdeps.c
> make[1]: Leaving directory `/root/open-iscsi-2.0-871/utils/sysdeps'
> make -C utils/fwparam_ibft
> make[1]: Entering directory `/root/open-iscsi-2.0-871/utils/
> fwparam_ibft'
> cc -O2 -g -fPIC -Wall -Wstrict-prototypes -I../../include -I../../
> usr   -c -o fw
> cc -O2 -g -fPIC -Wall -Wstrict-prototypes -I../../include -I../../
> usr   -c -o fw
> cc -O2 -g -fPIC -Wall -Wstrict-prototypes -I../../include -I../../
> usr   -c -o pr
> :1622: warning: 'yyunput' defined but not used
> cc -O2 -g -fPIC -Wall -Wstrict-prototypes -I../../include -I../../
> usr   -c -o pr
> cc -O2 -g -fPIC -Wall -Wstrict-prototypes -I../../include -I../../
> usr   -c -o fw
> fwparam_ppc.c: In function 'loop_devs':
> fwparam_ppc.c:358: warning: passing argument 4 of 'qsort' from
> incompatible poin
> make[1]: Leaving directory `/root/open-iscsi-2.0-871/utils/
> fwparam_ibft'
> make -C usr
> make[1]: Entering directory `/root/open-iscsi-2.0-871/usr'
> cc -O2 -g -Wall -Wstrict-prototypes -I../include -I. -DLinux -
> DNETLINK_ISCSI=8 -
> cc -O2 -g -Wall -Wstrict-prototypes -I../include -I. -DLinux -
> DNETLINK_ISCSI=8 -
> cc -O2 -g -Wall -Wstrict-prototypes -I../include -I. -DLinux -
> DNETLINK_ISCSI=8 -
> cc -O2 -g -Wall -Wstrict-prototypes -I../include -I. -DLinux -
> DNETLINK_ISCSI=8 -
> cc -O2 -g -Wall -Wstrict-prototypes -I../include -I. -DLinux -
> DNETLINK_ISCSI=8 -
> log.c:334: warning: '__dump_char' defined but not used
> cc -O2 -g -Wall -Wstrict-prototypes -I../include -I. -DLinux -
> DNETLINK_ISCSI=8 -
> cc -O2 -g -Wall -Wstrict-prototypes -I../include -I. -DLinux -
> DNETLINK_ISCSI=8 -
> cc -O2 -g -Wall -Wstrict-prototypes -I../include -I. -DLinux -
> DNETLINK_ISCSI=8 -D_GNU_SOURCE   -c -o iface.o iface.c
> iface.c:312: warning: 'iface_get_next_id' defined but not used
> cc -O2 -g -Wall -Wstrict-prototypes -I../include -I. -DLinux -
> DNETLINK_ISCSI=8 -D_GNU_SOURCE   -c -o idbm.o idbm.c
> cc -O2 -g -Wall -Wstrict-prototypes -I../include -I. -DLinux -
> DNETLINK_ISCSI=8 -D_GNU_SOURCE   -c -o sysfs.o sysfs.c
> cc -O2 -g -Wall -Wstrict-prototypes -I../include -I. -DLinux -
> DNETLINK_ISCSI=8 -D_GNU_SOURCE   -c -o host.o host.c
> cc -O2 -g -Wall -Wstrict-prototypes -I../include -I. -DLinux -
> DNETLINK_ISCSI=8 -D_GNU_SOURCE   -c -o session_info.o session_info.c
> cc -O2 -g -Wall -Wstrict-prototypes -I../include -I. -DLinux -
> DNETLINK_ISCSI=8 -D_GNU_SOURCE   -c -o iscsi_sysfs.o iscsi_sysfs.c
> cc -O2 -g -Wall -Wstrict-prototypes -I../include -I. -DLinux -
> DNETLINK_ISCSI=8 -D_GNU_SOURCE   -c -o netlink.o netlink.c
> cc -O2 -g -Wall -Wstrict-prototypes -I../include -I. -DLinux -
> DNETL

Re: iscsi_tcp: datalen error while doing IO on sles 10 sp2

2010-07-20 Thread Pasi Kärkkäinen
On Mon, Jul 19, 2010 at 11:29:46AM -0500, Mike Christie wrote:
> On 07/18/2010 02:07 PM, Anil wrote:
>> I keep getting the below errors continually when data from a scsi
>> device is read and written on to xvd device. All this is done by our
>> block device driver.
>
> What is a xvd device? Is that in the upsteam kernel or one some website?
>

xvd = Xen Virtual Disk. So it's a Xen paravirtualized disk device.

-- Pasi

>>
>> We read data by creating a 64k buffer, and constructing a bio from
>> that buffer by zero-copy and then do a generic_make_request. We later
>> construct another bio from the same buffer that is now containing data
>> and do generic_make_request. we call this as syncing.
>>
>> I want to understand why this datalen error happens in the first
>> place. These errors are noticed only in sles 10 sp2 kernel and not on
>> RHEL. So, want to understand why these errors occur at all in the
>> first place.
>>
>> As these errors happen continually, the system crawls. Please help me
>> debug this.
>>
>> Jul 19 00:27:25 L61-152 kernel: iscsi_tcp: datalen 7171547>  131072
>
> It means the initiator and target agreed that the target would send  
> iscsi pdus with at most 131072 bytes of data. To the initiator it looks  
> like the target has sent a PDU with a data length of 7171547 bytes.
>
> Either, the target has a bug, the initiator has a bug, or somewhere the  
> packet got messed up and some bits got switched around so we are seeing  
> a invalid length by accident.
>
> What iscsi target are you using?
>
> -- 
> You received this message because you are subscribed to the Google Groups 
> "open-iscsi" group.
> To post to this group, send email to open-is...@googlegroups.com.
> To unsubscribe from this group, send email to 
> open-iscsi+unsubscr...@googlegroups.com.
> For more options, visit this group at 
> http://groups.google.com/group/open-iscsi?hl=en.
>




Re: Linux IO scalability and pushing over million IOPS over software iSCSI?

2010-06-25 Thread Pasi Kärkkäinen
Hello,

How about numbers using other transports? FC? Has someone done benchmarks 
recently? 

-- Pasi

On Tue, Jun 22, 2010 at 04:44:10PM +0300, Pasi Kärkkäinen wrote:
> Hello,
> 
> Recently Intel and Microsoft demonstrated pushing over 1.25 million IOPS 
> using software iSCSI and a single 10 Gbit NIC:
> http://communities.intel.com/community/wired/blog/2010/04/22/1-million-iops-how-about-125-million
> 
> Earlier they achieved one (1.0) million IOPS:
> http://blog.fosketts.net/2010/01/14/microsoft-intel-push-million-iscsi-iops/
> http://communities.intel.com/community/openportit/server/blog/2010/01/19/100-iops-with-iscsi--thats-not-a-typo
> 
> The benchmark setup explained:
> http://communities.intel.com/community/wired/blog/2010/04/20/1-million-iop-article-explained
> http://dlbmodigital.microsoft.com/ppt/TN-100114-JSchwartz_SMorgan_JPlawner-1032432956-FINAL.pdf
> 
> 
> So the question is.. does someone have enough new hardware to try this with 
> Linux?
> Can Linux scale to over 1 million IO operations per second?
> 
> 
> Intel and Microsoft used the following for the benchmark:
> 
>   - Single Windows 2008 R2 system with Intel Xeon 5600 series CPU, 
> single-port Intel 82599 10 Gbit NIC and MS software-iSCSI initiator 
> connecting to 50x iSCSI LUNs.
>   - IOmeter to benchmark all the 50x iSCSI LUNs concurrently.
> 
>   - 10 servers as iSCSI targets, each having 5x ramdisk LUNs, total of 
> 50x ramdisk LUNs.
>   - iSCSI target server also used 10 Gbit NICs, and StarWind iSCSI target 
> software.
>   - Cisco 10 Gbit switch (Nexus) connecting the servers.
> 
>   - For the 1.25 million IOPS result they used 512 bytes/IO benchmark, 
> outstanding IOs=20.
>   - No jumbo frames, just the standard MTU=1500.
> 
> They used many LUNs so they can scale the iSCSI connections to multiple CPU 
> cores 
> using RSS (Receive Side Scaling) and MSI-X interrupts. 
> 
> So.. Who wants to try this? :) I don't unfortunately have 11x extra computers 
> with 10 Gbit NICs atm to try it myself..
> 
> This test covers networking, block layer, and software iSCSI initiator..
> so it would be a nice to see if we find any bottlenecks from current Linux 
> kernel.
> 
> Comments please!
> 
> -- Pasi
> 
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majord...@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at  http://www.tux.org/lkml/




Re: Linux IO scalability and pushing over million IOPS over software iSCSI?

2010-06-22 Thread Pasi Kärkkäinen
On Tue, Jun 22, 2010 at 09:24:21AM -0700, Jiahua wrote:
> Maybe a native question, but why need 50 targets? Each target can only
> serve about 25K IOPS? A single ramdisk should be able to handle this.
> Where is the bottleneck?
> 

That's a good question.. dunno. Maybe the StarWind iSCSI target didn't scale
very well? :)

Or maybe it's related to the multi-queue support on the initiator NIC: to
scale the load across multiple queues, and thus multiple IRQs and multiple
CPU cores, maybe they needed multiple target IP addresses, and it was easiest
to just use multiple target systems?


> We had a similar experiment but with Infiniband and Lustre. It turn
> out Lustre has a rate limit in the RPC handling layer. Is it the same
> problem here?
> 

Note that we're trying to benchmark the *initiator* here, not the targets..

-- Pasi

> Jiahua
> 
> 
> 
> On Tue, Jun 22, 2010 at 6:44 AM, Pasi Kärkkäinen  wrote:
> > Hello,
> >
> > Recently Intel and Microsoft demonstrated pushing over 1.25 million IOPS 
> > using software iSCSI and a single 10 Gbit NIC:
> > http://communities.intel.com/community/wired/blog/2010/04/22/1-million-iops-how-about-125-million
> >
> > Earlier they achieved one (1.0) million IOPS:
> > http://blog.fosketts.net/2010/01/14/microsoft-intel-push-million-iscsi-iops/
> > http://communities.intel.com/community/openportit/server/blog/2010/01/19/100-iops-with-iscsi--thats-not-a-typo
> >
> > The benchmark setup explained:
> > http://communities.intel.com/community/wired/blog/2010/04/20/1-million-iop-article-explained
> > http://dlbmodigital.microsoft.com/ppt/TN-100114-JSchwartz_SMorgan_JPlawner-1032432956-FINAL.pdf
> >
> >
> > So the question is.. does someone have enough new hardware to try this with 
> > Linux?
> > Can Linux scale to over 1 million IO operations per second?
> >
> >
> > Intel and Microsoft used the following for the benchmark:
> >
> >        - Single Windows 2008 R2 system with Intel Xeon 5600 series CPU,
> >          single-port Intel 82599 10 Gbit NIC and MS software-iSCSI initiator
> >          connecting to 50x iSCSI LUNs.
> >        - IOmeter to benchmark all the 50x iSCSI LUNs concurrently.
> >
> >        - 10 servers as iSCSI targets, each having 5x ramdisk LUNs, total of 
> > 50x ramdisk LUNs.
> >        - iSCSI target server also used 10 Gbit NICs, and StarWind iSCSI 
> > target software.
> >        - Cisco 10 Gbit switch (Nexus) connecting the servers.
> >
> >        - For the 1.25 million IOPS result they used 512 bytes/IO benchmark, 
> > outstanding IOs=20.
> >        - No jumbo frames, just the standard MTU=1500.
> >
> > They used many LUNs so they can scale the iSCSI connections to multiple CPU 
> > cores
> > using RSS (Receive Side Scaling) and MSI-X interrupts.
> >
> > So.. Who wants to try this? :) I don't unfortunately have 11x extra 
> > computers with 10 Gbit NICs atm to try it myself..
> >
> > This test covers networking, block layer, and software iSCSI initiator..
> > so it would be a nice to see if we find any bottlenecks from current Linux 
> > kernel.
> >
> > Comments please!
> >
> > -- Pasi
> >
> > --
> > To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> > the body of a message to majord...@vger.kernel.org
> > More majordomo info at  http://vger.kernel.org/majordomo-info.html
> > Please read the FAQ at  http://www.tux.org/lkml/
> >




Linux IO scalability and pushing over million IOPS over software iSCSI?

2010-06-22 Thread Pasi Kärkkäinen
Hello,

Recently Intel and Microsoft demonstrated pushing over 1.25 million IOPS using 
software iSCSI and a single 10 Gbit NIC:
http://communities.intel.com/community/wired/blog/2010/04/22/1-million-iops-how-about-125-million

Earlier they achieved one (1.0) million IOPS:
http://blog.fosketts.net/2010/01/14/microsoft-intel-push-million-iscsi-iops/
http://communities.intel.com/community/openportit/server/blog/2010/01/19/100-iops-with-iscsi--thats-not-a-typo

The benchmark setup explained:
http://communities.intel.com/community/wired/blog/2010/04/20/1-million-iop-article-explained
http://dlbmodigital.microsoft.com/ppt/TN-100114-JSchwartz_SMorgan_JPlawner-1032432956-FINAL.pdf


So the question is.. does someone have enough new hardware to try this with 
Linux?
Can Linux scale to over 1 million IO operations per second?


Intel and Microsoft used the following for the benchmark:

- Single Windows 2008 R2 system with Intel Xeon 5600 series CPU, 
  single-port Intel 82599 10 Gbit NIC and MS software-iSCSI initiator 
  connecting to 50x iSCSI LUNs.
- IOmeter to benchmark all the 50x iSCSI LUNs concurrently.

- 10 servers as iSCSI targets, each having 5x ramdisk LUNs, total of 
50x ramdisk LUNs.
- iSCSI target server also used 10 Gbit NICs, and StarWind iSCSI target 
software.
- Cisco 10 Gbit switch (Nexus) connecting the servers.

- For the 1.25 million IOPS result they used 512 bytes/IO benchmark, 
outstanding IOs=20.
- No jumbo frames, just the standard MTU=1500.

They used many LUNs so they can scale the iSCSI connections to multiple CPU 
cores 
using RSS (Receive Side Scaling) and MSI-X interrupts. 
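(On Linux you can check whether the per-queue interrupts really spread across
the CPUs with something like 'grep eth /proc/interrupts', and if needed pin
individual queue IRQs by writing a CPU mask to /proc/irq/<irq>/smp_affinity.
The interrupt naming varies per driver.)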

So.. who wants to try this? :) Unfortunately I don't have 11 extra computers
with 10 Gbit NICs atm to try it myself..

This test covers the networking stack, the block layer, and the software iSCSI
initiator, so it would be nice to see whether we find any bottlenecks in the
current Linux kernel.
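If someone wants to attempt a rough Linux equivalent, an fio job along these
lines should approximate the 512-byte, 20-outstanding-IOs pattern (device
names are placeholders, and fio is of course not IOmeter):

  [global]
  ioengine=libaio
  direct=1
  rw=randread
  bs=512
  iodepth=20
  runtime=60
  time_based
  group_reporting

  [lun0]
  filename=/dev/sdb

  [lun1]
  filename=/dev/sdc

One [lunN] section per iSCSI LUN, run against all of them concurrently.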

Comments please!

-- Pasi




Re: Over one million IOPS using software iSCSI and 10 Gbit Ethernet, 1.25 million IOPS update

2010-06-19 Thread Pasi Kärkkäinen
On Fri, Jun 18, 2010 at 04:48:57PM +0300, guy keren wrote:
> Pasi Kärkkäinen wrote:
>> On Fri, Jun 18, 2010 at 12:10:04PM +0300, guy keren wrote:
>>> Pasi Kärkkäinen wrote:
>>>> On Mon, Jun 14, 2010 at 11:39:47PM +0400, Vladislav Bolkhovitin wrote:
>>>>> Pasi Kärkkäinen, on 06/11/2010 11:26 AM wrote:
>>>>>> On Fri, Feb 05, 2010 at 02:10:32PM +0300, Vladislav Bolkhovitin wrote:
>>>>>>> Pasi Kärkkäinen, on 01/28/2010 03:36 PM wrote:
>>>>>>>> Hello list,
>>>>>>>>
>>>>>>>> Please check these news items:
>>>>>>>> http://blog.fosketts.net/2010/01/14/microsoft-intel-push-million-iscsi-iops/
>>>>>>>> http://communities.intel.com/community/openportit/server/blog/2010/01/19/100-iops-with-iscsi--thats-not-a-typo
>>>>>>>> http://www.infostor.com/index/blogs_new/dave_simpson_storage/blogs/infostor/dave_simpon_storage/post987_37501094375591341.html
>>>>>>>>
>>>>>>>> "1,030,000 IOPS over a single 10 Gb Ethernet link"
>>>>>>>>
>>>>>>>> "Specifically, Intel and Microsoft clocked 1,030,000 IOPS 
>>>>>>>> (with  512-byte blocks), and more than 2,250MBps with large 
>>>>>>>> block sizes (16KB to 256KB) using the Iometer benchmark"
>>>>>>>>
>>>>>>>> So.. who wants to beat that using Linux + open-iscsi? :)
>>>>>>> I personally, don't like such tests and don't trust them at 
>>>>>>> all. They  are pure marketing. The only goal of them is to 
>>>>>>> create impression that X  (Microsoft and Windows in this 
>>>>>>> case) is a super-puper ahead of the  world. I've seen on the 
>>>>>>> Web a good article about usual tricks used by  vendors to 
>>>>>>> cheat benchmarks to get good marketing material, but,  
>>>>>>> unfortunately, can't find link on it at the moment.
>>>>>>>
>>>>>>> The problem is that you can't say from such tests if X will 
>>>>>>> also  "ahead  of the world" on real life usages, because such 
>>>>>>> tests always heavily  optimized for particular used 
>>>>>>> benchmarks and such optimizations almost  always hurt real 
>>>>>>> life cases. And you hardly find descriptions of those  
>>>>>>> optimizations as well as a scientific description of the 
>>>>>>> tests themself.  The results published practically only in 
>>>>>>> marketing documents.
>>>>>>>
>>>>>>> Anyway, as far as I can see Linux supports all the used 
>>>>>>> hardware as well  as all advance performance modes of it, so 
>>>>>>> if one repeats this test in  the same setup, he/she should 
>>>>>>> get not worse results.
>>>>>>>
>>>>>>> For me personally it was funny to see how MS presents in the  
>>>>>>> WinHEC   presentation 
>>>>>>> (http://download.microsoft.com/download/5/E/6/5E66B27B-988B-4F50-AF3A-C2FF1E62180F/COR-T586_WH08.pptx)
>>>>>>>  
>>>>>>> that they have 1.1GB/s from 4 connections. In the beginning 
>>>>>>> of 2008 I  saw a *single* dd pushing data on that rate over a 
>>>>>>> *single* connection  from Linux initiator to iSCSI-SCST 
>>>>>>> target using regular Myricom hardware  without any special 
>>>>>>> acceleration. I didn't know how proud I must have  been for 
>>>>>>> Linux :).
>>>>>>>
>>>>>> It seems they've described the setup here:
>>>>>> http://communities.intel.com/community/wired/blog/2010/04/20/1-million-iop-article-explained
>>>>>>
>>>>>> And today they seem to have a demo which produces 1.3 million IOPS!
>>>>>>
>>>>>> "1 Million IOPS? How about 1.25 Million!":
>>>>>> http://communities.intel.com/community/wired/blog/2010/04/22/1-million-iops-how-about-125-million
>>>>> I'm glad for them. The only thing surprises me that none of the 
>>>>> Linux  vendors, including Intel itself, interested to repeat this 
>>>>> test for  Linux and fix possible found problems, if any. Ten 
>

Re: Over one million IOPS using software iSCSI and 10 Gbit Ethernet, 1.25 million IOPS update

2010-06-18 Thread Pasi Kärkkäinen
On Fri, Jun 18, 2010 at 12:10:04PM +0300, guy keren wrote:
> Pasi Kärkkäinen wrote:
>> On Mon, Jun 14, 2010 at 11:39:47PM +0400, Vladislav Bolkhovitin wrote:
>>> Pasi Kärkkäinen, on 06/11/2010 11:26 AM wrote:
>>>> On Fri, Feb 05, 2010 at 02:10:32PM +0300, Vladislav Bolkhovitin wrote:
>>>>> Pasi Kärkkäinen, on 01/28/2010 03:36 PM wrote:
>>>>>> Hello list,
>>>>>>
>>>>>> Please check these news items:
>>>>>> http://blog.fosketts.net/2010/01/14/microsoft-intel-push-million-iscsi-iops/
>>>>>> http://communities.intel.com/community/openportit/server/blog/2010/01/19/100-iops-with-iscsi--thats-not-a-typo
>>>>>> http://www.infostor.com/index/blogs_new/dave_simpson_storage/blogs/infostor/dave_simpon_storage/post987_37501094375591341.html
>>>>>>
>>>>>> "1,030,000 IOPS over a single 10 Gb Ethernet link"
>>>>>>
>>>>>> "Specifically, Intel and Microsoft clocked 1,030,000 IOPS (with 
>>>>>>  512-byte blocks), and more than 2,250MBps with large block 
>>>>>> sizes (16KB to 256KB) using the Iometer benchmark"
>>>>>>
>>>>>> So.. who wants to beat that using Linux + open-iscsi? :)
>>>>> I personally, don't like such tests and don't trust them at all. 
>>>>> They  are pure marketing. The only goal of them is to create 
>>>>> impression that X  (Microsoft and Windows in this case) is a 
>>>>> super-puper ahead of the  world. I've seen on the Web a good 
>>>>> article about usual tricks used by  vendors to cheat benchmarks 
>>>>> to get good marketing material, but,  unfortunately, can't find 
>>>>> link on it at the moment.
>>>>>
>>>>> The problem is that you can't say from such tests if X will also  
>>>>> "ahead  of the world" on real life usages, because such tests 
>>>>> always heavily  optimized for particular used benchmarks and such 
>>>>> optimizations almost  always hurt real life cases. And you hardly 
>>>>> find descriptions of those  optimizations as well as a scientific 
>>>>> description of the tests themself.  The results published 
>>>>> practically only in marketing documents.
>>>>>
>>>>> Anyway, as far as I can see Linux supports all the used hardware 
>>>>> as well  as all advance performance modes of it, so if one 
>>>>> repeats this test in  the same setup, he/she should get not worse 
>>>>> results.
>>>>>
>>>>> For me personally it was funny to see how MS presents in the 
>>>>> WinHEC   presentation
>>>>> (http://download.microsoft.com/download/5/E/6/5E66B27B-988B-4F50-AF3A-C2FF1E62180F/COR-T586_WH08.pptx)
>>>>>  
>>>>> that they have 1.1GB/s from 4 connections. In the beginning of 
>>>>> 2008 I  saw a *single* dd pushing data on that rate over a 
>>>>> *single* connection  from Linux initiator to iSCSI-SCST target 
>>>>> using regular Myricom hardware  without any special acceleration. 
>>>>> I didn't know how proud I must have  been for Linux :).
>>>>>
>>>> It seems they've described the setup here:
>>>> http://communities.intel.com/community/wired/blog/2010/04/20/1-million-iop-article-explained
>>>>
>>>> And today they seem to have a demo which produces 1.3 million IOPS!
>>>>
>>>> "1 Million IOPS? How about 1.25 Million!":
>>>> http://communities.intel.com/community/wired/blog/2010/04/22/1-million-iops-how-about-125-million
>>> I'm glad for them. The only thing surprises me that none of the Linux 
>>>  vendors, including Intel itself, interested to repeat this test for  
>>> Linux and fix possible found problems, if any. Ten years ago similar  
>>> test about Linux TCP scalability limitations comparing with Windows   
>>> caused massive reaction and great TCP improvements.
>>>
>>
>> Yeah, I'd like to see this aswell.
>> I don't think I have enough extra hardware myself.. atm.
>>
>> Does someone have enough boxes with 10 Gbit connections? :)
>>
>>> The way how to do the test is quite straightforward, starting from   
>>> making for Linux similarly effective test tool as IOMeter on Windows  
>>> [1]. Maybe, the lack of such tool scares the vendors away?
>>>
>>
>> I'm wondering how big effort it would be to fix IOmeter for linux..  
>> iirc there were some patches to fix the AIO stuff.
>
> the AIO stuff inside IOMeter won't necessarily help, since the AIO  
> implementation in linux kernels is not efficient enough*
>
> * note: i'm only updated to kernel 2.6.18 - but i didn't here there was  
> a strong effort to make this better in newer kernels. correct me if i'm  
> wrong.
>

So what's the actual problem? 

-- Pasi




Re: Over one million IOPS using software iSCSI and 10 Gbit Ethernet, 1.25 million IOPS update

2010-06-18 Thread Pasi Kärkkäinen
On Mon, Jun 14, 2010 at 11:39:47PM +0400, Vladislav Bolkhovitin wrote:
> Pasi Kärkkäinen, on 06/11/2010 11:26 AM wrote:
>> On Fri, Feb 05, 2010 at 02:10:32PM +0300, Vladislav Bolkhovitin wrote:
>>> Pasi Kärkkäinen, on 01/28/2010 03:36 PM wrote:
>>>> Hello list,
>>>>
>>>> Please check these news items:
>>>> http://blog.fosketts.net/2010/01/14/microsoft-intel-push-million-iscsi-iops/
>>>> http://communities.intel.com/community/openportit/server/blog/2010/01/19/100-iops-with-iscsi--thats-not-a-typo
>>>> http://www.infostor.com/index/blogs_new/dave_simpson_storage/blogs/infostor/dave_simpon_storage/post987_37501094375591341.html
>>>>
>>>> "1,030,000 IOPS over a single 10 Gb Ethernet link"
>>>>
>>>> "Specifically, Intel and Microsoft clocked 1,030,000 IOPS (with  
>>>> 512-byte blocks), and more than 2,250MBps with large block sizes 
>>>> (16KB to 256KB) using the Iometer benchmark"
>>>>
>>>> So.. who wants to beat that using Linux + open-iscsi? :)
>>> I personally, don't like such tests and don't trust them at all. They 
>>>  are pure marketing. The only goal of them is to create impression 
>>> that X  (Microsoft and Windows in this case) is a super-puper ahead 
>>> of the  world. I've seen on the Web a good article about usual tricks 
>>> used by  vendors to cheat benchmarks to get good marketing material, 
>>> but,  unfortunately, can't find link on it at the moment.
>>>
>>> The problem is that you can't say from such tests if X will also 
>>> "ahead  of the world" on real life usages, because such tests always 
>>> heavily  optimized for particular used benchmarks and such 
>>> optimizations almost  always hurt real life cases. And you hardly 
>>> find descriptions of those  optimizations as well as a scientific 
>>> description of the tests themself.  The results published practically 
>>> only in marketing documents.
>>>
>>> Anyway, as far as I can see Linux supports all the used hardware as 
>>> well  as all advance performance modes of it, so if one repeats this 
>>> test in  the same setup, he/she should get not worse results.
>>>
>>> For me personally it was funny to see how MS presents in the WinHEC   
>>> presentation   
>>> (http://download.microsoft.com/download/5/E/6/5E66B27B-988B-4F50-AF3A-C2FF1E62180F/COR-T586_WH08.pptx)
>>>  
>>> that they have 1.1GB/s from 4 connections. In the beginning of 2008 I 
>>>  saw a *single* dd pushing data on that rate over a *single* 
>>> connection  from Linux initiator to iSCSI-SCST target using regular 
>>> Myricom hardware  without any special acceleration. I didn't know how 
>>> proud I must have  been for Linux :).
>>>
>>
>> It seems they've described the setup here:
>> http://communities.intel.com/community/wired/blog/2010/04/20/1-million-iop-article-explained
>>
>> And today they seem to have a demo which produces 1.3 million IOPS!
>>
>> "1 Million IOPS? How about 1.25 Million!":
>> http://communities.intel.com/community/wired/blog/2010/04/22/1-million-iops-how-about-125-million
>
> I'm glad for them. The only thing surprises me that none of the Linux  
> vendors, including Intel itself, interested to repeat this test for  
> Linux and fix possible found problems, if any. Ten years ago similar  
> test about Linux TCP scalability limitations comparing with Windows  
> caused massive reaction and great TCP improvements.
>

Yeah, I'd like to see this as well.
I don't think I have enough extra hardware myself atm.

Does someone have enough boxes with 10 Gbit connections? :)

> The way how to do the test is quite straightforward, starting from  
> making for Linux similarly effective test tool as IOMeter on Windows  
> [1]. Maybe, the lack of such tool scares the vendors away?
>

I'm wondering how big an effort it would be to fix IOmeter for Linux..
IIRC there were some patches to fix the AIO stuff.

> Vlad
>
> [1] None of the performance measurement tools for Linux I've seen so  
> far, including disktest (although I've not looked at newer (1-1.5 years)  
> versions) and fio satisfied me for various reasons.
>

What's missing from ltp disktest?

-- Pasi




Re: Over one million IOPS using software iSCSI and 10 Gbit Ethernet, 1.25 million IOPS update

2010-06-11 Thread Pasi Kärkkäinen
On Fri, Feb 05, 2010 at 02:10:32PM +0300, Vladislav Bolkhovitin wrote:
> Pasi Kärkkäinen, on 01/28/2010 03:36 PM wrote:
>> Hello list,
>>
>> Please check these news items:
>> http://blog.fosketts.net/2010/01/14/microsoft-intel-push-million-iscsi-iops/
>> http://communities.intel.com/community/openportit/server/blog/2010/01/19/100-iops-with-iscsi--thats-not-a-typo
>> http://www.infostor.com/index/blogs_new/dave_simpson_storage/blogs/infostor/dave_simpon_storage/post987_37501094375591341.html
>>
>> "1,030,000 IOPS over a single 10 Gb Ethernet link"
>>
>> "Specifically, Intel and Microsoft clocked 1,030,000 IOPS (with 
>> 512-byte blocks), and more than 2,250MBps with large block sizes (16KB 
>> to 256KB) using the Iometer benchmark"
>>
>> So.. who wants to beat that using Linux + open-iscsi? :)
>
> I personally, don't like such tests and don't trust them at all. They  
> are pure marketing. The only goal of them is to create impression that X  
> (Microsoft and Windows in this case) is a super-puper ahead of the  
> world. I've seen on the Web a good article about usual tricks used by  
> vendors to cheat benchmarks to get good marketing material, but,  
> unfortunately, can't find link on it at the moment.
>
> The problem is that you can't say from such tests if X will also "ahead  
> of the world" on real life usages, because such tests always heavily  
> optimized for particular used benchmarks and such optimizations almost  
> always hurt real life cases. And you hardly find descriptions of those  
> optimizations as well as a scientific description of the tests themself.  
> The results published practically only in marketing documents.
>
> Anyway, as far as I can see Linux supports all the used hardware as well  
> as all advance performance modes of it, so if one repeats this test in  
> the same setup, he/she should get not worse results.
>
> For me personally it was funny to see how MS presents in the WinHEC  
> presentation  
> (http://download.microsoft.com/download/5/E/6/5E66B27B-988B-4F50-AF3A-C2FF1E62180F/COR-T586_WH08.pptx)
>  
> that they have 1.1GB/s from 4 connections. In the beginning of 2008 I  
> saw a *single* dd pushing data on that rate over a *single* connection  
> from Linux initiator to iSCSI-SCST target using regular Myricom hardware  
> without any special acceleration. I didn't know how proud I must have  
> been for Linux :).
>

It seems they've described the setup here:
http://communities.intel.com/community/wired/blog/2010/04/20/1-million-iop-article-explained

And today they seem to have a demo which produces 1.3 million IOPS!

"1 Million IOPS? How about 1.25 Million!":
http://communities.intel.com/community/wired/blog/2010/04/22/1-million-iops-how-about-125-million

-- Pasi




Re: iscsi performance via 10 Gig / Equallogic PS6010

2010-06-06 Thread Pasi Kärkkäinen
On Wed, May 26, 2010 at 10:32:58AM -0700, Taylor wrote:
> I'm curious what kind of performance numbers people can get from their
> iscsi setup, specifically via 10 Gig.
> 
> We are running with Linux servers connected to Dell Equallogic 10 Gig
> arrays on Suse.
> 
> Recently we were running under SLES 11, and with multipath were seeing
> about 2.5 Gig per NIC, or 5.0 Gbit/sec total IO throughput, but we
> were getting a large number of iscsi connection errors.  We are using
> 10 Gig NICs with jumbo frames.
> 
> We reimaged the server to OpenSuse, same hardware and configs
> otherwise, and since then we are getting about half, or 1.2 to 1.3
> Gbit per NIC, or 2.5 to 3.0 Gbit total IO throughput, but we've not
> had any iscsi connection errors.
> 
> What are other people seeing?  Doesn't need to be an equallogic, just
> any 10 Gig connection to an iscsi array and single host throughput
> numbers.
> 

Some Equallogic PS6010 10 Gbit numbers here..

NOTE! In this thread Vladislav said he has no problems getting
10 Gbit line rate in his environment using sequential IO with dd.
He was using 10 SAS disks in RAID-0, while in this Equallogic test
I *ONLY* have 8 SAS disks in RAID-10, so basically I get the write performance
of only 4 spindles. Vladislav had 2.5x more spindles/disk performance available!

(Yeah, it's silly to run a 10 Gbit performance test with only 8 disks in use,
and in RAID-10, but unfortunately I don't have other 10 Gbit equipment atm.)

Initiator is standard CentOS 5.5, using Intel 10 Gbit NIC, no configuration 
tweaks done.
Should I adjust something? queue depth of open-iscsi? NIC settings?


dmesg:

scsi9 : iSCSI Initiator over TCP/IP
eth5: no IPv6 routers present
  Vendor: EQLOGIC   Model: 100E-00   Rev: 4.3
  Type:   Direct-Access  ANSI SCSI revision: 05
SCSI device sdd: 134246400 512-byte hdwr sectors (68734 MB)
sdd: Write Protect is off
sdd: Mode Sense: ad 00 00 00
SCSI device sdd: drive cache: write through
SCSI device sdd: 134246400 512-byte hdwr sectors (68734 MB)
sdd: Write Protect is off
sdd: Mode Sense: ad 00 00 00
SCSI device sdd: drive cache: write through
 sdd: unknown partition table
sd 9:0:0:0: Attached scsi disk sdd
sd 9:0:0:0: Attached scsi generic sg6 type 0


[r...@dellr710 ~]# cat /sys/block/sdd/device/vendor
EQLOGIC
[r...@dellr710 ~]# cat /sys/block/sdd/device/model
100E-00

[r...@dellr710 ~]# cat /proc/partitions
major minor  #blocks  name

   8 0  285474816 sda
   8 1 248976 sda1
   8 2  285218010 sda2
 253 0   33554432 dm-0
 253 1   14352384 dm-1
   848   67123200 sdd

[r...@dellr710 ~]# cat /sys/block/sdd/queue/scheduler
noop anticipatory deadline [cfq]



So here we go, numbers using *default* settings:


write tests:


# for bs in 512 4k 8k 16k 32k 64k 128k 256k 512k 1024k; do echo "bs: $bs" && dd 
if=/dev/zero of=/dev/sdd bs=$bs count=32768 oflag=direct && sync; done
bs: 512
32768+0 records in
32768+0 records out
16777216 bytes (17 MB) copied, 12.25 seconds, 1.4 MB/s
bs: 4k
32768+0 records in
32768+0 records out
134217728 bytes (134 MB) copied, 11.8131 seconds, 11.4 MB/s
bs: 8k
32768+0 records in
32768+0 records out
268435456 bytes (268 MB) copied, 14.3359 seconds, 18.7 MB/s
bs: 16k
32768+0 records in
32768+0 records out
536870912 bytes (537 MB) copied, 19.7916 seconds, 27.1 MB/s
bs: 32k
32768+0 records in
32768+0 records out
1073741824 bytes (1.1 GB) copied, 19.9889 seconds, 53.7 MB/s
bs: 64k
32768+0 records in
32768+0 records out
2147483648 bytes (2.1 GB) copied, 28.4471 seconds, 75.5 MB/s
bs: 128k
32768+0 records in
32768+0 records out
4294967296 bytes (4.3 GB) copied, 46.6343 seconds, 92.1 MB/s
bs: 256k
32768+0 records in
32768+0 records out
8589934592 bytes (8.6 GB) copied, 84.692 seconds, 101 MB/s
bs: 512k
32768+0 records in
32768+0 records out
17179869184 bytes (17 GB) copied, 168.305 seconds, 102 MB/s
bs: 1024k
32768+0 records in
32768+0 records out
34359738368 bytes (34 GB) copied, 216.441 seconds, 159 MB/s



iostat during bs=1024k write test:

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
   0.000.002.25   10.610.00   87.14

Device:tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda   0.00 0.00 0.00  0  0
sda1  0.00 0.00 0.00  0  0
sda2  0.00 0.00 0.00  0  0
dm-0  0.00 0.00 0.00  0  0
dm-1  0.00 0.00 0.00  0  0
sdd 299.01 0.00306186.14  0 309248


read tests:
---

# for bs in 512 4k 8k 16k 32k 64k 128k 256k 512k 1024k; do echo "bs: $bs" && dd 
if=/dev/sdd bs=$bs of=/dev/zero count=32768 iflag=direct && sync; done
bs: 512
32768+0 records in
32768+0 records out
16777216 bytes (17 MB) copied, 4.1097 seconds, 4.1 MB/s
bs: 4k
32768+0 records in
32768+0 records out
134217728 bytes (134 M

Re: iscsi performance via 10 Gig

2010-05-28 Thread Pasi Kärkkäinen
On Fri, May 28, 2010 at 09:54:32AM -0700, Taylor wrote:
> Ulrich, I'll check on the fragmenting.  When you say IRQ assignments,
> are you just talking about cat /proc/interrupts?
> 
> 
> My tests were just doing dd from /dev/zero to create large files on 4
> seperate mount points of disk from the equallogic.
> 
> We have two 10 Gig Equallogics each with 15K 600 GB SAS drives.
> Beleive they are configured as Raid 50.
>

You may want to try RAID 10 instead if you're concerned about IO performance.

> With equallogics, data is supposed to be striped over number of arrays
> in storage group, so if we were to add another array, some background
> process would stripe exisiting data evenly over 48 disks.  I am not
> concerned about number of spindles or RAID config at this point.
> 

For optimal access to striped volumes you need the Equallogic-specific
multipath plugin, so that the initiator can immediately read the correct blocks
from the correct arrays without going through redirect sequences.

I don't think there's an Equallogic multipath plugin for Linux yet..
Afaik they're working on creating one. Windows already has one.
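
In the meantime a generic (non-EQL-aware) dm-multipath setup works; roughly
something like this in /etc/multipath.conf (just a sketch -- the vendor/product
strings are the EQL inquiry data, tune the rest to taste):

devices {
    device {
        vendor                "EQLOGIC"
        product               "100E-00"
        path_grouping_policy  multibus
        path_checker          tur
        failback              immediate
        rr_min_io             10
    }
}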

-- Pasi

> 
> 
> On May 27, 2:11 am, Vladislav Bolkhovitin  wrote:
> > Boaz Harrosh, on 05/26/2010 10:58 PM wrote:
> >
> >
> >
> >
> >
> > > On 05/26/2010 09:52 PM, Vladislav Bolkhovitin wrote:
> > >> Boaz Harrosh, on 05/26/2010 10:45 PM wrote:
> > >>> On 05/26/2010 09:42 PM, Vladislav Bolkhovitin wrote:
> >  Taylor, on 05/26/2010 09:32 PM wrote:
> > > I'm curious what kind of performance numbers people can get from their
> > > iscsi setup, specifically via 10 Gig.
> >
> > > We are running with Linux servers connected to Dell Equallogic 10 Gig
> > > arrays on Suse.
> >
> > > Recently we were running under SLES 11, and with multipath were seeing
> > > about 2.5 Gig per NIC, or 5.0 Gbit/sec total IO throughput, but we
> > > were getting a large number of iscsi connection errors.  We are using
> > > 10 Gig NICs with jumbo frames.
> >
> > > We reimaged the server to OpenSuse, same hardware and configs
> > > otherwise, and since then we are getting about half, or 1.2 to 1.3
> > > Gbit per NIC, or 2.5 to 3.0 Gbit total IO throughput, but we've not
> > > had any iscsi connection errors.
> >
> > > What are other people seeing?  Doesn't need to be an equallogic, just
> > > any 10 Gig connection to an iscsi array and single host throughput
> > > numbers.
> >  ISCSI-SCST/open-iscsi on a decent hardware can fully saturate 10GbE
> >  link. On writes even with a single stream, i.e. something like a single
> >  dd writing data to a single device.
> >
> > >>> Off topic question:
> > >>> That's a fast disk. A sata HD? the best I got for single sata was like
> > >>> 90 MB/s. Did you mean a RAM device of sorts.
> > >> The single stream data were both from a SAS RAID and RAMFS. The
> > >> multi-stream data were from RAMFS, because I don't have any reports
> > >> about any tests of iSCSI-SCST on fast enough SSDs.
> >
> > > Right thanks. So the SAS RAID had what? like 12-15 spindles?
> >
> > If I remember correctly, it was 10 spindles each capable of 150+MB/s.
> > The RAID was MD RAID0.
> >
> > Vlad- Hide quoted text -
> >
> > - Show quoted text -
> 
> -- 
> You received this message because you are subscribed to the Google Groups 
> "open-iscsi" group.
> To post to this group, send email to open-is...@googlegroups.com.
> To unsubscribe from this group, send email to 
> open-iscsi+unsubscr...@googlegroups.com.
> For more options, visit this group at 
> http://groups.google.com/group/open-iscsi?hl=en.
> 

-- 
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To post to this group, send email to open-is...@googlegroups.com.
To unsubscribe from this group, send email to 
open-iscsi+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/open-iscsi?hl=en.



Re: iscsi performance via 10 Gig

2010-05-26 Thread Pasi Kärkkäinen
On Wed, May 26, 2010 at 10:32:58AM -0700, Taylor wrote:
> I'm curious what kind of performance numbers people can get from their
> iscsi setup, specifically via 10 Gig.
> 
> We are running with Linux servers connected to Dell Equallogic 10 Gig
> arrays on Suse.
> 
> Recently we were running under SLES 11, and with multipath were seeing
> about 2.5 Gig per NIC, or 5.0 Gbit/sec total IO throughput, but we
> were getting a large number of iscsi connection errors.  We are using
> 10 Gig NICs with jumbo frames.
> 
> We reimaged the server to OpenSuse, same hardware and configs
> otherwise, and since then we are getting about half, or 1.2 to 1.3
> Gbit per NIC, or 2.5 to 3.0 Gbit total IO throughput, but we've not
> had any iscsi connection errors.
> 
> What are other people seeing?  Doesn't need to be an equallogic, just
> any 10 Gig connection to an iscsi array and single host throughput
> numbers.
> 

What's your Equallogic model? How many disks, and how fast (rpm) are they?

How are you measuring the performance?
Are you interested in the sequential throughput using large blocks,
or in random IO using small blocks (max IOPS)?
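
(Roughly what I mean by the two cases -- device name and runtimes below are just
examples, not from your setup:

# sequential throughput, large blocks:
dd if=/dev/sdX of=/dev/null bs=1024k count=8192 iflag=direct

# random small-block IOPS, e.g. with fio:
fio --name=randread --filename=/dev/sdX --ioengine=libaio --direct=1 \
    --rw=randread --bs=4k --iodepth=32 --runtime=60 --time_based
)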

-- Pasi

-- 
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To post to this group, send email to open-is...@googlegroups.com.
To unsubscribe from this group, send email to 
open-iscsi+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/open-iscsi?hl=en.



Re: information on the config option -- node.session.iscsi.FastAbort = No

2010-05-02 Thread Pasi Kärkkäinen
On Thu, Apr 29, 2010 at 10:48:20AM -0700, david elsen wrote:
>Mike,
> 
>How can I get the 5.4 and 5.5 kernel fixes and latest iscs-initiator-utils
>for RHEL 5.3 Kernel?
> 
>I would like to change the ISID from initiator while it is trying to
>connect to target. I would like to establish multiple iSCSI session
>between my Initiator and target. I do not see any option for this in the
>Linux initiator on my system.
> 
>I am using RHEL 5.3 with 2.6.18-128.el5 kernel.
> 

"yum update" should do it.

-- Pasi

>Thanks,
>David
> 
> 
>> Date: Wed, 28 Apr 2010 12:48:44 -0500
>> From: micha...@cs.wisc.edu
>> To: open-iscsi@googlegroups.com
>> CC: mforou...@gmail.com
>> Subject: Re: information on the config option --
>node.session.iscsi.FastAbort = No
>>
>> On 04/28/2010 12:43 PM, Mike Christie wrote:
>> > On 04/28/2010 10:40 AM, maguar887 wrote:
>> >> We are currently running open iscsi version 2.0-871 on RHEL 5.3
>> >> (2.6.18-92.1.6.0.2.el5) against a Dell Equallogic iScsi SAN group
>> >> (firmware 4.3.5)
>> >>
>> >
>> > You need to upgrade your kernel. It had a bug with eql targets in the
>> > async logout path. It is fixed in 5.4 and 5.5 kernels.
>> >
>>
>> Upgrade your iscsi-initiator-utils too.
>>
>> --
>> You received this message because you are subscribed to the Google
>Groups "open-iscsi" group.
>> To post to this group, send email to open-is...@googlegroups.com.
>> To unsubscribe from this group, send email to
>open-iscsi+unsubscr...@googlegroups.com.
>> For more options, visit this group at
>http://groups.google.com/group/open-iscsi?hl=en.
>>
> 
>--
> 
>The New Busy is not the old busy. Search, chat and e-mail from your inbox.
>[1]Get started.
> 
>--
>You received this message because you are subscribed to the Google Groups
>"open-iscsi" group.
>To post to this group, send email to open-is...@googlegroups.com.
>To unsubscribe from this group, send email to
>open-iscsi+unsubscr...@googlegroups.com.
>For more options, visit this group at
>http://groups.google.com/group/open-iscsi?hl=en.
> 
> References
> 
>Visible links
>1. 
> http://www.windowslive.com/campaign/thenewbusy?ocid=PID28326::T:WLMTAGL:ON:WL:en-US:WM_HMP:042010_3

-- 
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To post to this group, send email to open-is...@googlegroups.com.
To unsubscribe from this group, send email to 
open-iscsi+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/open-iscsi?hl=en.



Re: Over one million IOPS using software iSCSI and 10 Gbit Ethernet

2010-04-12 Thread Pasi Kärkkäinen
On Fri, Feb 05, 2010 at 02:10:32PM +0300, Vladislav Bolkhovitin wrote:
>
> For me personally it was funny to see how MS presents in the WinHEC  
> presentation  
> (http://download.microsoft.com/download/5/E/6/5E66B27B-988B-4F50-AF3A-C2FF1E62180F/COR-T586_WH08.pptx)
>  
> that they have 1.1GB/s from 4 connections. In the beginning of 2008 I  
> saw a *single* dd pushing data on that rate over a *single* connection  
> from Linux initiator to iSCSI-SCST target using regular Myricom hardware  
> without any special acceleration. I didn't know how proud I must have  
> been for Linux :).
>

Btw, was this over 10 Gig Ethernet?

Did you have to tweak anything special to achieve this, either on the
initiator or on the target?

-- Pasi

-- 
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To post to this group, send email to open-is...@googlegroups.com.
To unsubscribe from this group, send email to 
open-iscsi+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/open-iscsi?hl=en.



Re: amd64 kernel / i386 userland - no login

2010-04-06 Thread Pasi Kärkkäinen
On Fri, Mar 05, 2010 at 04:05:56PM -0600, Mike Christie wrote:
> On 03/03/2010 01:13 PM, Florian Lohoff wrote:
>>
>> Hi,
>> i reported a bug into the Debian Bug Tracking system that with
>> a 64bit Kernel and a 32bit Userspace the login fails.
>>
>> See here:
>>
>> http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=502845#51
>>
>> This is very reproducable - using a 32 bit kernel or a pure 64 bit
>> userland makes it work immediatly - From Bastians response my guess
>> was that some 64/32 syscall wrapper is missing but my question on
>> how to proceed was not answered so i guess its the right thing to
>> send it here too ..
>>
>
> The iscsi netlink struct is not laid out correctly so on 32 bit user 64  
> bit kernel setups, when it gets passed from userspace to the kernel it  
> gets messed up. We have to redo the interface to fix this. Until then  
> you have to use 32 bit user with 32 bit kernel or 64 bit user with 64  
> bit kernels.
>

Is there any timeframe for this fix?

I was planning to install a 64-bit kernel on a 32-bit RHEL5 system
(yes, not officially supported, I know ;) but this bug kind of makes
it a no-go.. unfortunately.

-- Pasi

-- 
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To post to this group, send email to open-is...@googlegroups.com.
To unsubscribe from this group, send email to 
open-iscsi+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/open-iscsi?hl=en.



Re: iSCSI target recommendation

2010-03-29 Thread Pasi Kärkkäinen
On Sun, Mar 28, 2010 at 08:52:42AM -0400, Joe Landman wrote:
> An Oneironaut wrote:
>> Hey all.  Could anyone suggest a good NAS that has about 2 to 6TB of
>> storage which is under 4k?  its hard to find out whether these people
>> have tested with open-iscsi or not.  So I was hoping some of you out
>> there who had used a storage device within this range would have some
>> opinions.  Please tell me if you have any suggestions.
>
> If you don't mind a vendor reply, have a look at  
> http://scalableinformatics.com/deltav
>
> Not meant as a commercial, so skip/delete if you object to commercial  
> content.  And blame/flame me offline if you do object vociferously.
>

Don't worry, the URL didn't work ;)

404 Not Found Error: No content found at the requested URL

Sorry, no content was found at the requested path - it's possible that you've 
requested this page in error.

-- Pasi

-- 
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To post to this group, send email to open-is...@googlegroups.com.
To unsubscribe from this group, send email to 
open-iscsi+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/open-iscsi?hl=en.



Re: bnx2/bnx2i module

2010-03-10 Thread Pasi Kärkkäinen
On Wed, Mar 10, 2010 at 10:27:57AM -0800, Murray wrote:
> 
> 
> On Mar 10, 12:55 pm, Pasi Kärkkäinen  wrote:
> > On Tue, Mar 09, 2010 at 02:26:51PM -0800, Murray wrote:
> > > We've got a
> >
> > > Broadcom Corporation NetXtreme II BCM5708 Gigabit Ethernet (rev 12)
> > > centos 5.4
> > > iscsi-initiator-utils-6.2.0.871-0.12.el5_4.1
> >
> > > I've been trying without success to make the iscsi mounts stop
> > > disconnecting like clockwork after about 10-24 hours. When it works,
> > > the performance is fantastic, blazing fast. However, we need it to
> > > work for more than 24 hours in a row. Apparently, this may be due to
> > > problems with the bnx and bnx2i driver. We're looking at buying a new
> > > network card that will work with iscsi reliably. At this point, the
> > > thought is to just change the hardware. The Broadcom support seems
> > > dubious at best.
> >
> > > Would anyone in the iscsi community recommend a network card and a
> > > module that they know works reliably with centos5.x?
> >
> > 1. Does removing bnx2i help? ie. try using the plain normal tcp transport 
> > without broadcom hw accel.
> >
> > 2. if the above helps, then you could try updating to the latest RHEL 5.5 
> > beta kernel + iscsi-utils,
> >    those have bugfixes for bnx2i.
> >
> > -- Pasi
> 
> While modinfo reports that bnx2i is the iscsi driver for this card, /
> var/log/messages reports otherwise.
> What seems to happen is that the system attempts to load the bnx2i
> module, fails and then falls back to bnx2. This was when I had forced
> eth1 to use bnx2i via /etc/modprobe.conf :
> 

I don't think you should add bnx2i in modprobe.conf for ethX..

> Mar  8 17:51:37 ndsfdh1 kernel: iscsi: registered transport (bnx2i)
> Mar  8 17:52:48 ndsfdh1 kernel: bnx2: eth1: using MSI
> Mar  8 17:52:48 ndsfdh1 kernel: bnx2i: iSCSI not supported, dev=eth1
> Mar  8 17:52:50 ndsfdh1 kernel: bnx2i: iSCSI not supported, dev=eth1
> Mar  8 17:52:51 ndsfdh1 kernel: bnx2: eth1 NIC Copper Link is Up, 1000
> Mbps full
> 
> So, from what I can tell it uses the bnx2 driver. Disconnects follow
> within hours which require the raid array itself to be rebooted before
> another successful iscsi login can be made. Is that what you mean by
> "using plain normal tcp transport without broadcom hw accel"?
> Admittedly, I am not sure how to prove I am using "plain normal tcp
> transport". Or should I completely remove bnx2i somehow? I don't see
> the module loaded, but the disconnects happen anyway.
> 

bnx2 is the Ethernet NIC driver, while bnx2i is an additional driver
that provides iSCSI hardware offload when used with open-iscsi.

If there's no bnx2i module loaded, then you're using the normal/plain
tcp transport for iSCSI.

> My colleague suggested just getting another card and trying something
> known to work, perhaps cheaper than trying to get the broadcom stuff
> to work.
> 
> It's not clear to me the relationship between the two modules. Are
> they dependent or should they be loaded independently or just one and
> not the other? Can/should one run bnx2i on one interface and bnx2 on
> the other?
> 

With iscsiadm you can see which transport (tcp or bnx2i)
you're using for your iSCSI sessions.
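
For example (the address and target name below are just placeholders; the part
before the colon is the transport):

# iscsiadm -m session
tcp: [1] 10.0.0.10:3260,1 iqn.2010-03.example:raidarray-lun0

An offloaded session would show up as "bnx2i:" instead of "tcp:" there.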

-- Pasi

-- 
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To post to this group, send email to open-is...@googlegroups.com.
To unsubscribe from this group, send email to 
open-iscsi+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/open-iscsi?hl=en.



Re: bnx2/bnx2i module

2010-03-10 Thread Pasi Kärkkäinen
On Tue, Mar 09, 2010 at 02:26:51PM -0800, Murray wrote:
> We've got a
> 
> Broadcom Corporation NetXtreme II BCM5708 Gigabit Ethernet (rev 12)
> centos 5.4
> iscsi-initiator-utils-6.2.0.871-0.12.el5_4.1
> 
> I've been trying without success to make the iscsi mounts stop
> disconnecting like clockwork after about 10-24 hours. When it works,
> the performance is fantastic, blazing fast. However, we need it to
> work for more than 24 hours in a row. Apparently, this may be due to
> problems with the bnx and bnx2i driver. We're looking at buying a new
> network card that will work with iscsi reliably. At this point, the
> thought is to just change the hardware. The Broadcom support seems
> dubious at best.
> 
> Would anyone in the iscsi community recommend a network card and a
> module that they know works reliably with centos5.x?
> 

1. Does removing bnx2i help? I.e. try using the plain/normal tcp transport
without the Broadcom hw accel.

2. If the above helps, then you could try updating to the latest RHEL 5.5 beta
kernel + iscsi-initiator-utils; those have bugfixes for bnx2i.

-- Pasi

-- 
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To post to this group, send email to open-is...@googlegroups.com.
To unsubscribe from this group, send email to 
open-iscsi+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/open-iscsi?hl=en.



Re: iscsiadm and bonding

2010-03-09 Thread Pasi Kärkkäinen
On Tue, Mar 09, 2010 at 07:49:00AM -0500, Hoot, Joseph wrote:
> I had a similar issue, just not using bonding.  The gist of my problem was 
> that, 
> when connecting a physical network card to a bridge, iscsiadm will not login 
> through 
> that bridge (at least in my experience).  I could discover just fine, but 
> wasn't ever able to login.  
> 

This sounds like a configuration problem to me. 

Did you remove the IP addresses from eth0/eth1, and make sure only bond0 has 
the IP? 
Was the routing table correct? 

As long as the kernel routing table is correct, open-iscsi shouldn't care which
interface you're using.

(Unless you bind the open-iscsi iface to some physical interface).
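
Roughly what I'd expect the EL5-style bonding config to look like (interface
names and addresses borrowed from the earlier mail in this thread; everything
else is just a sketch, adjust to your setup):

/etc/sysconfig/network-scripts/ifcfg-bond0:
DEVICE=bond0
IPADDR=192.168.123.178
NETMASK=255.255.255.0
ONBOOT=yes
BONDING_OPTS="mode=4 miimon=100"

/etc/sysconfig/network-scripts/ifcfg-eth1 (and the same for eth2):
DEVICE=eth1
MASTER=bond0
SLAVE=yes
ONBOOT=yes
(no IPADDR on the slaves)

Then check "ip route" to make sure the route towards the target goes via bond0.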


-- Pasi


> I am no longer attempting (at least for the moment because of time) to get it 
> working this way, but I would love to change our environment in the future if 
> a scenario such as this would work, because it gives me the flexibility to 
> pass a virtual network card through to the guest and allow the guest to 
> initiate its own iSCSI traffic instead of me doing it all at the dom0 level 
> and then passing those block devices through.
> 
> I've attached a network diagram that explains my situation.  The goal is to 
> give the administrator flexibility to have fiber or iSCSI storage at the xen 
> dom0 as well as being able to pass-through that storage to the guest and 
> allow the guest to initiate iSCSI sessions.  This gives the guest the 
> flexibility to be able to run snapshot-type commands and things using the 
> EqualLogic HIT kits.
> 
> -- 
> You received this message because you are subscribed to the Google Groups 
> "open-iscsi" group.
> To post to this group, send email to open-is...@googlegroups.com.
> To unsubscribe from this group, send email to 
> open-iscsi+unsubscr...@googlegroups.com.
> For more options, visit this group at 
> http://groups.google.com/group/open-iscsi?hl=en.
> 

Content-Description: ATT1.txt
> 
> Thanks,
> Joe
> 
> ===
> Joseph R. Hoot
> Lead System Programmer/Analyst
> joe.h...@itec.suny.edu
> GPG KEY:   7145F633
> ===
> 
> On Mar 9, 2010, at 5:19 AM, aclhkaclhk aclhkaclhk wrote:
> 
> > thanks, i could discover using bond0 but could not login.
> > 
> > iscsi-target (openfiler): eth0 (192.168.123.174)
> > iscsi-initiator: eth0 (192.168.123.176), bond0-mode4 (192.168.123.178)
> > 
> > /var/lib/iscsi/ifaces/iface0
> > # BEGIN RECORD 2.0-871
> > iface.iscsi_ifacename = iface0
> > iface.net_ifacename = bond0
> > iface.transport_name = tcp
> > # END RECORD
> > 
> > [r...@pc176 ifaces]# iscsiadm -m discovery -t st -p 192.168.123.174
> > --
> > interface iface0
> > 192.168.123.174:3260,1 192.168.123.174-vg0drbd-iscsi0
> > [r...@pc176 ifaces]#  iscsiadm --mode node --targetname
> > 192.168.123.174-vg0drbd-iscsi0 --portal 192.168.123.174:3260 --login
> > --
> > interface iface0
> > Logging in to [iface: iface0, target: 192.168.123.174-vg0drbd-iscsi0,
> > portal: 192.168.123.174,3260]
> > iscsiadm: Could not login to [iface: iface0, target: 192.168.123.174-
> > vg0drbd-iscsi0, portal: 192.168.123.174,3260]:
> > iscsiadm: initiator reported error (8 - connection timed out)
> > 
> > i could use eth0 to discover and login
> > 
> > ping 192.168.123.174 from iscsi-initiator is ok
> > ping 192.168.123.178 from iscsi-target is ok
> > 
> > 192.168.123.178 is authorised in iscsi-target to login
> > 
> > 
> > On Mar 8, 6:30 pm, Pasi K?rkk?inen  wrote:
> >> On Mon, Mar 08, 2010 at 02:08:43AM -0800, aclhkaclhk aclhkaclhk wrote:
> >>> my server has eth0 (onboard), eth1 and eth2 (intel lan card). eth1 and
> >>> eth2 are bonded as bond0
> >> 
> >>> i want to login iscsi target using bond0 instead of eth0.
> >> 
> >>> iscsiadm --mode node --targetname 192.168.123.1-vg0drbd-iscsi0 --
> >>> portal 192.168.123.1:3260 --login --interface bond0
> >>> iscsiadm: Could not read iface info for bond0.
> >> 
> >>> with interface, eth0 is used.
> >> 
> >>> the bonding was setup correctly. it could be used by xen.
> >> 
> >> You need to create the open-iscsi 'iface0' first, and bind it to 'bond0'.
> >> Then you can login using 'iface0' interface.
> >> 
> >> Like this (on the fly):
> >> 
> >> # iscsiadm -m iface -I iface0 -o new
> >> New interface iface0 added
> >> 
> >> # iscsiadm -m iface -I iface0 --op=update -n iface.net_ifacename -v bond0
> >> iface0 updated.
> >> 
> >> You can set up this permanently in /var/lib/iscsi/ifaces/ directory,
> >> by creating a file called 'iface0' with this content:
> >> 
> >> iface.iscsi_ifacename = iface0
> >> iface.transport_name = tcp
> >> iface.net_ifacename = bond0
> >> 
> >> Hopefully that helps.
> >> 
> >> -- Pasi
> > 
> > -- 
> > You received this message because you are subscribed to the Google Groups 
> > "open-iscsi" group.
> > To post to this group, send email to open-is...@googlegroups.com.
> > To unsubscribe from this group, send email to 
> > open-iscsi+unsubscr...@googlegroups.com.
> > For more options, visit this gro

Re: Failover time of iSCSI multipath devices.

2010-03-08 Thread Pasi Kärkkäinen
On Mon, Mar 08, 2010 at 02:07:14PM -0600, Mike Christie wrote:
> On 03/07/2010 07:46 AM, Pasi Kärkkäinen wrote:
>> On Fri, Mar 05, 2010 at 05:07:53AM -0600, Mike Christie wrote:
>>> On 03/01/2010 08:53 PM, Mike Christie wrote:
>>>> On 03/01/2010 12:06 PM, bet wrote:
>>>>> 1. Based on my timeouts I would think that my session would time out
>>>>
>>>> Yes. It should timeout about 15 secs after you see
>>>>   >  Mar 1 07:14:27 bentCluster-1 kernel: connection4:0: ping timeout of
>>>>   >  5 secs expired, recv timeout 5, last rx 4884304, last ping 4889304,
>>>>   >  now 4894304
>>>>
>>>> You might be hitting a bug where the network layer gets stuck trying to
>>>> send data. I attached a patch that should fix the problem.
>>>>
>>>
>>> It looks like we have two bugs.
>>>
>>> 1. We can get stuck in the network code.
>>> 2. There is a race where the session->state can get reset due to the
>>> xmit thread throwing an error after we have set the session->state but
>>> before we have set the stop_stage.
>>>
>>> The attached patch for RHEL 5.5 should fix them all.
>>>
>>
>> Hello,
>>
>> Will this patch be in the next RHEL 5.5 beta kernel? Easier to test if 
>> there's
>> no need to build custom kernel :)
>>
>
> I am not sure if it will be in the next 5.5 beta. It should be in 5.5  
> though. Do you have a bugzilla account? I made this bugzilla
> https://bugzilla.redhat.com/show_bug.cgi?id=570681
> You can add yourself to it and when the patch is merged you will get a  
> notification and a link to a test kernel.
>
> If you do not have a bugzilla account, just let me know and I will ping  
> you when it is available in a test kernel.
>

I just added myself to the bug. Thanks!

-- Pasi

-- 
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To post to this group, send email to open-is...@googlegroups.com.
To unsubscribe from this group, send email to 
open-iscsi+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/open-iscsi?hl=en.



Re: iscsiadm and bonding

2010-03-08 Thread Pasi Kärkkäinen
On Mon, Mar 08, 2010 at 02:08:43AM -0800, aclhkaclhk aclhkaclhk wrote:
> my server has eth0 (onboard), eth1 and eth2 (intel lan card). eth1 and
> eth2 are bonded as bond0
> 
> i want to login iscsi target using bond0 instead of eth0.
> 
> iscsiadm --mode node --targetname 192.168.123.1-vg0drbd-iscsi0 --
> portal 192.168.123.1:3260 --login --interface bond0
> iscsiadm: Could not read iface info for bond0.
> 
> with interface, eth0 is used.
> 
> the bonding was setup correctly. it could be used by xen.
> 

You need to create the open-iscsi 'iface0' first, and bind it to 'bond0'.
Then you can log in using the 'iface0' interface.

Like this (on the fly):

# iscsiadm -m iface -I iface0 -o new
New interface iface0 added

# iscsiadm -m iface -I iface0 --op=update -n iface.net_ifacename -v bond0
iface0 updated.

You can set up this permanently in /var/lib/iscsi/ifaces/ directory, 
by creating a file called 'iface0' with this content:

iface.iscsi_ifacename = iface0 
iface.transport_name = tcp
iface.net_ifacename = bond0
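
And then discovery/login through that iface, with the target/portal values
taken from your mail:

# iscsiadm -m discovery -t st -p 192.168.123.1:3260
# iscsiadm -m node -T 192.168.123.1-vg0drbd-iscsi0 -p 192.168.123.1:3260 -I iface0 --login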

Hopefully that helps.

-- Pasi

-- 
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To post to this group, send email to open-is...@googlegroups.com.
To unsubscribe from this group, send email to 
open-iscsi+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/open-iscsi?hl=en.



Re: Failover time of iSCSI multipath devices.

2010-03-07 Thread Pasi Kärkkäinen
On Fri, Mar 05, 2010 at 05:07:53AM -0600, Mike Christie wrote:
> On 03/01/2010 08:53 PM, Mike Christie wrote:
>> On 03/01/2010 12:06 PM, bet wrote:
>>> 1. Based on my timeouts I would think that my session would time out
>>
>> Yes. It should timeout about 15 secs after you see
>>  > Mar 1 07:14:27 bentCluster-1 kernel: connection4:0: ping timeout of
>>  > 5 secs expired, recv timeout 5, last rx 4884304, last ping 4889304,
>>  > now 4894304
>>
>> You might be hitting a bug where the network layer gets stuck trying to
>> send data. I attached a patch that should fix the problem.
>>
>
> It looks like we have two bugs.
>
> 1. We can get stuck in the network code.
> 2. There is a race where the session->state can get reset due to the  
> xmit thread throwing an error after we have set the session->state but  
> before we have set the stop_stage.
>
> The attached patch for RHEL 5.5 should fix them all.
>

Hello,

Will this patch be in the next RHEL 5.5 beta kernel? Easier to test if there's
no need to build a custom kernel :)

-- Pasi

-- 
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To post to this group, send email to open-is...@googlegroups.com.
To unsubscribe from this group, send email to 
open-iscsi+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/open-iscsi?hl=en.



Re: computer reboot when iscsi ethernet cable is unpluged

2010-03-05 Thread Pasi Kärkkäinen
On Fri, Mar 05, 2010 at 01:05:18AM -0800, Marc Grunberg wrote:
> 
> 
> On 5 mar, 08:50, Pasi Kärkkäinen  wrote:
> 
> > So are you using the open-iscsi/iscsi-initiator-utils from the 
> > distributions,
> > or self-compiled from open-iscsi.org ?
> >
> > Are you using the open-iscsi kernel modules provided by the distribution 
> > default kernels?
> 
> Yes I use the provided kernel and open-iscsi from debian 5 and CentOS
> 5.4 :/

Ok. Mike should know if there are known bugs like this in the EL5 open-iscsi 
drivers..

> Do you think I should compile my own kernel with latest code from open-
> iscsi web sites ?
> 

Well you can always try them.. make sure you replace both the kernel module
and tools.

> Could it be realted to x86-64 bit mode ? I have not tested this
> behavior on 32 bits distributions.
> 

No, I don't think it's x86_64 causing problems.

> I will try first to enable debug mode to catch some error log ... if
> any.
> 

Yeah.. serial console should help you and allow you to capture the error 
messages.

> Just for information ... without ethernet cable link unpluged, iscsi
> disks work very well !
> 

Yep.

-- Pasi


-- 
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To post to this group, send email to open-is...@googlegroups.com.
To unsubscribe from this group, send email to 
open-iscsi+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/open-iscsi?hl=en.



Re: computer reboot when iscsi ethernet cable is unpluged

2010-03-04 Thread Pasi Kärkkäinen
On Thu, Mar 04, 2010 at 08:42:36AM -0800, Marc Grunberg wrote:
> Hi
> 
> I am a newcomer on this group and  I just own an 2xEquallogic PS6000
> that I use on linux.
> 
> I am facing strange unwanted computer reboot using iscsi with this
> Equallogic
> 
> This problem is easily reproducible on different hardware/software
> combination.
> the first computer is Debian 5/Lenny x86-64 with a Broadcom
> Corporation NetXtreme II BCM5708 Gigabit Ethernet
> the second one is Centos 5.4 x86-64 with Intel Corporation 82546GB
> Gigabit Ethernet Controller (e1000 module)
> booth are running up to date resease of  open-iscsi/iscsi-initiator-
> utils.
> 

So are you using the open-iscsi/iscsi-initiator-utils from the distributions,
or self-compiled from open-iscsi.org ?

Are you using the open-iscsi kernel modules provided by the distribution 
default kernels? 

> To make this strange behavior happens  Equalogic disks have to be
> mounted  first.
> Then the reboot comes when I unplug the computer's ethernet iscsi
> cable from the switch. I have not made
> exhaustive test  but the disconnection is around  1 minute then
> computers reboot (like reset)  without shutting down and without any
> log message  !
> 
> Does anybody have similar problems ? Any clue to overcome this ?
> 

Please set up a serial console so you're able to log the crash/error messages.

Anyway, this sounds like a really weird problem.
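
(If it helps: roughly how I'd enable a serial console on EL5/Debian -- ttyS0 and
the baud rate are assumptions about your hardware:

append to the kernel line in /boot/grub/grub.conf (or menu.lst):
console=tty0 console=ttyS0,115200n8

and capture the output on another box over a null-modem cable, or via the
BMC/serial-over-LAN if the server has one.)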

-- Pasi

-- 
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To post to this group, send email to open-is...@googlegroups.com.
To unsubscribe from this group, send email to 
open-iscsi+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/open-iscsi?hl=en.



Re: Can I tell if my iSCSI is already mounted somewhere else?

2010-02-25 Thread Pasi Kärkkäinen
On Thu, Feb 25, 2010 at 03:26:57PM +0100, Jörg Delker wrote:
> Hi Pasi,
> 
> could you please explain or give a hint on how to do that - limiting the
> target to a single initiator?
> 
> My impression was, that this isn't possible with open-iscsi !?
> 

You can't do that with open-iscsi. open-iscsi is an iSCSI initiator (client),
not iSCSI target (server).

You need to configure that in your iSCSI target (=storage server).

-- Pasi

> 
> Am 25.02.10 11:27 schrieb "Pasi Kärkkäinen" unter :
> 
> > On Wed, Feb 24, 2010 at 10:54:57PM +0200, Pasi Kärkkäinen wrote:
> >> On Wed, Feb 24, 2010 at 12:31:30PM -0800, guymatz wrote:
> >>> Yeah, again, thanks, but that doesn't help me check to see if the LUN
> >>> is already mounted on another server, and which one.
> >>> 
> >> 
> >> You can only see that from the iSCSI target.
> >> 
> >> Of if your application or filesystem writes some data about itself 
> >> (including
> >> the node name)
> >> to the LUN, you could read and check that.
> >> 
> > 
> > Or you could configure your iSCSI target to only allow one initiator
> > to attach to the LUN at a time..
> > 
> > -- Pasi


-- 
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To post to this group, send email to open-is...@googlegroups.com.
To unsubscribe from this group, send email to 
open-iscsi+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/open-iscsi?hl=en.



Re: Can I tell if my iSCSI is already mounted somewhere else?

2010-02-25 Thread Pasi Kärkkäinen
On Wed, Feb 24, 2010 at 10:54:57PM +0200, Pasi Kärkkäinen wrote:
> On Wed, Feb 24, 2010 at 12:31:30PM -0800, guymatz wrote:
> > Yeah, again, thanks, but that doesn't help me check to see if the LUN
> > is already mounted on another server, and which one.
> > 
> 
> You can only see that from the iSCSI target.
> 
> Of if your application or filesystem writes some data about itself (including 
> the node name)
> to the LUN, you could read and check that.
> 

Or you could configure your iSCSI target to only allow one initiator 
to attach to the LUN at a time..
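
For example, if the target happens to be Linux tgt, restricting a target to a
single initiator is roughly this (tid and address are placeholders):

# tgtadm --lld iscsi --mode target --op bind --tid 1 --initiator-address 10.0.0.1

i.e. bind only the one initiator address you want, instead of binding "ALL".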

-- Pasi

-- 
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To post to this group, send email to open-is...@googlegroups.com.
To unsubscribe from this group, send email to 
open-iscsi+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/open-iscsi?hl=en.



Re: Can I tell if my iSCSI is already mounted somewhere else?

2010-02-24 Thread Pasi Kärkkäinen
On Wed, Feb 24, 2010 at 12:31:30PM -0800, guymatz wrote:
> Yeah, again, thanks, but that doesn't help me check to see if the LUN
> is already mounted on another server, and which one.
> 

You can only see that from the iSCSI target.

Or if your application or filesystem writes some data about itself (including
the node name) to the LUN, you could read and check that.

-- Pasi

-- 
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To post to this group, send email to open-is...@googlegroups.com.
To unsubscribe from this group, send email to 
open-iscsi+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/open-iscsi?hl=en.



Re: Over one million IOPS using software iSCSI and 10 Gbit Ethernet

2010-02-23 Thread Pasi Kärkkäinen
On Mon, Feb 22, 2010 at 02:03:59PM -0800, ByteEnable wrote:
> This was achieved by using cache.
>

What cache? Obviously the iSCSI targets were ramdisks, but what do you
mean by caching?

-- Pasi
 
> On Jan 28, 6:36 am, Pasi Kärkkäinen  wrote:
> > Hello list,
> >
> > Please check these news 
> > items:http://blog.fosketts.net/2010/01/14/microsoft-intel-push-million-iscs...http://communities.intel.com/community/openportit/server/blog/2010/01...http://www.infostor.com/index/blogs_new/dave_simpson_storage/blogs/in...
> >
> > "1,030,000 IOPS over a single 10 Gb Ethernet link"
> >
> > "Specifically, Intel and Microsoft clocked 1,030,000 IOPS (with 512-byte 
> > blocks),
> > and more than 2,250MBps with large block sizes (16KB to 256KB) using the 
> > Iometer benchmark"
> >
> > So.. who wants to beat that using Linux + open-iscsi? :)
> >
> > -- Pasi
> 
> -- 
> You received this message because you are subscribed to the Google Groups 
> "open-iscsi" group.
> To post to this group, send email to open-is...@googlegroups.com.
> To unsubscribe from this group, send email to 
> open-iscsi+unsubscr...@googlegroups.com.
> For more options, visit this group at 
> http://groups.google.com/group/open-iscsi?hl=en.
> 

-- 
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To post to this group, send email to open-is...@googlegroups.com.
To unsubscribe from this group, send email to 
open-iscsi+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/open-iscsi?hl=en.



Re: Over one million IOPS using software iSCSI and 10 Gbit Ethernet

2010-02-08 Thread Pasi Kärkkäinen
On Fri, Feb 05, 2010 at 02:10:32PM +0300, Vladislav Bolkhovitin wrote:
> Pasi Kärkkäinen, on 01/28/2010 03:36 PM wrote:
>> Hello list,
>>
>> Please check these news items:
>> http://blog.fosketts.net/2010/01/14/microsoft-intel-push-million-iscsi-iops/
>> http://communities.intel.com/community/openportit/server/blog/2010/01/19/100-iops-with-iscsi--thats-not-a-typo
>> http://www.infostor.com/index/blogs_new/dave_simpson_storage/blogs/infostor/dave_simpon_storage/post987_37501094375591341.html
>>
>> "1,030,000 IOPS over a single 10 Gb Ethernet link"
>>
>> "Specifically, Intel and Microsoft clocked 1,030,000 IOPS (with 
>> 512-byte blocks), and more than 2,250MBps with large block sizes (16KB 
>> to 256KB) using the Iometer benchmark"
>>
>> So.. who wants to beat that using Linux + open-iscsi? :)
>
> I personally, don't like such tests and don't trust them at all. They  
> are pure marketing. The only goal of them is to create impression that X  
> (Microsoft and Windows in this case) is a super-puper ahead of the  
> world. I've seen on the Web a good article about usual tricks used by  
> vendors to cheat benchmarks to get good marketing material, but,  
> unfortunately, can't find link on it at the moment.
>
> The problem is that you can't say from such tests if X will also "ahead  
> of the world" on real life usages, because such tests always heavily  
> optimized for particular used benchmarks and such optimizations almost  
> always hurt real life cases. And you hardly find descriptions of those  
> optimizations as well as a scientific description of the tests themself.  
> The results published practically only in marketing documents.
>
> Anyway, as far as I can see Linux supports all the used hardware as well  
> as all advance performance modes of it, so if one repeats this test in  
> the same setup, he/she should get not worse results.
>
> For me personally it was funny to see how MS presents in the WinHEC  
> presentation  
> (http://download.microsoft.com/download/5/E/6/5E66B27B-988B-4F50-AF3A-C2FF1E62180F/COR-T586_WH08.pptx)
>  
> that they have 1.1GB/s from 4 connections. In the beginning of 2008 I  
> saw a *single* dd pushing data on that rate over a *single* connection  
> from Linux initiator to iSCSI-SCST target using regular Myricom hardware  
> without any special acceleration. I didn't know how proud I must have  
> been for Linux :).
>

Hehe, congrats :)

Did you ever benchmark/measure what kind of IOPS numbers you can get? 

-- Pasi

-- 
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To post to this group, send email to open-is...@googlegroups.com.
To unsubscribe from this group, send email to 
open-iscsi+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/open-iscsi?hl=en.



Re: Over one million IOPS using software iSCSI and 10 Gbit Ethernet

2010-02-01 Thread Pasi Kärkkäinen
On Mon, Feb 01, 2010 at 11:33:53AM +0200, Pasi Kärkkäinen wrote:
> On Thu, Jan 28, 2010 at 01:44:00PM -0500, Joe Landman wrote:
> > Pasi Kärkkäinen wrote:
> >
> >> I think SFP+ 10 Gbit has 0.6usec latency.. ? 10GBase-T is 2.6 usec.
> >
> >
> > http://www.mellanox.com/pdf/whitepapers/wp_mellanox_en_Arista.pdf
> >
> 
> This one is from 2008..
> 
> > They are reporting 7+ us latency.  ConnectX are a bit better on latency  
> > than the Intel NICs.
> >
> > http://www.ednasia.com/article-24923-solarflarearistanetworkspublishtestreportdemonstratinglowlatencywith10gbe-Asia.html
> >
> > shows latency, server to server of ~5us.  This is about what you expect  
> > (and why 10GbE isn't quite to IB capability yet in low latency apps).
> >
> 
> Yeah.. well, I don't seem to be able to find better data :)
> 

Marketing stuff again:
http://www.aristanetworks.com/media/system/pdf/CloudNetworkLatency.pdf

"Arista Cut-through Design: Intra-Rack Latency: 0.6us, Inter-Rack Latency: 
2.4us".

> > I suspect that they really aren't seeing ~1us latencies, but that with  
> > some neat tricks, it appears to be this.
> >
> > Physically, it isn't there.  I'd classify this as a "marketing number"  
> > (e.g. unachievable in applications that matter).  I'd be happy to be  
> > proven wrong, but there really isn't much to suggest that I am wrong.
> >
>

Thinking about this again.. how could they cheat? That >1 million IOPS was the
Iometer-reported benchmark result?

Of course they used ramdisks on the iSCSI targets etc..
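
(For anyone wanting to try to reproduce that on Linux, a quick RAM-backed
target with tgt would look roughly like this -- sizes, tid and IQN are just
placeholders:

# mount -t tmpfs -o size=1100m tmpfs /mnt/ram
# dd if=/dev/zero of=/mnt/ram/disk0 bs=1M count=1024
# tgtadm --lld iscsi --mode target --op new --tid 1 -T iqn.2010-02.example:ramdisk
# tgtadm --lld iscsi --mode logicalunit --op new --tid 1 --lun 1 -b /mnt/ram/disk0
# tgtadm --lld iscsi --mode target --op bind --tid 1 -I ALL
)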

-- Pasi

-- 
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To post to this group, send email to open-is...@googlegroups.com.
To unsubscribe from this group, send email to 
open-iscsi+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/open-iscsi?hl=en.



Re: Over one million IOPS using software iSCSI and 10 Gbit Ethernet

2010-02-01 Thread Pasi Kärkkäinen
On Thu, Jan 28, 2010 at 01:44:00PM -0500, Joe Landman wrote:
> Pasi Kärkkäinen wrote:
>
>> I think SFP+ 10 Gbit has 0.6usec latency.. ? 10GBase-T is 2.6 usec.
>
>
> http://www.mellanox.com/pdf/whitepapers/wp_mellanox_en_Arista.pdf
>

This one is from 2008..

> They are reporting 7+ us latency.  ConnectX are a bit better on latency  
> than the Intel NICs.
>
> http://www.ednasia.com/article-24923-solarflarearistanetworkspublishtestreportdemonstratinglowlatencywith10gbe-Asia.html
>
> shows latency, server to server of ~5us.  This is about what you expect  
> (and why 10GbE isn't quite to IB capability yet in low latency apps).
>

Yeah.. well, I don't seem to be able to find better data :)

> I suspect that they really aren't seeing ~1us latencies, but that with  
> some neat tricks, it appears to be this.
>
> Physically, it isn't there.  I'd classify this as a "marketing number"  
> (e.g. unachievable in applications that matter).  I'd be happy to be  
> proven wrong, but there really isn't much to suggest that I am wrong.
>

I hope I get to test some 10 Gbit Ethernet equipment soon.. let's see 
what I can get myself.

-- Pasi

-- 
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To post to this group, send email to open-is...@googlegroups.com.
To unsubscribe from this group, send email to 
open-iscsi+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/open-iscsi?hl=en.



Re: Over one million IOPS using software iSCSI and 10 Gbit Ethernet

2010-01-28 Thread Pasi Kärkkäinen
On Thu, Jan 28, 2010 at 07:38:28PM +0100, Bart Van Assche wrote:
> On Thu, Jan 28, 2010 at 4:01 PM, Joe Landman
>  wrote:
> > Pasi Kärkkäinen wrote:
> >>
> >> Please check these news items:
> >>
> >> http://blog.fosketts.net/2010/01/14/microsoft-intel-push-million-iscsi-iops/
> >>
> >> http://communities.intel.com/community/openportit/server/blog/2010/01/19/100-iops-with-iscsi--thats-not-a-typo
> >>
> >> http://www.infostor.com/index/blogs_new/dave_simpson_storage/blogs/infostor/dave_simpon_storage/post987_37501094375591341.html
> >>
> >> "1,030,000 IOPS over a single 10 Gb Ethernet link"
> >
> > This is less than 1us per IOP.  Interesting.  Their hardware may not
> > actually support this.  10GbE typically is 7-10us, though ConnectX and some
> > others get down to 2ish.
> 
> Which I/O depth has been used in the test ? Latency matters most with
> an I/O depth of one and is almost irrelevant for high I/O depth
> values.
> 

IIRC the number of outstanding I/Os was 20 in that benchmark.

-- Pasi

-- 
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To post to this group, send email to open-is...@googlegroups.com.
To unsubscribe from this group, send email to 
open-iscsi+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/open-iscsi?hl=en.



Re: Over one million IOPS using software iSCSI and 10 Gbit Ethernet

2010-01-28 Thread Pasi Kärkkäinen
On Thu, Jan 28, 2010 at 10:01:39AM -0500, Joe Landman wrote:
> Pasi Kärkkäinen wrote:
>> Hello list,
>>
>> Please check these news items:
>> http://blog.fosketts.net/2010/01/14/microsoft-intel-push-million-iscsi-iops/
>> http://communities.intel.com/community/openportit/server/blog/2010/01/19/100-iops-with-iscsi--thats-not-a-typo
>> http://www.infostor.com/index/blogs_new/dave_simpson_storage/blogs/infostor/dave_simpon_storage/post987_37501094375591341.html
>>
>> "1,030,000 IOPS over a single 10 Gb Ethernet link"
>
> This is less than 1us per IOP.  Interesting.  Their hardware may not  
> actually support this.  10GbE typically is 7-10us, though ConnectX and  
> some others get down to 2ish.
>

I think SFP+ 10 Gbit has 0.6usec latency.. ? 10GBase-T is 2.6 usec.

-- Pasi

-- 
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To post to this group, send email to open-is...@googlegroups.com.
To unsubscribe from this group, send email to 
open-iscsi+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/open-iscsi?hl=en.



Re: Over one million IOPS using software iSCSI and 10 Gbit Ethernet

2010-01-28 Thread Pasi Kärkkäinen
On Thu, Jan 28, 2010 at 05:35:25PM +0100, Bart Van Assche wrote:
>On Thu, Jan 28, 2010 at 1:36 PM, Pasi Kärkkäinen <[1]pa...@iki.fi> wrote:
> 
>  Please check these news items:
>  
> [2]http://blog.fosketts.net/2010/01/14/microsoft-intel-push-million-iscsi-iops/
>  
> [3]http://communities.intel.com/community/openportit/server/blog/2010/01/19/100-iops-with-iscsi--thats-not-a-typo
>  
> [4]http://www.infostor.com/index/blogs_new/dave_simpson_storage/blogs/infostor/dave_simpon_storage/post987_37501094375591341.html
> 
>  "1,030,000 IOPS over a single 10 Gb Ethernet link"
> 
>  "Specifically, Intel and Microsoft clocked 1,030,000 IOPS (with 512-byte
>  blocks),
>  and more than 2,250MBps with large block sizes (16KB to 256KB) using the
>  Iometer benchmark"
> 
>  So.. who wants to beat that using Linux + open-iscsi? :)
> 
>A few comments:
>* A throughput of 2250 MB/s over a 10 Gb/s link is only possible when
>running read and write tests simultaneously and when counting the traffic
>that flows in both directions.

Obviously..

>* These results say more about the NIC used than about they say about the
>iSCSI initiator "software" used. A quote from
>
> [5]http://msevents.microsoft.com/CUI/WebCastEventDetails.aspx?culture=en-US&EventID=1032432957&CountryCode=US:
>Topics we discuss include [ ... ] Advanced iSCSI acceleration features in
>Intel Ethernet Server Adapters and how they work with the native iSCSI
>support in Windows Sever 2008 R2.
> 

We were just trying to figure out if they used some "Advanced iSCSI 
acceleration" or not..

Afaik Intel NICs don't really contain much iSCSI-specific acceleration beyond
the usual TCP/IP acceleration/offloading features..

-- Pasi

-- 
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To post to this group, send email to open-is...@googlegroups.com.
To unsubscribe from this group, send email to 
open-iscsi+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/open-iscsi?hl=en.



Re: Over one million IOPS using software iSCSI and 10 Gbit Ethernet

2010-01-28 Thread Pasi Kärkkäinen
On Thu, Jan 28, 2010 at 10:16:24AM -0600, Mike Christie wrote:
> On 01/28/2010 06:36 AM, Pasi Kärkkäinen wrote:
>> Hello list,
>>
>> Please check these news items:
>> http://blog.fosketts.net/2010/01/14/microsoft-intel-push-million-iscsi-iops/
>> http://communities.intel.com/community/openportit/server/blog/2010/01/19/100-iops-with-iscsi--thats-not-a-typo
>> http://www.infostor.com/index/blogs_new/dave_simpson_storage/blogs/infostor/dave_simpon_storage/post987_37501094375591341.html
>>
>> "1,030,000 IOPS over a single 10 Gb Ethernet link"
>>
>> "Specifically, Intel and Microsoft clocked 1,030,000 IOPS (with 512-byte 
>> blocks),
>
> Did it look like they were using ioat? I think for Windows they support  
> iscsi and ioat, right?
>

Hmm.. let's see: 
http://dlbmodigital.microsoft.com/ppt/TN-100114-JSchwartz_SMorgan_JPlawner-1032432956-FINAL.pdf

Performance factors:

- iSCSI initiator perf optimizations
- Network stack optimizations
- Receive Side Scaling (RSS)
- Intel Xeon 5500 QPI and integrated memory controller
- Intel 82599: HW Acceleration, multi-core scaling with RSS, MSI-X

iSCSI and Storage Enhancements in (Windows 2008) R2:

- iSCSI Multi-Core and Numa IO
- DPC redirection
- Dynamic Load Balancing
- Storage IO monitoring
- CRC Digest Offload
- Support for 32 paths at boot time

Intel Xeon 5500 Processor Series, Shatters previous iSCSI performance

Architecture increases I/O Bandwidth and CPU efficiency
- New memory subsystem
- Intel Quickpath Interconnect
- New I/O subsystem w/ PCIe Gen2 and CRC32-C instruction set

Intel Ethernet Adapters, Elements of iSCSI connectivity

..
..

3. Performance
- Transport HW off-loads: Ethernet, TCP/IP and IPSEC
- Multi-Core I/O scaling integrated with Windows Server
- Intel VMDQ: VM mapping off-load with Hyper-V
- Intel Xeon Processor 5500 iSCSI CRC digest instruction set

Intel Ethernet iSCSI Acceleration

1. Largest portion of storage I/O host processing is in the application 
& SCSI layers. 
   No off-load possible.
2. Integrated initiator compute insignificant at run time.
   CPU CRC computation offers maximum data protection. CRC instruction 
off-load.
3. Header created in SW, Segmentation and checksum off-loaded.
   Transport Off-load.
4. I/O mapped to cores via classes and queues.
   Mapping off-load.
5. Transport layer: LRO, LSO, Cksum Rx/Tx, RSS, IPSEC HW Off-load.
   Transport off-load.

So yeah.. no idea if those are IOAT or not.

-- Pasi

-- 
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To post to this group, send email to open-is...@googlegroups.com.
To unsubscribe from this group, send email to 
open-iscsi+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/open-iscsi?hl=en.



Re: Over one million IOPS using software iSCSI and 10 Gbit Ethernet

2010-01-28 Thread Pasi Kärkkäinen
On Thu, Jan 28, 2010 at 02:36:09PM +0200, Pasi Kärkkäinen wrote:
> Hello list,
> 
> Please check these news items:
> http://blog.fosketts.net/2010/01/14/microsoft-intel-push-million-iscsi-iops/
> http://communities.intel.com/community/openportit/server/blog/2010/01/19/100-iops-with-iscsi--thats-not-a-typo
> http://www.infostor.com/index/blogs_new/dave_simpson_storage/blogs/infostor/dave_simpon_storage/post987_37501094375591341.html
> 
> "1,030,000 IOPS over a single 10 Gb Ethernet link"
> 
> "Specifically, Intel and Microsoft clocked 1,030,000 IOPS (with 512-byte 
> blocks), 
> and more than 2,250MBps with large block sizes (16KB to 256KB) using the 
> Iometer benchmark"
> 
> So.. who wants to beat that using Linux + open-iscsi? :)
> 

Some more information about the benchmark, and MS marketing stuff:
http://dlbmodigital.microsoft.com/ppt/TN-100114-JSchwartz_SMorgan_JPlawner-1032432956-FINAL.pdf

And here's earlier benchmark using older hardware, from 03/2009:
http://gestaltit.com/featured/top/stephen/wirespeed-10-gb-iscsi/

-- Pasi

-- 
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To post to this group, send email to open-is...@googlegroups.com.
To unsubscribe from this group, send email to 
open-iscsi+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/open-iscsi?hl=en.



Over one million IOPS using software iSCSI and 10 Gbit Ethernet

2010-01-28 Thread Pasi Kärkkäinen
Hello list,

Please check these news items:
http://blog.fosketts.net/2010/01/14/microsoft-intel-push-million-iscsi-iops/
http://communities.intel.com/community/openportit/server/blog/2010/01/19/100-iops-with-iscsi--thats-not-a-typo
http://www.infostor.com/index/blogs_new/dave_simpson_storage/blogs/infostor/dave_simpon_storage/post987_37501094375591341.html

"1,030,000 IOPS over a single 10 Gb Ethernet link"

"Specifically, Intel and Microsoft clocked 1,030,000 IOPS (with 512-byte 
blocks), 
and more than 2,250MBps with large block sizes (16KB to 256KB) using the 
Iometer benchmark"

So.. who wants to beat that using Linux + open-iscsi? :)

-- Pasi

-- 
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To post to this group, send email to open-is...@googlegroups.com.
To unsubscribe from this group, send email to 
open-iscsi+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/open-iscsi?hl=en.



Re: Can anybody confirm that bnx2i on 5709 cards works withEquallogic 6xxx?

2010-01-25 Thread Pasi Kärkkäinen
On Fri, Jan 22, 2010 at 11:59:12AM -0500, Paul Koning wrote:
> > Hmm.. will this cause problems with Equallogic storage?
> > Equallogic prefers having all the initiators (and targets) in the same
> > IP subnet.
> 
> Not quite.  It supports both single and multi subnet configurations.
> (There are some specific rules for multi subnet configurations which are
> documented.)
> 
> If people don't know of a reason why they should use multi subnet, we
> suggest the use of single subnet because it is easier to set up, but
> it's a stretch to call that a preference.  And in any case, whatever you
> call it, a multi subnet configuration correctly constructed will not
> cause problems.
> 

Ok, maybe 'prefers' was too strong a word :)

And yeah, I knew you can do multi-subnet, but it seems many are using EQL storage
only with a single subnet.

Now let's hope bnx2i works OK with a single subnet as well :)

-- Pasi

-- 
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To post to this group, send email to open-is...@googlegroups.com.
To unsubscribe from this group, send email to 
open-iscsi+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/open-iscsi?hl=en.



Re: Can anybody confirm that bnx2i on 5709 cards works with Equallogic 6xxx?

2010-01-22 Thread Pasi Kärkkäinen
On Fri, Jan 22, 2010 at 12:20:03AM -0800, Benjamin Li wrote:
> Hi Ciprian,
> 
> Thanks for the additional information, there are a couple of notes with
> this offload technique.
> 
> 1.  The route/device CNIC will choose is based off the host routing
> table.  (The CNIC uses the kernel function ip_route_output_key() to
> determine the device to use.  This function can possibly return devices
> with assigned IP addresses but are down'ed)  Could you also provide your
> host routing table along with the IP address used by your network
> devices too?  When looking at the routing table does it look like CNIC
> is choosing the correct device to offload?
> 

Hmm.. will this cause problems with Equallogic storage? 
Equallogic prefers having all the initiators (and targets) in the same IP 
subnet.

ie. you might have multipath configuration like this on the initiator system:

eth0: 10.0.0.1/24
eth1: 10.0.0.2/24

And open-iscsi connects to the same target IP (Equallogic group redirection IP) 
from both interfaces. Target IP might be 10.0.0.10.

This works well with the non-offloaded open-iscsi over tcp. You can specify the
ethernet interface for each open-iscsi ifaceX by using the iface.net_ifacename setting.

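For reference, that binding looks roughly like this (a minimal sketch using the
example addresses above; the iface names are arbitrary):

iscsiadm -m iface -I iface0 --op=new
iscsiadm -m iface -I iface1 --op=new
iscsiadm -m iface -I iface0 --op=update -n iface.net_ifacename -v eth0
iscsiadm -m iface -I iface1 --op=update -n iface.net_ifacename -v eth1
iscsiadm -m discovery -t st -p 10.0.0.10:3260 -I iface0 -I iface1
iscsiadm -m node --loginall=all
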
-- Pasi

-- 
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To post to this group, send email to open-is...@googlegroups.com.
To unsubscribe from this group, send email to 
open-iscsi+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/open-iscsi?hl=en.



Re: Can anybody confirm that bnx2i on 5709 cards works with Equallogic 6xxx?

2010-01-20 Thread Pasi Kärkkäinen
On Wed, Jan 20, 2010 at 07:46:48PM +0100, Ciprian Vizitiu (GBIF) wrote:
> Hi,
>
> Can anybody here please confirm whether iSCSI offload via bnx2i, on RHEL  
> 5.4, with 5709 Broadcoms towards EQLs 6000 series works or not? Despite  
> countless attempts (and latest EQL OS update) I still can't match them  
> (but then the software transport works perfectly). :-|
>

Do you get any kind of errors? Check "dmesg" and /var/log/messages.
(Is there some option to make bnx2i give verbose debug?)

-- Pasi

-- 
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To post to this group, send email to open-is...@googlegroups.com.
To unsubscribe from this group, send email to 
open-iscsi+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/open-iscsi?hl=en.




Re: iscsi offload boot support

2010-01-15 Thread Pasi Kärkkäinen
On Fri, Jan 15, 2010 at 05:43:09AM -0600, Mike Christie wrote:
> On 01/15/2010 03:33 AM, Pasi Kärkkäinen wrote:
> >On Fri, Jan 15, 2010 at 02:41:14AM -0600, Mike Christie wrote:
> >>On 01/15/2010 02:33 AM, Mike Christie wrote:
> >>>Hey,
> >>>
> >>>In the open-iscsi git tree there is now offload boot support. The driver
> >>>must support ibft or OF. This means that bnx2i and cxgb3i are supported.
> >>>If you run
> >>>
> >>>iscsistart -b
> >>>or
> >>>iscsiadm -m fw -l
> >>>
> >>>it will create sessions using the offload card that was used for boot.
> >>>
> >>>
> >>>iscsiadm -m discovery -t fw
> >>>
> >>>This will create ifaces or update existing ones with the ibft/of info
> >>>for the offload card, and it will set things up so the targets in
> >>>ibft/of are setup in the node db to accessed through the card.
> >>>
> >>>However, because I am not sure if any distros can support this and it
> >>>conflicts with the old behavior I turned it off by default. To turn it
> >>>on you need to compile the userspace code with OFFLOAD_BOOT_SUPPORTED.
> >>>
> >>>
> >>>To support this, distros or users that like pain, should add the offlaod
> >>>driver (cxgb3i or bnx2i) to the initramfs, and for bnx2i you need to
> >>>also throw in the brcm or uip (depends on the code version/base) daemon
> >>>in the initramfs and start it before running iscsistart or iscsiadm.
> >>>
> >>
> >>Oh yeah one other note. For bnx2i, you also need to bring up some other
> >>bnx2/bnx2x nic still. This requires a kernel fix. For cxgb3i this is not
> >>needed.
> >
> >Hmm.. can you be more specific? If eth0 has offloaded boot configured,
> >what do you need to do for eth1, because of the bug?
> >
> 
> I should not have said fix above. Maybe I should have said improvement.
> 
> I meant to say bnx2i still has the same start up requirements as it did 
> before, and so you have to do the network startup junk you had to do 
> normally but in the initramfs when booting.
> 
> Have you got to use your broadcom card yet? If not it is good you saw 
> the post first. Basically the card has a iscsi function and a net 
> function. You have to set both up to be able to use the iscsi side.
> 

Not yet unfortunately.. I'll be testing this during the upcoming weeks.

> Let's say you have a broadcom card with one port. The net interface is 
> called eth0. Set that up like you normally would and bring it up with 
> ifup. Then because of how bnx2i is designed the iscsi side and network 
> side sort of work together. It is like this with boot or not. So then 
> when you set up the iscsi iface for eth0, the ip address for the iscsi 
> iface has to be on the same subnet as eth0, and eth0 has to be up when 
> you use the iscsi iface with bnx2i.
>

Thanks for the explanation! It's clear now.

-- Pasi

-- 
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To post to this group, send email to open-is...@googlegroups.com.
To unsubscribe from this group, send email to 
open-iscsi+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/open-iscsi?hl=en.




Re: iscsi offload boot support

2010-01-15 Thread Pasi Kärkkäinen
On Fri, Jan 15, 2010 at 02:41:14AM -0600, Mike Christie wrote:
> On 01/15/2010 02:33 AM, Mike Christie wrote:
> >Hey,
> >
> >In the open-iscsi git tree there is now offload boot support. The driver
> >must support ibft or OF. This means that bnx2i and cxgb3i are supported.
> >If you run
> >
> >iscsistart -b
> >or
> >iscsiadm -m fw -l
> >
> >it will create sessions using the offload card that was used for boot.
> >
> >
> >iscsiadm -m discovery -t fw
> >
> >This will create ifaces or update existing ones with the ibft/of info
> >for the offload card, and it will set things up so the targets in
> >ibft/of are setup in the node db to accessed through the card.
> >
> >However, because I am not sure if any distros can support this and it
> >conflicts with the old behavior I turned it off by default. To turn it
> >on you need to compile the userspace code with OFFLOAD_BOOT_SUPPORTED.
> >
> >
> >To support this, distros or users that like pain, should add the offlaod
> >driver (cxgb3i or bnx2i) to the initramfs, and for bnx2i you need to
> >also throw in the brcm or uip (depends on the code version/base) daemon
> >in the initramfs and start it before running iscsistart or iscsiadm.
> >
> 
> Oh yeah one other note. For bnx2i, you also need to bring up some other 
> bnx2/bnx2x nic still. This requires a kernel fix. For cxgb3i this is not 
> needed.

Hmm.. can you be more specific? If eth0 has offloaded boot configured,
what do you need to do for eth1, because of the bug? 

-- Pasi

-- 
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To post to this group, send email to open-is...@googlegroups.com.
To unsubscribe from this group, send email to 
open-iscsi+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/open-iscsi?hl=en.




Re: Intel NIC iSCSI acceleration in Linux and open-iscsi

2010-01-12 Thread Pasi Kärkkäinen
On Tue, Sep 29, 2009 at 10:48:45PM +0300, Pasi Kärkkäinen wrote:
> 
> On Mon, Sep 28, 2009 at 08:50:57PM -0700, Meenakshi Ramamoorthi wrote:
> >Yes, what details do you require ?
> >
> 
> Well.. what's the status? Is the code available from somewhere? Can I test 
> it? :)
> 
> I haven't seen anything on open-iscsi list.. at least I can't remember.
> 

Hello again,

Any updates/comments about this? 

-- Pasi

-- 
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To post to this group, send email to open-is...@googlegroups.com.
To unsubscribe from this group, send email to 
open-iscsi+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/open-iscsi?hl=en.




Re: iSCSI throughput drops as link rtt increases?

2010-01-07 Thread Pasi Kärkkäinen
On Thu, Jan 07, 2010 at 10:05:57AM -0800, Jack Z wrote:
> Hi Pasi,
> 
> Thanks again for your reply!
> 
> > > > Try to play and experiment with these options:
> >
> > > > -B 64k (blocksize 64k, try also 4k)
> > > > -I BD (block device, direct IO (O_DIRECT))
> > > > -K 16 (16 threads, aka 16 outstanding IOs. -K 1 should be the same as 
> > > > dd)
> >
> > > > Examples:
> >
> > > > Sequential (linear) reads using blocksize 4k and 4 simultaneous 
> > > > threads, for 60 seconds:
> > > > disktest -B 4k -h 1 -I BD -K 4 -p l -P T -T 60 -r /dev/sdX
> >
> > > > Random writes:
> >
> > > > disktest -B 4k -h 1 -I BD -K 4 -p r -P T -T 60 -w /dev/sdX
> >
> > > > 30% random reads, 70% random writes:
> > > > disktest -r -w -D30:70 -K2 -E32 -B 8k -T 60 -pR -Ibd -PA /dev/md4
> >
> > > > Hopefully that helps..
> >
> > > That did help. I tried the following combinations of -B -K and -p at
> > > 20 ms RTT and the other options were -h 30 -I BD -P T -S0:(1 GB size)
> >
> > > -B 4k/64k -K 4/64 -p l
> >
> > > It seems that when I put -p l there the performance goes down
> > > drastically...
> >
> > That's really weird.. linear/sequential (-p l) should always be faster
> > than random.
> >
> > > -B 4k -K 4/64 -p r
> >
> > > The disk throughput is similar to the one I used in the previous post
> > > "disktest -w -S0:1k -B 1024 /dev/sdb " and it's much lower than dd
> > > could get.
> >
> > like said, weird.
> 
> I'll try to repeat more of these tests that yielded weird results.
> I'll let you know if anything new comes up. :)
> 

Yep.


> 
> > > -B 64k -K 4 -p r
> >
> > > The disk throughput is higher than the last one but still not as high
> > > as dd could get.
> >
> > > -B 64k -K 64 -p r
> >
> > > The disk throughput was boosted to 8.06 MB/s and the IOPS was 129.0.
> > > At the link layer, the traffic rate was 70.536 Mbps (the TCP baseline
> > > was 96.202 Mbps). At the same time, dd ( bs=64K count=(1 GB size)) got
> > > a throughput of 6.7 MB/s and the traffic rate on the link layer was
> > > 57.749 Mbps.
> >
> > Ok.
> >
> > 129 IOPS * 64kB = 8256 kB/sec, which pretty much matches the 8 MB/sec
> > you measured.
> >
> > this still means there was only 1 outstanding IO.. and definitely not 64 
> > (-K 64).
> 
> For this part, I did not quite understand... Following your previous
> calculations,
> 
> 1 s = 1000 ms
> 1000 / 129 = 7.75 ms/IO
> 
> And the link RTT is 20 ms.
> 
> 20/7.75 = 2.58 > 2. So there should be at least 2 outstanding IOs...
> Am I corrent...?
> 

That's correct. I was wrong. I was too busy when replying to you :)

> And for the 64 outstanding IOs, I'll try more experiments and see why
> that is not happening.
> 

It could be because of the IO elevator/scheduler.. see below.

> 
> > > Although not much, it was still an improvement and it was the first
> > > improvement I have ever seen since I started my experiments! Thank you
> > > very much!
> >
> > > As for
> >
> > > > Oh, also make sure you have 'oflag=direct' for dd.
> >
> > > The result was surprisingly low again... Do you think the reason might
> > > be that I was running dd on a device file (/dev/sdb), which did not
> > > have any partitions/file systems on it?
> >
> > > Thanks a lot!
> >
> > oflag=direct makes dd use O_DIRECT, aka bypass all kernel/initiator caches 
> > for writing.
> > iflag=direct would bypass all caches for reading.
> >
> > It shouldn't matter if you write or read from /dev/sda1 instead of /dev/sda.
> > As long as it's a raw block device, it shouldn't matter.
> > If you write/read to/from a filesystem, that obviously matters.
> >
> > What kind of target you are using for this benchmark?
> 
> It is the iSCSI Enterprise Target, which came with ubuntu 9.04.
> (iscsitarget (0.4.16+svn162-3ubuntu1)).
> 
> Thank you very much!
> 

Make sure you use 'deadline' elevator on the target machine!! This is
important, since the default 'cfq' doesn't perform well with IETD.

You can either set the kernel option 'elevator=deadline' in the target
machine's grub.conf and reboot, or you can change the setting on the fly
like this:

echo deadline > /sys/block/sdX/queue/scheduler

Do that for all the disks/devices you have in your target machine, i.e.
replace sdX with each disk.

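For example, you could switch every sdX device on the target in one go
(a quick sketch; adjust the sd* glob if you only want to touch some of the disks):

for q in /sys/block/sd*/queue/scheduler; do echo deadline > $q; done
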
Also if you're using fileio on IETD, change it to blockio.


One more thing: on the initiator machine you should use the 'noop'
scheduler for the iSCSI disks.. so on the initiator do for each iSCSI disk:

echo noop > /sys/block/sdX/queue/scheduler

And benchmark again after setting correct schedulers/elevators on both
the target and initiator, and the blockio mode on IETD.

-- Pasi

-- 
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To post to this group, send email to open-is...@googlegroups.com.
To unsubscribe from this group, send email to 
open-iscsi+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/open-iscsi?hl=en.




Re: iSCSI throughput drops as link rtt increases?

2010-01-07 Thread Pasi Kärkkäinen
On Wed, Jan 06, 2010 at 11:59:37PM -0800, Jack Z wrote:
> Hi Pasi,
> 
> Thank you very much for your help. I really appreciate it!
> 
> On Jan 5, 12:58 pm, Pasi Kärkkäinen  wrote:
> > On Tue, Jan 05, 2010 at 02:05:03AM -0800, Jack Z wrote:
> >
> >
> > > > Try using some benchmarking tool that can do multiple outstanding IOs..
> > > > for example ltp disktest.
> >
> > > And I tried ltp disktest, too. But I'm not sure whether I used it
> > > right because the result was a little surprising...
> >
> > > I did
> >
> > > disktest -w -S0:1k -B 1024 /dev/sdb
> >
> > > (/dev/sdb is the iSCSI device file, no partition or file system on it)
> >
> > > And the result was:
> >
> > > | 2010/01/05-02:58:26 | START | 27293 | v1.4.2 | /dev/sdb | Start
> > > args: -w -S0:1024k -B 1024 -PA (-I b) (-N 8385867) (-K 4) (-c) (-p R)
> > > (-L 1048577) (-D 0:100) (-t 0:2m) (-o 0)
> > > | 2010/01/05-02:58:26 | INFO  | 27293 | v1.4.2 | /dev/sdb | Starting
> > > pass
> > > ^C| 2010/01/05-03:00:58 | STAT  | 27293 | v1.4.2 | /dev/sdb | Total
> > > bytes written in 85578 transfers: 87631872
> > > | 2010/01/05-03:00:58 | STAT  | 27293 | v1.4.2 | /dev/sdb | Total
> > > write throughput: 701055.0B/s (0.67MB/s), IOPS 684.6/s.
> > > | 2010/01/05-03:00:58 | STAT  | 27293 | v1.4.2 | /dev/sdb | Total
> > > Write Time: 125 seconds (0d0h2m5s)
> > > | 2010/01/05-03:00:58 | STAT  | 27293 | v1.4.2 | /dev/sdb | Total
> > > overall runtime: 152 seconds (0d0h2m32s)
> > > | 2010/01/05-03:00:58 | END   | 27293 | v1.4.2 | /dev/sdb | User
> > > Interrupt: Test Done (Passed)
> >
> > > As you can see, the throughput was only 0.67MB/s and only 85578
> > > written in 87631872 transfers...
> > > I also tweaked the options with "-p l" and/or "-I bd" (change seek
> > > pattern to linear and/or speficy IO type as block and direct IO) but
> > > no improvement happened...
> >
> > Hmm.. so it does 684 IO operations per second (IOPS), and each IO was 1k
> > in size, so it makes 684 kB/sec of throughput.
> >
> > 1000 milliseconds (1 second) divided by 684 IOPS is 1.46 milliseconds per 
> > IO..
> >
> > Are you sure you had 16ms of rtt?
> 
> Actually that was probably the output from 0.2 ms rtt instead of 16
> ms... I'm sorry for the mistake. I tried again the same command on a
> 16ms RTT, and the IOPS was mostly around 180.
> 

1000ms divided by 16ms rtt gives you 62.5 synchronous IOPS max.
So that means you had about 3 outstanding IOs running, since you
got 180 IOPS.

If I'm still following everything correctly :)

> 
> > Try to play and experiment with these options:
> >
> > -B 64k  (blocksize 64k, try also 4k)
> > -I BD (block device, direct IO (O_DIRECT))
> > -K 16 (16 threads, aka 16 outstanding IOs. -K 1 should be the same as dd)
> >
> > Examples:
> >
> > Sequential (linear) reads using blocksize 4k and 4 simultaneous threads, 
> > for 60 seconds:
> > disktest -B 4k -h 1 -I BD -K 4 -p l -P T -T 60 -r /dev/sdX
> >
> > Random writes:
> >
> > disktest -B 4k -h 1 -I BD -K 4 -p r -P T -T 60 -w /dev/sdX
> >
> > 30% random reads, 70% random writes:
> > disktest -r -w -D30:70 -K2 -E32 -B 8k -T 60 -pR -Ibd -PA /dev/md4
> >
> > Hopefully that helps..
> 
> That did help. I tried the following combinations of -B -K and -p at
> 20 ms RTT and the other options were -h 30 -I BD -P T -S0:(1 GB size)
> 
> -B 4k/64k -K 4/64 -p l
> 
> It seems that when I put -p l there the performance goes down
> drastically...
> 

That's really weird.. linear/sequential (-p l) should always be faster
than random.

> -B 4k -K 4/64 -p r
> 
> The disk throughput is similar to the one I used in the previous post
> "disktest -w -S0:1k -B 1024 /dev/sdb " and it's much lower than dd
> could get.
> 

Like I said, weird.

> -B 64k -K 4 -p r
> 
> The disk throughput is higher than the last one but still not as high
> as dd could get.
> 
> -B 64k -K 64 -p r
> 
> The disk throughput was boosted to 8.06 MB/s and the IOPS was 129.0.
> At the link layer, the traffic rate was 70.536 Mbps (the TCP baseline
> was 96.202 Mbps). At the same time, dd ( bs=64K count=(1 GB size)) got
> a throughput of 6.7 MB/s and the traffic rate on the link layer was
> 57.749 Mbps.
> 

Ok.

129 IOPS * 64kB = 8256 kB/sec, which pretty much matches the 8 MB/sec
you measured.

this still means there was only 1 outstanding IO.. and definitely not 64 (-K 
64).

> Although not much, it was 

Re: kernel oops in resched_task

2010-01-06 Thread Pasi Kärkkäinen
On Wed, Jan 06, 2010 at 04:28:32PM +0200, Erez Zilber wrote:
> Hi,
> 
> I got this oops while running open-iscsi on a CentOS 5.3 machine
> (don't know how to recreate it). it crashes while trying to wake up
> the work queue after queuecommand was called. Has anyone seen
> something similar?
>

I'd suggest you upgrade to 5.4 and see if it still happens.

-- Pasi

-- 
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To post to this group, send email to open-is...@googlegroups.com.
To unsubscribe from this group, send email to 
open-iscsi+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/open-iscsi?hl=en.




Re: iSCSI throughput drops as link rtt increases?

2010-01-05 Thread Pasi Kärkkäinen
On Tue, Jan 05, 2010 at 09:58:09PM +0200, Pasi Kärkkäinen wrote:
> On Tue, Jan 05, 2010 at 02:05:03AM -0800, Jack Z wrote:
> > Hi Pasi,
> > 
> > Thank you very much for your reply.
> > 
> > > > I was testing the performance of open-iscsi initiator with IET target
> > > > over a 100Mbps Ethernet link with emulated rtt.  What I did was to do
> > > > raw disk sequential write by
> > >
> > > > $ dd if=/dev/zero of=/dev/sdb bs=1024 count=1048576
> > >
> > > Did you also try with bigger block sizes? 1k blocks are pretty small.
> > >
> > > try bs=1024k to see if it makes a difference.
> > 
> > 
> > I tried bs = 1024k and the throughput is improved, but not much... It
> > goes from 7.2MB/s to 8.0MB/s at a rtt of 16ms. And again, over 90% of
> > the TCP segments on the wire was only of 1448 bytes...
> >
> 

Oh, also make sure you have 'oflag=direct' for dd.

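So the earlier test becomes something like (same device as before; the count
is just an example):

dd if=/dev/zero of=/dev/sdb bs=1024k count=1024 oflag=direct
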
-- Pasi

-- 
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To post to this group, send email to open-is...@googlegroups.com.
To unsubscribe from this group, send email to 
open-iscsi+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/open-iscsi?hl=en.




Re: iSCSI throughput drops as link rtt increases?

2010-01-05 Thread Pasi Kärkkäinen
On Tue, Jan 05, 2010 at 02:05:03AM -0800, Jack Z wrote:
> Hi Pasi,
> 
> Thank you very much for your reply.
> 
> > > I was testing the performance of open-iscsi initiator with IET target
> > > over a 100Mbps Ethernet link with emulated rtt.  What I did was to do
> > > raw disk sequential write by
> >
> > > $ dd if=/dev/zero of=/dev/sdb bs=1024 count=1048576
> >
> > Did you also try with bigger block sizes? 1k blocks are pretty small.
> >
> > try bs=1024k to see if it makes a difference.
> 
> 
> I tried bs = 1024k and the throughput is improved, but not much... It
> goes from 7.2MB/s to 8.0MB/s at a rtt of 16ms. And again, over 90% of
> the TCP segments on the wire was only of 1448 bytes...
>

Ok..


> 
> > dd will use only one outstanding IO, so you have wait for rtt
> > milliseconds after every IO for the ack.. so that definitely slows you
> > down a lot when rtt gets bigger.
> >
> > Try using some benchmarking tool that can do multiple outstanding IOs..
> > for example ltp disktest.
> >
> 
> And I tried ltp disktest, too. But I'm not sure whether I used it
> right because the result was a little surprising...
> 
> I did
> 
> disktest -w -S0:1k -B 1024 /dev/sdb
> 
> (/dev/sdb is the iSCSI device file, no partition or file system on it)
> 
> And the result was:
> 
> | 2010/01/05-02:58:26 | START | 27293 | v1.4.2 | /dev/sdb | Start
> args: -w -S0:1024k -B 1024 -PA (-I b) (-N 8385867) (-K 4) (-c) (-p R)
> (-L 1048577) (-D 0:100) (-t 0:2m) (-o 0)
> | 2010/01/05-02:58:26 | INFO  | 27293 | v1.4.2 | /dev/sdb | Starting
> pass
> ^C| 2010/01/05-03:00:58 | STAT  | 27293 | v1.4.2 | /dev/sdb | Total
> bytes written in 85578 transfers: 87631872
> | 2010/01/05-03:00:58 | STAT  | 27293 | v1.4.2 | /dev/sdb | Total
> write throughput: 701055.0B/s (0.67MB/s), IOPS 684.6/s.
> | 2010/01/05-03:00:58 | STAT  | 27293 | v1.4.2 | /dev/sdb | Total
> Write Time: 125 seconds (0d0h2m5s)
> | 2010/01/05-03:00:58 | STAT  | 27293 | v1.4.2 | /dev/sdb | Total
> overall runtime: 152 seconds (0d0h2m32s)
> | 2010/01/05-03:00:58 | END   | 27293 | v1.4.2 | /dev/sdb | User
> Interrupt: Test Done (Passed)
> 
> As you can see, the throughput was only 0.67MB/s and only 85578
> written in 87631872 transfers...
> I also tweaked the options with "-p l" and/or "-I bd" (change seek
> pattern to linear and/or speficy IO type as block and direct IO) but
> no improvement happened...
> 

Hmm.. so it does 684 IO operations per second (IOPS), and each IO was 1k
in size, so it makes 684 kB/sec of throughput.

1000 milliseconds (1 second) divided by 684 IOPS is 1.46 milliseconds per IO..

Are you sure you had 16ms of rtt? 

> There must be something I've done wrong... Could you maybe help me out
> here?
> 
> Thanks a lot!
> 

Try to play and experiment with these options:

-B 64k  (blocksize 64k, try also 4k)
-I BD (block device, direct IO (O_DIRECT))
-K 16 (16 threads, aka 16 outstanding IOs. -K 1 should be the same as dd)

Examples:

Sequential (linear) reads using blocksize 4k and 4 simultaneous threads, for 60 
seconds:
disktest -B 4k -h 1 -I BD -K 4 -p l -P T -T 60 -r /dev/sdX

Random writes:
disktest -B 4k -h 1 -I BD -K 4 -p r -P T -T 60 -w /dev/sdX

30% random reads, 70% random writes:
disktest -r -w -D30:70 -K2 -E32 -B 8k -T 60 -pR -Ibd -PA /dev/md4

Hopefully that helps..

-- Pasi

-- 
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To post to this group, send email to open-is...@googlegroups.com.
To unsubscribe from this group, send email to 
open-iscsi+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/open-iscsi?hl=en.




Re: iSCSI throughput drops as link rtt increases?

2010-01-04 Thread Pasi Kärkkäinen
On Mon, Jan 04, 2010 at 06:54:17AM -0800, Jack Z wrote:
> Hi all,
> 
> I was testing the performance of open-iscsi initiator with IET target
> over a 100Mbps Ethernet link with emulated rtt.  What I did was to do
> raw disk sequential write by
> 
> $ dd if=/dev/zero of=/dev/sdb bs=1024 count=1048576
> 

Did you also try with bigger block sizes? 1k blocks are pretty small.

try bs=1024k to see if it makes a difference.

> , in which /dev/sdb is the iSCSI device. I also measured TCP
> throughput using iperf with the default setup except "-n 1024M". And I
> got the following data on iSCSI throughput and TCP throughput v.s. rtt
> 
> rtt (ms)   iSCSI throughput by dd (MB/s)   TCP throughput by iperf (Mbit/s)
> 0.2        11.3                            94.3
> 4          11.1                            94.3
> 8          10.2                            94.3
> 12          8.6                            94.2
> 16          7.2                            94.2
> 20          6.0                            94.1
> 
> local disk throughput by dd was 26.7 MB/s.
> 
> As shown in the table above, iSCSI throughput declined rapidly with
> rtt increased from 0.2ms to 20ms. TCP throughput, however, only
> dropped less than 1 percent.
> 

dd will use only one outstanding IO, so you have to wait for rtt
milliseconds after every IO for the ack.. so that definitely slows you
down a lot when the rtt gets bigger.

Try using some benchmarking tool that can do multiple outstanding IOs..
for example ltp disktest.

-- Pasi

--

You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To post to this group, send email to open-is...@googlegroups.com.
To unsubscribe from this group, send email to 
open-iscsi+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/open-iscsi?hl=en.




Re: Need help with multipath and iscsi in CentOS 5.4

2009-12-31 Thread Pasi Kärkkäinen
On Wed, Dec 30, 2009 at 11:48:31AM -0600, Kyle Schmitt wrote:
> On Wed, Dec 9, 2009 at 8:52 PM, Mike Christie  wrote:
> >> So far single connections work: If I setup the box to use one NIC, I
> >> get one connection and can use it just fine.
> > Could you send the /var/log/messages for when you run the login command
> > so I can see the disk info?
> 
> Sorry for the delay.  In the meanwhile I tore down the server and
> re-configured it using ethernet bonding.  It worked, according to
> iozone, provided moderately better throughput than the single
> connection I got before.  Moderately.  Measurably.  Not significantly.
>

If you have just a single iscsi connection/login from the initiator to the
target, then you'll have only one tcp connection, and that means bonding
won't help you at all - you'll only be able to utilize one link of the
bond.

Bonding needs multiple tcp/ip connections to be able to give more
bandwidth.

> I tore it down after that and reconfigured again using MPIO, and funny
> enough, this time it worked.  I can access the lun now using two
> devices (sdb and sdd), and both ethernet devices that connect to iscsi
> show traffic.
> 
> The weird thing is that aside from writing bonding was measurably
> faster than MPIO.  Does that seem right?
> 

That seems a bit weird.

How did you configure multipath? Please paste your multipath settings. 

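For reference, the settings I'm interested in are the ones in /etc/multipath.conf,
e.g. something like this minimal sketch (the values here are only illustrative,
not tuned for your array; array-specific defaults such as path_checker still apply):

defaults {
        user_friendly_names     yes
        path_grouping_policy    multibus
        rr_min_io               10
        failback                immediate
}
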
-- Pasi

> 
> Here's the dmesg, if that lends any clues.  Thanks for any input!
> 
> --Kyle
> 
>  156 lines of dmesg follows 
> 
> cxgb3i: tag itt 0x1fff, 13 bits, age 0xf, 4 bits.
> iscsi: registered transport (cxgb3i)
> device-mapper: table: 253:6: multipath: error getting device
> device-mapper: ioctl: error adding target to table
> device-mapper: table: 253:6: multipath: error getting device
> device-mapper: ioctl: error adding target to table
> Broadcom NetXtreme II CNIC Driver cnic v2.0.0 (March 21, 2009)
> cnic: Added CNIC device: eth0
> cnic: Added CNIC device: eth1
> cnic: Added CNIC device: eth2
> cnic: Added CNIC device: eth3
> Broadcom NetXtreme II iSCSI Driver bnx2i v2.0.1e (June 22, 2009)
> iscsi: registered transport (bnx2i)
> scsi3 : Broadcom Offload iSCSI Initiator
> scsi4 : Broadcom Offload iSCSI Initiator
> scsi5 : Broadcom Offload iSCSI Initiator
> scsi6 : Broadcom Offload iSCSI Initiator
> iscsi: registered transport (tcp)
> iscsi: registered transport (iser)
> bnx2: eth0: using MSIX
> ADDRCONF(NETDEV_UP): eth0: link is not ready
> bnx2i: iSCSI not supported, dev=eth0
> bnx2i: iSCSI not supported, dev=eth0
> bnx2: eth0 NIC Copper Link is Up, 1000 Mbps full duplex, receive &
> transmit flow control ON
> ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
> bnx2: eth2: using MSIX
> ADDRCONF(NETDEV_UP): eth2: link is not ready
> bnx2i: iSCSI not supported, dev=eth2
> bnx2i: iSCSI not supported, dev=eth2
> bnx2: eth2 NIC Copper Link is Up, 1000 Mbps full duplex
> ADDRCONF(NETDEV_CHANGE): eth2: link becomes ready
> bnx2: eth3: using MSIX
> ADDRCONF(NETDEV_UP): eth3: link is not ready
> bnx2i: iSCSI not supported, dev=eth3
> bnx2i: iSCSI not supported, dev=eth3
> bnx2: eth3 NIC Copper Link is Up, 1000 Mbps full duplex
> ADDRCONF(NETDEV_CHANGE): eth3: link becomes ready
> eth0: no IPv6 routers present
> eth2: no IPv6 routers present
> scsi7 : iSCSI Initiator over TCP/IP
> scsi8 : iSCSI Initiator over TCP/IP
> scsi9 : iSCSI Initiator over TCP/IP
> scsi10 : iSCSI Initiator over TCP/IP
>   Vendor: DGC   Model: RAID 5Rev: 0429
>   Type:   Direct-Access  ANSI SCSI revision: 04
> sdb : very big device. try to use READ CAPACITY(16).
> SCSI device sdb: 7693604864 512-byte hdwr sectors (3939126 MB)
> sdb: Write Protect is off
> sdb: Mode Sense: 7d 00 00 08
> SCSI device sdb: drive cache: write through
> sdb : very big device. try to use READ CAPACITY(16).
> SCSI device sdb: 7693604864 512-byte hdwr sectors (3939126 MB)
> sdb: Write Protect is off
> sdb: Mode Sense: 7d 00 00 08
> SCSI device sdb: drive cache: write through
>  sdb:<5>  Vendor: DGC   Model: RAID 5Rev: 0429
>   Type:   Direct-Access  ANSI SCSI revision: 04
>   Vendor: DGC   Model: RAID 5Rev: 0429
>   Type:   Direct-Access  ANSI SCSI revision: 04
> sdc : very big device. try to use READ CAPACITY(16).
> SCSI device sdc: 7693604864 512-byte hdwr sectors (3939126 MB)
> sdc: test WP failed, assume Write Enabled
> sdc: asking for cache data failed
> sdc: assuming drive cache: write through
>   Vendor: DGC   Model: RAID 5Rev: 0429
>   Type:   Direct-Access  ANSI SCSI revision: 04
> sdc : very big device. try to use READ CAPACITY(16).
> SCSI device sdc: 7693604864 512-byte hdwr sectors (3939126 MB)
> sdc: test WP failed, assume Write Enabled
> sde : very big device. try to use READ CAPACITY(16).
> sdc: asking for cache data failed
> sdc: assuming drive cache: write through
>  sdc:<5>SCSI device sde: 7693604864 512-byte hdwr sectors (3939126 MB)

Re: [Scst-devel] iSCSI latency issue

2009-12-29 Thread Pasi Kärkkäinen
On Thu, Nov 26, 2009 at 08:06:13AM +0100, Bart Van Assche wrote:
> On Wed, Nov 25, 2009 at 5:57 PM, Shachar f  wrote:
> > I'm running open-iscsi with scst on Broadcom 10Gig network and facing write
> > latency issues.
> > When using netperf over an idle network the latency for a single block round
> > trip transfer is 30 usec and with open-iscsi it is 90-100 usec.
> >
> > I see that Nagle (TCP_NODELAY) is disabled when openning socket on the
> > initiator side and I'm not sure about the target side.
> > Vlad, Can you elaborate on this?
> >
> > Are others in the mailing list aware to possible environment changes that
> > effext latency?
> >
> > more info -
> > I'm running this test with Centos5.3 machines with almost latest open-iscsi.
> 
> Please make sure that interrupt coalescing has been disabled -- see
> also ethtool -c.
> 

Did you ever figure out the cause of the additional latency? 

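For reference, checking and disabling interrupt coalescing is usually something
like this (eth0 is just an example; the exact parameter names depend on the NIC
driver):

ethtool -c eth0
ethtool -C eth0 rx-usecs 0 tx-usecs 0
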
-- Pasi

--

You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To post to this group, send email to open-is...@googlegroups.com.
To unsubscribe from this group, send email to 
open-iscsi+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/open-iscsi?hl=en.




Re: lio-target crashes when windows initiator logs in

2009-12-09 Thread Pasi Kärkkäinen
On Tue, Dec 08, 2009 at 10:13:16AM -0800, ablock wrote:
> 
> Hi,
> I have problems with the lio-target software. I tried lio-core-2.6.31
> and lio-core-2.6.
> I compiled it together with lio-utils under ubuntu 9.10 and debian
> 5.0.
> Ubuntu and debian was installed in a virtual machine. I used virtual
> box 3.0.12.
> I tried it also on bare metal with the same problems.
>

Hello,

You're posting to the wrong mailing list..
This list is about the open-iscsi Linux iSCSI initiator, not the LIO target.

I don't know how many people here can help you with the LIO target..

-- Pasi

> 
> I can get it working when i use a block device like /dev/sdb.
> It crashes completely when i use a block device like /dev/sdb1 (The
> Partition exists!!!)
> It also crashes completely when i use a logical volume or a md-device.
> 
> The crash happens whenever a Windows Initiator logs in. I tried
> Windows Vista and Windows Server 2008.
> 
> When I start the target module I get the following output:
> 
> Loading target_core_mod/ConfigFS core:   [OK]
> Calling ConfigFS script /etc/target/tcm_start.sh for
> target_core_mod:   [OK]
> Calling ConfigFS script /etc/target/lio_start.sh for
> iscsi_target_mod:   [OK]
> 
> 
> In /var/log/messages I get:
> 
> Dec  8 18:50:51 debian kernel: [  106.480865] TARGET_CORE[0]: Loading
> Generic Kernel Storage Engine: v3.1.0 on Linux/x86_64 on 2.6.31.4v3.1
> Dec  8 18:50:51 debian kernel: [  106.481007] TARGET_CORE[0]:
> Initialized ConfigFS Fabric Infrastructure: v2.0.0 on Linux/x86_64 on
> 2.6.31.4v3.1
> Dec  8 18:50:51 debian kernel: [  106.481036] SE_PC[0] - Registered
> Plugin Class: TRANSPORT
> Dec  8 18:50:51 debian kernel: [  106.481061] PLUGIN_TRANSPORT[1] -
> pscsi registered
> Dec  8 18:50:51 debian kernel: [  106.481084] PLUGIN_TRANSPORT[2] -
> stgt registered
> Dec  8 18:50:51 debian kernel: [  106.481212] CORE_STGT[0]: Bus
> Initalization complete
> Dec  8 18:50:51 debian kernel: [  106.481232] PLUGIN_TRANSPORT[4] -
> iblock registered
> Dec  8 18:50:51 debian kernel: [  106.481250] PLUGIN_TRANSPORT[5] -
> rd_dr registered
> Dec  8 18:50:51 debian kernel: [  106.481268] PLUGIN_TRANSPORT[6] -
> rd_mcp registered
> Dec  8 18:50:51 debian kernel: [  106.481285] PLUGIN_TRANSPORT[7] -
> fileio registered
> Dec  8 18:50:51 debian kernel: [  106.481307] SE_PC[1] - Registered
> Plugin Class: OBJ
> Dec  8 18:50:51 debian kernel: [  106.481326] PLUGIN_OBJ[1] - dev
> registered
> 
> 
> I then initialize the iscsi target with the following commands
> 
> tcm_node --block iblock_0/my_dev2 /dev/vg1/lv1
> lio_node --addlun iqn.2009-11.local.schule.target.i686:sn.123456789 1
> 0 my_dev_port iblock_0/my_dev2
> lio_node --disableauth iqn.2009-11.local.schule.target.i686:sn.
> 123456789 1
> lio_node --addnp iqn.2009-11.local.schule.target.i686:sn.123456789 1
> 192.168.56.101:3260
> lio_node --addlunacl iqn.2009-11.local.schule.target.i686:sn.123456789
> 1 iqn.1991-05.com.microsoft:andreas-pc 0 0
> lio_node --enabletpg iqn.2009-11.local.schule.target.i686:sn.123456789
> 1
> 
> They produce the following output:
> Output tcm_node:
> 
> Status: DEACTIVATED  Execute/Left/Max Queue Depth: 0/32/32
> SectorSize: 512  MaxSectors: 255
> iBlock device: dm-0
> Major: 253 Minor: 0  CLAIMED: IBLOCK
>  ConfigFS HBA: iblock_0
> Successfully added TCM/ConfigFS HBA: iblock_0
>  ConfigFS Device Alias: my_dev2
> Device Params ['/dev/vg1/lv1']
> Set T10 WWN Unit Serial for iblock_0/my_dev2 to: 57f6b040-3159-49df-
> a5bd-2acdb948ef6f
> Successfully created TCM/ConfigFS storage object: /sys/kernel/config/
> target/core/iblock_0/my_dev2
> 
> Output lio_node --addlun:
> Successfully created iSCSI Target Logical Unit
> 
> Output lio_node --disableauth:
> Successfully disabled iSCSI Authentication on iSCSI Target Portal
> Group: iqn.2009-11.local.schule.target.i686:sn.123456789 1
> 
> Output lio_node --addnp:
> Successfully created network portal: 192.168.56.101:3260 created iqn.
> 2009-11.local.schule.target.i686:sn.123456789 TPGT: 1
> 
> Output von lio_node --addlunacl:
> Successfully added iSCSI Initiator Mapped LUN: 0 ACL iqn.
> 1991-05.com.microsoft:andreas-pc for iSCSI Target Portal Group: iqn.
> 2009-11.local.schule.target.i686:sn.123456789 1
> 
> Output von lio_node --enabletpg:
> Successfully enabled iSCSI Target Portal Group: iqn.
> 2009-11.local.schule.target.i686:sn.123456789 1
> 
> 
> In /var/log/messages the initialization leads to the following:
> 
> Dec  8 18:53:11 debian kernel: [  246.679996] Target_Core_ConfigFS:
> Located se_plugin: 88000dd630e0 plugin_name: iblock hba_type: 4
> plugin_dep_id: 0
> Dec  8 18:53:11 debian kernel: [  246.680398] CORE_HBA[0] - Linux-
> iSCSI.org iBlock HBA Driver 3.1 on Generic Target Core Stack v3.1.0
> Dec  8 18:53:11 debian kernel: [  246.680425] CORE_HBA[0] - Attached
> iBlock HBA: 0 to Generic Target Core TCQ Depth: 512
> Dec  8 18:53:11 debian kernel: [  246.680452] CORE_HBA[0] - Attached
> HBA to Generic Target Core
> Dec  8 18:53:11 debian kernel: [  246.680

Re: openiscsi 10gbe network

2009-11-25 Thread Pasi Kärkkäinen
On Tue, Nov 24, 2009 at 08:07:12AM -0800, Chris K. wrote:
> Hello,
> I'm writing in regards to the performance with open-iscsi on a
> 10gbe network. On your website you posted performance results
> indicating you reached read and write speeds of 450 MegaBytes per
> second.
> 
> In our environment we use Myricom dual channel 10gbe network cards on
> a gentoo linux system connected via fiber to a 10gbe interfaced SAN
> with a raid 0 volume mounted with 4 15000rpm SAS drives.
> Unfortunately, the maximum speed we are acheiving is 94 MB/s. We do
> know that the network interfaces can stream data at 822MB/s (results
> obtained with netperf). we know that local read performance on the
> disks is 480MB/s. When using netcat or direct tcp/ip connection we get
> speeds in this range, however when we connect a volume via the iscsi
> protocol using the open-iscsi initiator we drop to 94MB/s(best result.
> Obtained with bonnie++ and dd).
>

What block size are you using with dd? 
Try: dd if=/dev/foo of=/dev/null bs=1024k count=32768

How's the CPU usage on both the target and the initiator when you run
that? Is there iowait?

Did you try with nullio LUN from the target?

-- Pasi

--

You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To post to this group, send email to open-is...@googlegroups.com.
To unsubscribe from this group, send email to 
open-iscsi+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/open-iscsi?hl=en.




Re: iscsi diagnosis help

2009-11-17 Thread Pasi Kärkkäinen
On Mon, Nov 16, 2009 at 09:39:00PM -0500, Hoot, Joseph wrote:
> 
> On Nov 16, 2009, at 8:19 PM, Hoot, Joseph wrote:
> 
> > thanks.  That helps.  So I know that with the EqualLogic targets, there is 
> > a "Group IP" which, I believe, responds with an iscsi login_redirect. 
> > 
> > 1) Could the "Login authentication failed" message be the response because 
> > of a login redirect messages from the EQL redirect?
> > 
> > and then my next question is more for curiosity sake:
> > 
> > 2) Are there plans in the future to have more than one connection per 
> > session?  and I guess in addition to that, would that mean multiple 
> > connections to a single volume over the same nic?
> > 
> > 
> 
> 
> Also Mike, I'm seeing one or two of these every 30-40 minutes if I slam our 
> EqualLogic with roughly 7-15k IOPS (reads and writes) non stop on 3 volumes.  
> In this type of scenario, would you expect to see timeouts like this once in 
> awhile?  If so, do you think increasing my NOOP timeouts would assist so we 
> don't get these?  maybe set it to 15 seconds instead of 10?
> 

Equallogic does active load balancing (redirects) during operation..
dunno about the errors though.

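If you do want to experiment with the NOOP settings, they are in
/etc/iscsi/iscsid.conf; for example (values are only an example, matching the
15 second idea above):

node.conn[0].timeo.noop_out_interval = 10
node.conn[0].timeo.noop_out_timeout = 15
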
-- Pasi

> 
> > 
> > On Nov 16, 2009, at 7:18 PM, Mike Christie wrote:
> > 
> >> Hoot, Joseph wrote:
> >>> Hi all,
> >>> 
> >>> I'm trying to understand what I'm seeing in my /var/log/messages.  Here's 
> >>> what I have:
> >>> 
> >>> Nov 13 10:49:47 oim6102506 kernel:  connection5:0: ping timeout of 10 
> >>> secs expired, last rx 191838122, last ping 191839372, now 191841872
> >>> Nov 13 10:49:47 oim6102506 kernel:  connection5:0: detected conn error 
> >>> (1011)
> >>> Nov 13 10:49:47 oim6102506 iscsid: Kernel reported iSCSI connection 5:0 
> >>> error (1011) state (3)
> >>> Nov 13 10:49:50 oim6102506 iscsid: Login authentication failed with 
> >>> target 
> >>> iqn.2001-05.com.equallogic:0-8a0906-e7d1dea02-786272c42554aef2-ovm-2-lun03
> >>> Nov 13 10:49:52 oim6102506 iscsid: connection5:0 is operational after 
> >>> recovery (1 attempts)
> >>> 
> >>> the first line, what is "connection5:0"?  is that referenced from 
> >>> iscsiadm somewhere? I only ask because I'm seeing iscsid messages and 
> >>> kernel messages.  I also have dm-multipath running, which usually shows 
> >>> up as dm-multipath or something like that.  I understand that iscsid is 
> >>> the process that is logging in and out.  But is the "kernel:" message 
> >>> just an iscsi modules that is loaded into the kernel, which is why it is 
> >>> being logged as "kernel:"?
> >>> 
> >> 
> >> It is the session id and connection id.
> >> 
> >> connection$SESSION_ID:$CONNECTION_ID
> >> 
> >> If you run iscsiadm -m session -P 1 or -P 3
> >> 
> >> You will see
> >> 
> >> #iscsiadm -m session -P 1
> >> Target: iqn.1992-08.com.netapp:sn.33615311
> >>Current Portal: 10.15.85.19:3260,3
> >>Persistent Portal: 10.15.85.19:3260,3
> >>Iface Transport: tcp
> >>Iface IPaddress: 10.11.14.37
> >>Iface HWaddress: default
> >>Iface Netdev: default
> >>SID: 7
> >>iSCSI Connection State: LOGGED IN
> >>Internal iscsid Session State: NO CHANGE
> >> 
> >> 
> >> Session number is the SID value.
> >> 
> >> If you run
> >> iscsiadm -m session
> >> tcp [2] 10.15.84.19:3260,2 iqn.1992-08.com.netapp:sn.33615311
> >> 
> >> the session number/SID is the value in brackets.
> >> 
> >> 
> >> If you run iscsiadm in session mode (iscsiadm -m session) then you can 
> >> use the -R argument and pass in a SID to do an opertaion like
> >> 
> >> iscsiadm -m session -R 2 --rescan
> >> 
> >> would rescan that session.
> >> 
> >> Connection number is currently always zero.
> >> 
> >> 
> >> For the second question, iscsid handles login and logout, and error 
> >> handling, and the kernel basically passes iscsi packets around.
> >> 
> >> 
> >> Nov 13 10:49:47 oim6102506 kernel:  connection5:0: ping timeout of 10 
> >> secs expired, last rx 191838122, last ping 191839372, now 191841872
> >> 
> >> 
> >> so here the iscsi kernel code sends a iscsi ping/nop every noop_interval 
> >> seconds, and if we do not get a response withing noop_timeout seconds it 
> >> will fire off a connection error.
> >> 
> >> 
> >> 
> >> Nov 13 10:49:47 oim6102506 kernel:  connection5:0: detected conn error 
> >> (1011)
> >> 
> >> 
> >> Here is the kernel code notifying userspace of the problem.
> >> 
> >> 
> >> Nov 13 10:49:47 oim6102506 iscsid: Kernel reported iSCSI connection 5:0 
> >> error (1011) state (3)
> >> 
> >> 
> >> And there iscsid is accepting the error (probably no need for the error 
> >> to be logged twice).
> >> 
> >> 
> >> Nov 13 10:49:50 oim6102506 iscsid: Login authentication failed with target
> >> 
> >> 
> >> And then here iscsid handled the error by killing the tcp/ip connection, 
> >> reconnection the tcp/ip connection, and then re-logging into the iscsi 
> >> target. But for some reason we could not l

Re: Problem using multiple NICs

2009-11-17 Thread Pasi Kärkkäinen
On Thu, Nov 12, 2009 at 04:49:47PM -0800, Jim Cole wrote:
> 
> Hi - I am running into problems utilizing two NICs in an iSCSI setup
> for multipath IO. The setup involves a Linux server (Ubuntu 9.10
> Server) with two Broadcom NetXtreme II GbE NICs connected to two
> separate switches on a single subnet, which is dedicated to EqualLogic
> SAN access.
> 

Here's what I did when I tested multiple interfaces with Equallogic:
http://pasik.reaktio.net/open-iscsi-multiple-ifaces-test.txt

-- Pasi

> I have setup two iface definitions using the following steps.
> 
>   - iscsiadm -m iface -I eth4 --op=new
>   - iscsiadm -m iface -I eth5 --op=new
>   - iscsiadm -m iface -I eth4 --op=update -n iface.net_ifacename -v
> eth4
>   - iscsiadm -m iface -I eth5 --op=update -n iface.net_ifacename -v
> eth5
> 
> I have also tried specifying the MAC addresses explicitly with no
> change in behavior.
> 
> Discovery was performed with the following command and worked as
> expected, generating node entries for both interfaces.
> 
> - iscsiadm -m discovery -t st -p xx.xx.xx.xx:3260 -I eth4 -I eth5
> 
> Up to this point everything looks good. And I have no trouble logging
> one interface into the desired target. However attempts to login the
> second interface always result in a time out. The message is
> 
>   iscsiadm: Could not login to [iface: eth4, target: , portal:
> xx.xx.xx.xx,3260]:
>   iscsiadm: initiator reported error (8 - connection timed out)
> 
> The problem is not specific to one interface. I am able to login with
> either one. I just can't seem to login with both at the same time.
> 
> I am using the open-iscsi package that ships with the Ubuntu distro
> (open-iscsi 2.0.870.1-0ubuntu12).
> 
> I have another server on the same network, with identical hardware and
> iSCSI configuration, that is working properly. The only difference is
> that the other server is running CentOS 5.4 and using the initiator
> that ships with that distro (iscsi-initiator-utils
> 6.2.0.871-0.10.el5).
> 
> If anyone could provide any guidance on how to further diagnose, and
> hopefully solve, this problem, it would be greatly appreciated.
> 
> TIA
> 
> Jim
> 
> --~--~-~--~~~---~--~~
> You received this message because you are subscribed to the Google Groups 
> "open-iscsi" group.
> To post to this group, send email to open-iscsi@googlegroups.com
> To unsubscribe from this group, send email to 
> open-iscsi+unsubscr...@googlegroups.com
> For more options, visit this group at 
> http://groups.google.com/group/open-iscsi
> -~--~~~~--~~--~--~---
> 

--

You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To post to this group, send email to open-is...@googlegroups.com.
To unsubscribe from this group, send email to 
open-iscsi+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/open-iscsi?hl=.




Re: Non-news on HP MPX100

2009-11-12 Thread Pasi Kärkkäinen

On Thu, Nov 12, 2009 at 09:17:59AM +0100, Ulrich Windl wrote:
> 
> Hi,
> 
> just a short note on the HP MPX100 firmware: Different to the announcement 
> made 
> some months ago, the most current firmware for the HP MPX100 (HP EVA iSCSI 
> connectivity option) included no change regarding Linux and CHAP: The 
> documenatation still says CHAP ist not supported for Linux. As the product is 
> actually from Qlogic, I'm not sure who's to blame.
> The impression that I get from those software giants is that they completely 
> unable to react to markets demands. Sorry, but I had to let that off...
> 

And they'll fall into their own trap.. the world is changing :) Who needs
FC soon..

-- Pasi


--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To post to this group, send email to open-iscsi@googlegroups.com
To unsubscribe from this group, send email to 
open-iscsi+unsubscr...@googlegroups.com
For more options, visit this group at http://groups.google.com/group/open-iscsi
-~--~~~~--~~--~--~---



Re: duplicate IP addressing

2009-10-21 Thread Pasi Kärkkäinen

On Tue, Oct 20, 2009 at 03:55:13PM -0400, Paul Cooper wrote:
> 
> I have something interesting going on with VMWare (yea I know not open 
> source but I tried) VS ISCSI.
> I am getting a "duplicate IP address" message being reported on the 
> server that is provisioning the LUNS. not sure if it is a symptom or a 
> cause. any thoughts or anybody heard of this?
>

Well.. it sounds like you have the same IP in use on multiple machines.

That's bad.

-- Pasi


--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To post to this group, send email to open-iscsi@googlegroups.com
To unsubscribe from this group, send email to 
open-iscsi+unsubscr...@googlegroups.com
For more options, visit this group at http://groups.google.com/group/open-iscsi
-~--~~~~--~~--~--~---



Re: MC/S support

2009-10-07 Thread Pasi Kärkkäinen

On Wed, Oct 07, 2009 at 02:02:41PM -0700, Learner Study wrote:
> 
> Hello:
> 
> Does open-iscsi (version 2.0-870) support MC/S (Multiple connections
> per iSCSI session)? Can someone please let me know how to enable this?
> 

I don't think open-iscsi has MC/s support. 

This has been discussed many times earlier, and I believe this feature
wouldn't be accepted into upstream Linux, so it won't be implemented in
open-iscsi.

-- Pasi


--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To post to this group, send email to open-iscsi@googlegroups.com
To unsubscribe from this group, send email to 
open-iscsi+unsubscr...@googlegroups.com
For more options, visit this group at http://groups.google.com/group/open-iscsi
-~--~~~~--~~--~--~---



Re: Intel NIC iSCSI acceleration in Linux and open-iscsi

2009-09-29 Thread Pasi Kärkkäinen

On Mon, Sep 28, 2009 at 08:50:57PM -0700, Meenakshi Ramamoorthi wrote:
>Yes, what details do you require ?
>

Well.. what's the status? Is the code available from somewhere? Can I test it? 
:)

I haven't seen anything on open-iscsi list.. at least I can't remember.

Thanks!

-- Pasi

>On Sat, Sep 26, 2009 at 4:49 AM, Pasi Kärkkäinen <[1]pa...@iki.fi> wrote:
> 
>  Hello,
> 
>  Is anyone working on Linux/open-iscsi iSCSI offloading/acceleration
>  using Intel gigabit and 10 gigabit NICs?
> 
>  -- Pasi
> 
>> 
> References
> 
>Visible links
>1. mailto:pa...@iki.fi

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To post to this group, send email to open-iscsi@googlegroups.com
To unsubscribe from this group, send email to 
open-iscsi+unsubscr...@googlegroups.com
For more options, visit this group at http://groups.google.com/group/open-iscsi
-~--~~~~--~~--~--~---



Intel NIC iSCSI acceleration in Linux and open-iscsi

2009-09-28 Thread Pasi Kärkkäinen

Hello,

Is anyone working on Linux/open-iscsi iSCSI offloading/acceleration 
using Intel gigabit and 10 gigabit NICs? 

-- Pasi


--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To post to this group, send email to open-iscsi@googlegroups.com
To unsubscribe from this group, send email to 
open-iscsi+unsubscr...@googlegroups.com
For more options, visit this group at http://groups.google.com/group/open-iscsi
-~--~~~~--~~--~--~---



Re: Question about iSCSI Enterprise Target

2009-09-10 Thread Pasi Kärkkäinen

On Tue, Sep 08, 2009 at 08:14:38PM +0200, Michael Schwartzkopff wrote:
> 
> Hi,
> 
> perhaps this is the wrong mailing list, but anyway, perhaps anybody here can 
> help me:
> 
> iSCSI Enterprise Target is the projekt for the Linux implementation of a 
> iSCSI 
> target. My question is wheather the software can be used to bind the same 
> target to two different initiators?
> 

Yes, IETD can be used for that.

And yes, this is the wrong mailing list :)

-- Pasi


--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To post to this group, send email to open-iscsi@googlegroups.com
To unsubscribe from this group, send email to 
open-iscsi+unsubscr...@googlegroups.com
For more options, visit this group at http://groups.google.com/group/open-iscsi
-~--~~~~--~~--~--~---



Re: How many S/W iSCSI Initiators on same machine?

2009-08-05 Thread Pasi Kärkkäinen

On Tue, Aug 04, 2009 at 05:05:36PM -0400, Donald Williams wrote:
> I don't know if there's a way to set unique initiator names for each NIC.  A
> quick scan of the config file didn't show anything.  I *believe* iscsid has
> the initiator name so it's a global parameter.
>  Why do you want unique names for each initiator?   What do you think it
> will gain you?
> 

You can create multiple ifaces with open-iscsi and 'bind' each iface to a
specific ethernet interface, and this way log in multiple sessions from the
same computer to the target, getting multiple paths to the same LUN/target.

-- Pasi

>  -don
> 
> On Tue, Aug 4, 2009 at 4:16 AM, Rainer Bläs
> wrote:
> 
> >
> > Thanks for your answer!
> > Yes, by using "#iscsiadm -m iface -I ethN, N=1...6" we can have 6
> > iSCSI sessions.
> >
> > But now there is the question "HOWTO assign an initiator name for EACH
> > session"?
> > For one iSCSI session it can be found in the /etc/iscsi/
> > initiatorname.iscsi File:
> >
> > InitiatorName=iqn.1986-03.com.hp:Ethernet1
> >
> > Can it be done by adding these 5 entries
> >
> > InitiatorName=iqn.1986-03.com.hp:Ethernet2
> > InitiatorName=iqn.1986-03.com.hp:Ethernet3
> > InitiatorName=iqn.1986-03.com.hp:Ethernet4
> > InitiatorName=iqn.1986-03.com.hp:Ethernet5
> > InitiatorName=iqn.1986-03.com.hp:Ethernet6
> >
> > or which syntax has to be used?
> >
> > Rainer
> >
> >
> >
> >
> >
> >
> > On Aug 3, 10:36 pm, Donald Williams  wrote:
> > > Hello,
> > > I'm not sure what your question really is.  Yes, you can have 6x GbE
> > > interfaces on different subnets and run iSCSI over them. What target are
> > you
> > > using?   Typically, your iSCSI SAN is on one subnet.  It avoids the need
> > to
> > > do IP routing.   Which adds latency and can reduce performance.
> > >
> > >  -don
> > >
> > > On Fri, Jul 31, 2009 at 6:03 AM, Rainer Bläs
> > > wrote:
> > >
> > >
> > >
> > > > Dear all,
> > >
> > > > we are running a SLES 10SP2 system with 6 physical Ethernet ports.
> > > > For instance is it possible to have 6 iSCSI initiators onthis system
> > > > when each IP of these six ports are belonging to 6 different (Sub)
> > > > Lans?
> > >
> > > > THX, Rainer
> > >
> > > --
> > >
> > > Marie von Ebner-Eschenbach<
> > http://www.brainyquote.com/quotes/authors/m/marie_von_ebnereschenbac>
> > > - "Even a stopped clock is right twice a day."
> >
> > >
> >
> 
> 
> -- 
> 
> Pablo Picasso
> - "Computers are useless. They can only give you answers."
> 
> > 

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To post to this group, send email to open-iscsi@googlegroups.com
To unsubscribe from this group, send email to 
open-iscsi+unsubscr...@googlegroups.com
For more options, visit this group at http://groups.google.com/group/open-iscsi
-~--~~~~--~~--~--~---



Re: How to refresh "iscsi" partition list in all clients?

2009-07-27 Thread Pasi Kärkkäinen

On Fri, Jul 24, 2009 at 09:10:47AM -0700, Christopher Chen wrote:
> 
> Partprobe will probably do what you want. Partprobe rescans partition
> tables. --rescan just rescans the targets for new LUNs.
> 
> Are you sharing filesystems? If you are you should probably look
> Clustered LVM+GFS. I use clustered LVM to provision logical volumes
> for Xen guests on shared iSCSI luns, and it works just fine...
> 

This is a bit off-topic, but have you ever done online resizing of the LUNs used
for CLVM? I.e. first grow the LUN on the SAN storage array, then rescan the LUN
on the Xen hosts, and then pvresize.. to get more free space into the clustered
VG.

If I have understood things correctly that _should_ work nowadays, at least
in RHEL 5.3 and above, but I haven't tried it yet myself with CLVM (I've done 
it with the normal non-clustered LVM).

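Roughly the sequence I mean, with a made-up /dev/sdb as the iSCSI disk backing
the PV:

echo 1 > /sys/block/sdb/device/rescan   # make the host re-read the new LUN size
pvresize /dev/sdb                       # grow the PV to match the LUN
vgs                                     # the clustered VG should now show more free space
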
-- Pasi

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To post to this group, send email to open-iscsi@googlegroups.com
To unsubscribe from this group, send email to 
open-iscsi+unsubscr...@googlegroups.com
For more options, visit this group at http://groups.google.com/group/open-iscsi
-~--~~~~--~~--~--~---



Re: iscsiadm -m iface and routing

2009-07-09 Thread Pasi Kärkkäinen

On Thu, Jul 02, 2009 at 09:41:30AM -0400, Hoot, Joseph wrote:
> 
> Hi all,
> 
> I'm currently attempting to implement a Dell EqualLogic iSCSI solution
> connected through m1000e switches to Dell m610 blades with (2) iSCSI
> dedicated nics in each blade running Oracle VM Server v2.1.5 (which, I
> believe, is based off of RHEL5.1).
> 
> [r...@oim6102501 log]# rpm -qa | grep iscsi
> iscsi-initiator-utils-6.2.0.868-0.7.el5
> [r...@oim6102501 log]# uname -a
> Linux oim6102501 2.6.18-8.1.15.3.1.el5xen #1 SMP Tue May 12 19:21:30 EDT
> 2009 i686 i686 i386 GNU/Linux
> 
> I have setup a bond with the two nics to test this out initially and had
> no problems.  This allows for failover (active-passive bond).  I
> discovered my targets, I logged into sessions, and recognized everything
> from dm-multipath.  I fdisked, formatted drives, and mounted them.I
> then ran lots of dd's to and from the disks with speeds around 90MB/sec
> when `dd if=/dev/mapper/ovm-1-lun0p1 of=/dev/null bs=1M count=1000`
> 
> So since this is going to be one of many VM servers in our OVM cluster
> with multiple VM's running on it (of which many are database servers), I
> wanted to try to make this more efficient.  Therefore, I read that by
> using the `iscsiadm -m iface` syntax you can (instead of bonding) setup
> (2) nics individually, each with an IP on that same segment as your
> iSCSI storage.  From what I understand, this allows (2) sessions to be
> created to each volume-- which should give you a little more throughput.
> I did this:
> 
> iscsiadm -m iface -I eth2 --op=new
> iscsiadm -m iface -I eth3 --op=new
> iscsiadm -m iface -I eth2 --op=update -n iface.hwaddress -v
> 00:10:18:3A:5B:6C
> iscsiadm -m iface -I eth3 --op=update -n iface.hwaddress -v
> 00:10:18:3A:5B:6E
> iscsiadm -m discovery -t st -p 192.168.0.19 -P 1
> iscsiadm -m node --loginall=all
> 
> dm-multipath sees 2 paths now to each volume.  If I run `iscsiadm -m
> session -P 3` I can see which /dev/sdX device is used by which multipath
> device.  I have multipath setup to load balance across both paths with
> rr_min_io set to 10 in /etc/multipath.conf (which it is my understanding
> that this will send 10 I/Os to one path and then switch to the other
> path).
> 
> my eth2 is 192.168.0.151
> my eth3 is 192.168.0.161
> 
> my group is 192.168.0.19
> my eql interface#1 is 192.168.0.30
> my eql interface#1 is 192.168.0.31
> 
> [r...@oim6102501 log]# netstat -rn
> Kernel IP routing table
> Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
> 10.0.10.0       0.0.0.0         255.255.255.0   U         0 0          0 vlan10
> 169.254.0.0     0.0.0.0         255.255.0.0     U         0 0          0 eth3
> 192.168.0.0     0.0.0.0         255.255.0.0     U         0 0          0 eth2
> 192.168.0.0     0.0.0.0         255.255.0.0     U         0 0          0 eth3
> 0.0.0.0         10.0.10.254     0.0.0.0         UG        0 0          0 vlan10
> 
> PROBLEM:
> 
> I'm now having some odd issues where everything appears to work fine.  I
> can mount drives and dd stuff around.  But then I will occasionally get
> "Reset received on the connection" from my EqualLogic logs.  I will see
> the same thing in /var/log/messages from my kernel scsi layer as well as
> dm-multipath layer.  When I look at `iscsiadm -m session -P 2` I can see
> the following: 
> 
> iSCSI Connection State: TRANSPORT WAIT
> iSCSI Session State: Unknown
> Internal iscsid Session State: REPOEN
> 
> By the way, is that a bug?  "REPOEN"?  should it be "REOPEN"?
> 
> Within about 1-2 minutes it will reconnect.  But I'm a bit baffled what
> would cause this.
> 
> QUESTION:
> =
> Since I am creating two iscsi sessions (one out eth2 and eth3), I'm
> wondering how routing plays into sessions.  Since iscsiadm is given the
> hwaddress, does iscsid need to care much about routing?  In other words,
> let's say that, for whatever reason, my session that I had through eth3
> (that session, by the way, is connected to 192.168.0.31 on the
> EqualLogic) timesout.  iscsid sees this and attempts to REOPEN the
> session.  Since my routing table shows eth2 above the route for eth3,
> IP-wise, eth2 will be the interface that would typically be chosen for
> that traffic to route out of.  However, is iscsid smart enough to not
> use that route and instead select the iface (based on hwaddress) to use
> for that reconnection?
> 

You can also use the ethernet interface name to make sure the correct iface is
always used.

# iscsiadm -m iface -I iface3 -o new
New interface iface3 added

# iscsiadm -m iface -I iface3 --op=update -n iface.net_ifacename -v eth1.234
iface3 updated.

Replace the eth1.234 VLAN with whatever you use, for example eth3.
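
Once the iface is bound like that, discovery and login can go through it,
roughly like this (reusing the group IP from your mail; the exact option
spelling may vary a bit between open-iscsi versions):

# iscsiadm -m discovery -t sendtargets -p 192.168.0.19 -I iface3
# iscsiadm -m node --loginall=all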

You can also specify these things in the /var/lib/iscsi/ifaces/ directory.
Create a file called "ifaceX" and write something like this in it:

iface.iscsi_ifacename = ifaceX
iface.transport_name = tcp
iface.net_ifacename = eth0.xyz

or you could replace the

Re: RFC: do we need a new list for kernel patches

2009-06-11 Thread Pasi Kärkkäinen

On Thu, Jun 11, 2009 at 12:41:17PM -0500, Mike Christie wrote:
> 
> Hey,
> 
> It seems like we have a lot of members on the list that are not kernel 
> developers, but we now have 5 iscsi drivers (qla4xxx, bnx2i, cxgb3i, 
> iscsi_tcp and ib_iser) with another being written. So it seems like we 
> are going to have lots of patches. I would also like to start sending my 
> kernel patches out in a way that everyone can see them. Previously to 
> avoid noise on this list, I have been pinging you guys privately which 
> just does not work so well now when we have so many people.
> 
> What do you people think?
> 
> Do other people on the list prefer to see everything here, so you can 
> see what features are making progress?
> 

I think it's OK to send patches to this list. It's easier to see everything
in one place.

Just my 2 eurocents :)

-- Pasi

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To post to this group, send email to open-iscsi@googlegroups.com
To unsubscribe from this group, send email to 
open-iscsi+unsubscr...@googlegroups.com
For more options, visit this group at http://groups.google.com/group/open-iscsi
-~--~~~~--~~--~--~---



Re: HBA versus software initiator (open-iscsi)

2009-06-05 Thread Pasi Kärkkäinen

On Fri, Jun 05, 2009 at 12:57:11AM +0200, benoit plessis wrote:
> Hi,
> 
> Well the target are NetApp Filers, one FAS2020 and the other a FAS3020.
> 
> The test system is directly attached to the filer, using either a bnx2 card
> or an intel
> 
> The latency is calculated by bonnie++ 1.95 on a ext2 file system, the
> bandwith came from interface monitoring using cacti or bmon and also
> bonnie++
> 
> Iperf give 924Mbits/sec using iperf
> Using a 4k dd read on a nfs file mounted on the same filer i got 96Mo/s
> using
> an un-optimized path (no jumbo frame, going thru a few switchs and a router)
> 
> Device blocksize is 4k, as is the FS blocksize
> 

OK. I don't really have personal experience with NetApp filers. Maybe
they're good in NAS stuff, not in (iSCSI) SAN stuff? 

If you get 924 Mbit/sec with iperf using that bnx2 card, then the card and
drivers should be fine. 

What distribution are you using on the initiator? What kernel? What open-iscsi 
version? 

Can you paste your iSCSI sessions parameters/settings from the active
session? 

Did you try with 512 bytes device blocksize? 

-- Pasi

> 2009/6/4 Pasi Kärkkäinen 
> 
> >
> > On Thu, Jun 04, 2009 at 04:03:28PM +0200, benoit plessis wrote:
> > > Hi,
> > >
> > > What do you think of (real) iSCSI HBA like Qlogic cards under linux ?
> > > We are using open-iscsi now with bnx2 cards and the perfs are far from
> > > decent (top 50/100Mbps),
> > > some comparative tests on an intel e1000 card show betters results
> > > (500/600Mbps) but
> > > far from gigabit saturation, and still high latency.
> > >
> >
> > What target are you using? "nullio" mode or memdisk is good for
> > benchmarking,
> > if either of those are possible on your target.
> >
> > How did you measure the latency? Are you using direct crossover cables, or
> > switches?
> >
> > How do your NICs perform with for example FTP? How about iperf?
> >
> > How did you get those 50/100 and 500/600 Mbit numbers? What benchmark did
> > you use? What kind of blocksize?
> >
> > -- Pasi
> >
> >
> > >
> >
> 
> > 

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To post to this group, send email to open-iscsi@googlegroups.com
To unsubscribe from this group, send email to 
open-iscsi+unsubscr...@googlegroups.com
For more options, visit this group at http://groups.google.com/group/open-iscsi
-~--~~~~--~~--~--~---



Re: HBA versus software initiator (open-iscsi)

2009-06-04 Thread Pasi Kärkkäinen

On Thu, Jun 04, 2009 at 04:03:28PM +0200, benoit plessis wrote:
> Hi,
> 
> What do you think of (real) iSCSI HBA like Qlogic cards under linux ?
> We are using open-iscsi now with bnx2 cards and the perfs are far from
> decent (top 50/100Mbps),
> some comparative tests on an intel e1000 card show betters results
> (500/600Mbps) but
> far from gigabit saturation, and still high latency.
> 

What target are you using? "nullio" mode or memdisk is good for benchmarking, 
if either of those are possible on your target.

How did you measure the latency? Are you using direct crossover cables, or
switches? 

How do your NICs perform with for example FTP? How about iperf? 

How did you get those 50/100 and 500/600 Mbit numbers? What benchmark did
you use? What kind of blocksize? 

-- Pasi


--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To post to this group, send email to open-iscsi@googlegroups.com
To unsubscribe from this group, send email to 
open-iscsi+unsubscr...@googlegroups.com
For more options, visit this group at http://groups.google.com/group/open-iscsi
-~--~~~~--~~--~--~---



Re: iSCSI and FileSystem (ext2/ext3)

2009-04-16 Thread Pasi Kärkkäinen

On Thu, Apr 16, 2009 at 11:50:27AM +0200, Bart Van Assche wrote:
> 
> On Wed, Apr 15, 2009 at 9:19 PM, Pasi Kärkkäinen  wrote:
> > noop is usually good for the initiator. cfq has a feature (or a bug?) that
> > prevents achieving queue depths deeper than 1, and thus limits your
> > bandwidth a lot when there are (or should be) many ios on the fly at the
> > same time.
> 
> Do you remember on which kernel version you observed the above
> behavior ? This might be a bug in the CFQ scheduler. I found the
> following in the 2.6.28 changelog: "cfq-iosched: fix queue depth
> detection". See also
> http://www.eu.kernel.org/pub/linux/kernel/v2.6/ChangeLog-2.6.28 or
> http://lkml.org/lkml/2008/8/22/39.
> 

IIRC it has been with RHEL5/CentOS5 2.6.18-based kernels..

Mike Christie has been writing about this as well.. dunno which kernels
he has seen it with.

Then again CFQ was designed for "single disk workstations".. 

-- Pasi

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To post to this group, send email to open-iscsi@googlegroups.com
To unsubscribe from this group, send email to 
open-iscsi+unsubscr...@googlegroups.com
For more options, visit this group at http://groups.google.com/group/open-iscsi
-~--~~~~--~~--~--~---



Re: iSCSI and FileSystem (ext2/ext3)

2009-04-15 Thread Pasi Kärkkäinen

On Wed, Apr 15, 2009 at 08:42:33PM +0200, Bart Van Assche wrote:
> 
> On Wed, Apr 15, 2009 at 8:24 PM, benoit plessis
>  wrote:
> > I wanted to share some infos about a "discovery" we made using mysql over
> > iSCSI.
> >
> > We have a bunch of replicated mysql server, initially all using ext3, due to
> > perfs problems we
> > tried comparing persf in ext3 vs ext2, and we found the following:
> >
> > server using ext3
> >     normal iops   100
> >     normal bw  25/30Mbps
> >     peak iops   1000
> >     peak bw 45/52Mbps
> >
> > server using ext2
> >     normal iops   40
> >     normal bw  4/5 Mbps
> >     peak iops   50
> >     peak bw 7/8 Mbps
> >
> > All servers using the "noop" scheduler.
> >
> > The ext3 FS wasn't even using journalised datas, only the standard metadata
> > configuration, but the
> > impact on resource usage is quite impressive 
> >
> > So the question is, what do you use as FS over iSCSI ?
> 
> Why are you using the noop scheduler on the initiator instead of
> deadline or CFQ ? 

noop is usually good for the initiator. cfq has a feature (or a bug?) that
prevents achieving queue depths deeper than 1, and thus limits your
bandwidth a lot when there are (or should be) many IOs on the fly at the
same time.

-- Pasi

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To post to this group, send email to open-iscsi@googlegroups.com
To unsubscribe from this group, send email to 
open-iscsi+unsubscr...@googlegroups.com
For more options, visit this group at http://groups.google.com/group/open-iscsi
-~--~~~~--~~--~--~---



Re: multipath iSCSI installs

2009-04-07 Thread Pasi Kärkkäinen

On Thu, Apr 02, 2009 at 11:38:29PM -0700, mala...@us.ibm.com wrote:
> 
> Mike Christie [micha...@cs.wisc.edu] wrote:
> > If the ibft implementation uses one session, but exports all the targets 
> > in the ibft info, then in RHEL 5.3 the installer only picks up the 
> > session used for the ibft boot up, but the initrd root-boot code used 
> > after the install should log into all the targets in ibft whether they 
> > were used for the ibft boot or not. There different behavior is a result 
> > of the installer goofing up where is used the wrong api.
> 
> It is quite likely that my iBFT implementation uses&exports a single
> session.
> 

Btw, is this on an HS21 blade? Or something else..? I could test it too, if it's
an IBM blade.. 

-- Pasi

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To post to this group, send email to open-iscsi@googlegroups.com
To unsubscribe from this group, send email to 
open-iscsi+unsubscr...@googlegroups.com
For more options, visit this group at http://groups.google.com/group/open-iscsi
-~--~~~~--~~--~--~---



Re: multipath iSCSI installs

2009-04-01 Thread Pasi Kärkkäinen

On Wed, Apr 01, 2009 at 05:13:10AM -0700, mala...@us.ibm.com wrote:
> 
> Hi all,
> 
>   I am trying to install RHEL5.3 on an iSCSI disk with two paths.
> I booted with "mapth" option but the installer picked up only a single
> path. Is this the expected behavior when I use "iBFT"?
>

I've installed RHEL 5.3 (and CentOS 5.3) to multipath-root using "mpath"
installer option. It worked fine. I didn't use iBFT though.. 
 
> The install went fine on a single path. I was trying to convert the
> single path to multi-path by running "mkinitrd". RHEL was unable to boot
> (panics) with the new "initrd" image.  The only difference between the
> old initrd image and the new one is that the old initrd image was using
> iBFT method and the new image was trying to use the values from the
> existing session(s) at "initrd" creation time. For some reason the
> latter method doesn't work. Is this a known bug?
> 

Yeah, conversion from a single-path to a multipath root can be tricky.. I think it
might require manual customization/editing of the initrd image (scripts). 

-- Pasi

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To post to this group, send email to open-iscsi@googlegroups.com
To unsubscribe from this group, send email to 
open-iscsi+unsubscr...@googlegroups.com
For more options, visit this group at http://groups.google.com/group/open-iscsi
-~--~~~~--~~--~--~---



Re: Linux iscsi performance, multipath issue

2008-12-19 Thread Pasi Kärkkäinen

On Thu, Dec 11, 2008 at 12:52:49PM +0200, Pasi Kärkkäinen wrote:
> 
> On Mon, Dec 08, 2008 at 12:45:11PM -0800, Kmec wrote:
> > 
> > 
> > >
> > > > IIRC IOmeter for Linux had some issues.. related to queue depth maybe? 
> > > > So
> > > > you should use other tools than IOmeter on Linux. Dunno if that problem 
> > > > is
> > > > already fixed or if there is a patch available for IOmeter for Linux..
> > >
> > > Oh, and please try using 'noop' elevator/scheduler on your iSCSI disks..
> > > that might help with the performance.
> > >
> > 
> > noop scheduler didn't help at all :-(
> > 
> > What is curious that we can't get more than 62 MBps from single NIC :-
> > (
> > 
> 
> Earlier you said you got 90 MBps from a single NIC.. 
> 
> How did you measure this? Please paste the commands used.. 
> 
> Did you verify both the paths are connected to different interface on EQL
> array? You can check this with "iscsiadm -m session -P3"
> 
> Another question: Which MPIO load balance policy did you use in Windows?
> (round-robin, weighted path, least queue depth?)
> 

.. And did you have the Equallogic MPIO DSM installed on Windows? 

-- Pasi

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To post to this group, send email to open-iscsi@googlegroups.com
To unsubscribe from this group, send email to 
open-iscsi+unsubscr...@googlegroups.com
For more options, visit this group at http://groups.google.com/group/open-iscsi
-~--~~~~--~~--~--~---



Re: Open-iSCSI error on CentOs -> ping timeout of 5 secs expired

2008-12-18 Thread Pasi Kärkkäinen

On Wed, Dec 17, 2008 at 06:33:15PM +0100, Santi Saez wrote:
> 
> 
> 
> On Wed, 17 Dec 2008 11:12:46 -0600, Mike Christie 
> wrote:
> 
> > It is an error in that we tried to send a ping to the target and did not
> > get a response.
> > 
> > Are you using the kernel from CentOS 5.2? If so it has a bug in that
> > code patch that you might be hitting. The bug is that the code thought
> > the ping timedout when it had not, so the driver would fire off the conn
> > error and start recovery when we should not have.
> 
> Thanks!
> 
> Upss.. but I have a problem: it's a Virtuozzo based system, so I have not
> access to the source code to patch this bug. Virtuozzo is a Linux kernel
> modification based virtualization system, and it's not open-source :(
> 

Hmm.. if it's a modified Linux kernel, then the changes need to be GPL too..

Are you sure you can't get the source from them? Not providing it would be a GPL
violation.. 

-- Pasi

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To post to this group, send email to open-iscsi@googlegroups.com
To unsubscribe from this group, send email to 
open-iscsi+unsubscr...@googlegroups.com
For more options, visit this group at http://groups.google.com/group/open-iscsi
-~--~~~~--~~--~--~---



Re: [PATCH v2 0/2 2.6.29] cxgb3i -- open-iscsi initiator acceleration

2008-12-16 Thread Pasi Kärkkäinen

On Mon, Dec 15, 2008 at 05:52:54PM -0800, Karen Xie wrote:
> 
> Hi, Pasi,
> 
> Here are some throughput numbers we see with disktest, one tcp connection 
> (iscsi session).
> 
> The setup are between a pair of chelsio 10G adapters. The target is Chelsio's 
> ramdisk target with data discarded (similiar to IET's NULLIO mode). The 
> Chelsio target is used because of the digest offload and payload ddp. 
> 
> The ethernet frame is standard 1500 bytes.
> 
> The numbers are about 3 months old, but you get the idea :) For cxgb3i 
> driver, since the digest is offloaded the performance is very similar in the 
> digest off case, so only the numbers for digest on are shown.
>

Thanks! cxgb3i acceleration seems to give a nice performance boost compared
to plain iscsi-tcp.. especially with digests on! 

> We will re-run the tests and get the cpu stats too, will keep you posted.
> 

Yep, cpu stats would be really nice to have too.

-- Pasi

> 
> Test         cxgb3i     iscsi-tcp   iscsi-tcp
>              digest on  digest on   digest off
>              (MB/sec)   (MB/sec)    (MB/sec)
> ================================================
> 
> 512-read       36.85      34.13       36.69
> 1k-read        71.91      58.52       66.81
> 2k-read       137.24      97.75      128.46
> 4k-read       280.61     137.98      214.04
> 8k-read       531.34     201.87      325.09
> 16k-read      953.67     226.49      429.32
> 64k-read     1099.57     248.57      626.30
> 128k-read    1102.65     256.04      613.94
> 256k-read    1105.28     262.28      642.73
> 
> 512-write      39.54      34.18       38.36
> 1k-write       79.52      56.51       75.06
> 2k-write      158.03      84.12      140.85
> 4k-write      314.56     126.33      282.72
> 8k-write      559.83     155.49      528.24
> 16k-write     968.84     168.50      676.38
> 64k-write    1099.31     182.82      978.82
> 128k-write   1074.62     182.55      974.18
> 256k-write   1063.85     185.67      972.88
> 
> 
> -Original Message-
> From: Pasi Kärkkäinen [mailto:pa...@iki.fi] 
> Sent: Monday, December 15, 2008 6:06 AM
> To: open-iscsi@googlegroups.com
> Cc: linux-s...@vger.kernel.org; micha...@cs.wisc.edu; 
> james.bottom...@hansenpartnership.com; Karen Xie
> Subject: Re: [PATCH v2 0/2 2.6.29] cxgb3i -- open-iscsi initiator acceleration
> 
> On Tue, Dec 09, 2008 at 02:15:22PM -0800, Karen Xie wrote:
> > 
> > [PATCH v2 0/2 2.6.29] cxgb3i -- open-iscsi initiator acceleration 
> > 
> > From: Karen Xie 
> > 
> > Here is the updated patchset for adding cxgb3i iscsi initiator.
> > 
> > The updated version incorporates the comments from Mike and Boaz:
> > - remove the cxgb3 sysfs entry for the private iscsi ip address, it can be
> >   accessed from iscsi.
> > - in cxgb3i.txt, added error message logged for not setting 
> > MaxRecvDataSegmentLength properly.
> > - renamed cxgb3i Makefile to Kbuild
> > - removed "select ISCSI_TCP" in Kconfig
> > - consistent handling of AHS: on tx, reserve rooms for AHS; on rx, assume 
> > we could receive AHS.
> > - add support of bi-directional commands for ddp setup,
> > 
> > The cxgb3i driver, especially the part handles the offloaded iscsi tcp 
> > connection mangement, has gone through the netdev review 
> > (http://marc.info/?l=linux-netdev&m=121944339211552, 
> > http://marc.info/?l=linux-netdev&m=121989660016124).
> > 
> > The cxgb3i driver provides iscsi acceleration (PDU offload and payload data 
> > direct placement) to the open-iscsi initiator. It accesses the hardware 
> > through the cxgb3 module.
> > 
> 
> Hello!
> 
> Do you guys have performance comparison/numbers for normal open-iscsi over 
> tcp vs. cxgb3i accelerated?
> 
> Would be nice to see throughput/iops/cpu-usage statistics..
> 
> -- Pasi
> 
> 

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To post to this group, send email to open-iscsi@googlegroups.com
To unsubscribe from this group, send email to 
open-iscsi+unsubscr...@googlegroups.com
For more options, visit this group at http://groups.google.com/group/open-iscsi
-~--~~~~--~~--~--~---



Re: [PATCH v2 0/2 2.6.29] cxgb3i -- open-iscsi initiator acceleration

2008-12-15 Thread Pasi Kärkkäinen

On Tue, Dec 09, 2008 at 02:15:22PM -0800, Karen Xie wrote:
> 
> [PATCH v2 0/2 2.6.29] cxgb3i -- open-iscsi initiator acceleration 
> 
> From: Karen Xie 
> 
> Here is the updated patchset for adding cxgb3i iscsi initiator.
> 
> The updated version incorporates the comments from Mike and Boaz:
> - remove the cxgb3 sysfs entry for the private iscsi ip address, it can be
>   accessed from iscsi.
> - in cxgb3i.txt, added error message logged for not setting 
> MaxRecvDataSegmentLength properly.
> - renamed cxgb3i Makefile to Kbuild
> - removed "select ISCSI_TCP" in Kconfig
> - consistent handling of AHS: on tx, reserve rooms for AHS; on rx, assume we 
> could receive AHS.
> - add support of bi-directional commands for ddp setup,
> 
> The cxgb3i driver, especially the part handles the offloaded iscsi tcp 
> connection mangement, has gone through the netdev review 
> (http://marc.info/?l=linux-netdev&m=121944339211552, 
> http://marc.info/?l=linux-netdev&m=121989660016124).
> 
> The cxgb3i driver provides iscsi acceleration (PDU offload and payload data 
> direct placement) to the open-iscsi initiator. It accesses the hardware 
> through the cxgb3 module.
> 

Hello!

Do you guys have performance comparison/numbers for normal open-iscsi over tcp 
vs. cxgb3i accelerated?

Would be nice to see throughput/iops/cpu-usage statistics..

-- Pasi

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To post to this group, send email to open-iscsi@googlegroups.com
To unsubscribe from this group, send email to 
open-iscsi+unsubscr...@googlegroups.com
For more options, visit this group at http://groups.google.com/group/open-iscsi
-~--~~~~--~~--~--~---



Re: mount as ro for users, rw for root?

2008-12-14 Thread Pasi Kärkkäinen

On Sun, Dec 14, 2008 at 04:53:02AM -0500, Scott R. Ehrlich wrote:
> 
> Under CentOS 5.2, is it possible to mount an iscsi filesystem/partition as 
> rw for root, but ro for users?
> 
> If so, what would be the proper syntax?
> 

iSCSI is not a filesystem.. iSCSI is a protocol to export block devices. 

So this question is not really related to iSCSI at all..

And to answer your question.. I don't know. Adjust the filesystem permissions
so that only root can write?
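
For example, assuming the filesystem is mounted at /mnt/data (just an
illustration):

# chown root:root /mnt/data
# chmod 755 /mnt/data

Then root can write but normal users can only read, as long as the files and
directories underneath aren't group/world-writable either.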

-- Pasi

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To post to this group, send email to open-iscsi@googlegroups.com
To unsubscribe from this group, send email to 
open-iscsi+unsubscr...@googlegroups.com
For more options, visit this group at http://groups.google.com/group/open-iscsi
-~--~~~~--~~--~--~---



Re: Linux iscsi performance, multipath issue

2008-12-11 Thread Pasi Kärkkäinen

On Mon, Dec 08, 2008 at 12:45:11PM -0800, Kmec wrote:
> 
> 
> >
> > > IIRC IOmeter for Linux had some issues.. related to queue depth maybe? So
> > > you should use other tools than IOmeter on Linux. Dunno if that problem is
> > > already fixed or if there is a patch available for IOmeter for Linux..
> >
> > Oh, and please try using 'noop' elevator/scheduler on your iSCSI disks..
> > that might help with the performance.
> >
> 
> noop scheduler didn't help at all :-(
> 
> What is curious that we can't get more than 62 MBps from single NIC :-
> (
> 

Earlier you said you got 90 MBps from a single NIC.. 

How did you measure this? Please paste the commands used.. 

Did you verify both the paths are connected to different interface on EQL
array? You can check this with "iscsiadm -m session -P3"

Another question: Which MPIO load balance policy did you use in Windows?
(round-robin, weighted path, least queue depth?)

-- Pasi

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To post to this group, send email to open-iscsi@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/open-iscsi
-~--~~~~--~~--~--~---



Re: Frequent Connection Errors with Dell Equallogic RAID

2008-12-10 Thread Pasi Kärkkäinen

On Wed, Dec 10, 2008 at 12:45:04PM +0100, Ulrich Windl wrote:
> 
> On 8 Dec 2008 at 17:52, Pasi Kärkkäinen wrote:
> 
> > iscsiadm -m session -P3
> > 
> 
> Hi,
> 
> being curious: My version if iscsiadm (version 2.0-754) doesn't know about 
> the "-
> P3". What ist it expected to do?
> 

It just prints more information.. details.

-- Pasi

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To post to this group, send email to open-iscsi@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/open-iscsi
-~--~~~~--~~--~--~---



Re: Frequent Connection Errors with Dell Equallogic RAID

2008-12-08 Thread Pasi Kärkkäinen

On Mon, Dec 08, 2008 at 02:57:44PM -0500, Evan Broder wrote:
> 
> Mike Christie wrote:
> > Evan Broder wrote:
> >> A group I work with is currently using a Dell Equallogic RAID with
> >> four servers on a dedicated storage network. We've been regularly
> >> experiencing connection errors:
> >>
> >> Dec  8 00:50:36 aperture-science kernel: [1010621.595904]
> >> connection1:0: iscsi: detected conn error (1011)
> >> Dec  8 00:50:37 aperture-science iscsid: Kernel reported iSCSI
> >> connection 1:0 error (1011) state (3)
> >> Dec  8 00:50:39 aperture-science iscsid: Login authentication failed
> >> with target iqn.2001-05.com.equallogic:
> >
> > Are you guys using CHAP at this time?
> 
> Yeah. Our storage network is completely isolated, so we thought about
> trying to disable CHAP, but we couldn't find an option to turn it off in
> the Equallogic config.
> 

In your volume properties, add access with just the IP address. Remove any
username access entries.

I'm using EQL volumes without CHAP.

-- Pasi

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To post to this group, send email to open-iscsi@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/open-iscsi
-~--~~~~--~~--~--~---



Re: Frequent Connection Errors with Dell Equallogic RAID

2008-12-08 Thread Pasi Kärkkäinen

On Mon, Dec 08, 2008 at 05:48:22AM -0800, Evan Broder wrote:
> 
> A group I work with is currently using a Dell Equallogic RAID with
> four servers on a dedicated storage network. We've been regularly
> experiencing connection errors:
> 
> Dec  8 00:50:36 aperture-science kernel: [1010621.595904]
> connection1:0: iscsi: detected conn error (1011)
> Dec  8 00:50:37 aperture-science iscsid: Kernel reported iSCSI
> connection 1:0 error (1011) state (3)
> Dec  8 00:50:39 aperture-science iscsid: Login authentication failed
> with target iqn.2001-05.com.equallogic:
> 0-8a0906-2b6e7d402-891497db5ca48925-xvm-volume-1
> Dec  8 00:50:39 aperture-science kernel: [1010624.597349] iscsi: host
> reset succeeded
> Dec  8 00:50:40 aperture-science iscsid: connection1:0 is operational
> after recovery (1 attempts)
> 
> These errors always occur about 40 seconds after a multiple of 5
> minutes after the hour. Other than that, we've seen no pattern in when
> they occur, or their cause. We've tried running some scripts designed
> to heavily utilize storage devices, but that doesn't seem to trigger
> it. These errors occur on all four of our servers, but not at the same
> time.
> 
> The RAID is being used as a physical volume for an LVM volume
> group. The servers are being used for hosting Xen virtual machines,
> which use LVs on the RAID as their disk images. Shortly before the
> connection errors are logged, all disk I/O from the virtual machines
> hangs completely, causing the VMs to basically become non-responsive
> to non-trivial interaction.
> 
> Our four servers are running the stock Ubuntu Hardy 2.6.24 kernel. I
> don't see an explicit version number in any of the iSCSI kernel
> source, but drivers/scsi/scsi_transport_iscsi.c contains:
> > > #define ISCSI_TRANSPORT_VERSION "2.0-724"
> 
> The userspace utilities are 2.0.865.
> 
> Does anyone know how to stop these errors? Is there more diagnostic
> information we could provide? We're way out of our league in terms of
> debugging this.
> 

Hmm.. wondering if those are related to automatic connection load balancing
on Equallogic arrays. 

Maybe check with iscsiadm if the connected interface (on the EQL array)
changes after those errors:

iscsiadm -m session -P3

And check the 'Current Portal' and 'Persistent Portal' values.. 

Persistent Portal should be your EQL group IP address, and Current Portal
should be whatever interface you're happening to use atm.. 
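
In the -P3 output the interesting lines look roughly like this (the addresses
are placeholders):

    Current Portal: <IP of the EQL eth port currently serving the session>:3260,1
    Persistent Portal: <EQL group IP>:3260,1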

-- Pasi

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To post to this group, send email to open-iscsi@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/open-iscsi
-~--~~~~--~~--~--~---



Re: Linux iscsi performance, multipath issue

2008-12-08 Thread Pasi Kärkkäinen

On Mon, Dec 08, 2008 at 02:17:45PM +0200, Pasi Kärkkäinen wrote:
> 
> On Sun, Dec 07, 2008 at 11:01:46AM -0800, Kmec wrote:
> > 
> > Hi,
> > I would like to ask for help with some strange behavior of linux
> > iscsi. Situation is as follows: iSCSI SAN Dell Equallogic, SAS 10k RPM
> > drives, 4x Broadcom NIC or 4x Intel NIC in Dell R900 server (24 cores,
> > 64 GB RAM). It's testing environment where we are trying to measure
> > SAN Dell EQL performance.
> > 
> > Totally we are solving 2 different issues:
> > 1) In case of running IOmeter test on server running Windows 2008
> > server, we are able to get 112 MBps read and 110 MBps write over 1NIC,
> > 220 MBps read and 210 MBps over 2 NICs. On SuSE 10 or Centos 66 MBps
> > read and 38 MBps write only over one NIC. So I think we can forget
> > about finding issues on SAN or switch. Strange is, that with dd or
> > hdparm we can get wirespeed. Question is what to do to get same
> > numbers from IOmeter on Windows and Linux. We also tried dt tool and
> > we get same results as from IOmeter.
> > How to continue?
> >
> 
> IIRC IOmeter for Linux had some issues.. related to queue depth maybe? So
> you should use other tools than IOmeter on Linux. Dunno if that problem is
> already fixed or if there is a patch available for IOmeter for Linux..
>  

Oh, and please try using 'noop' elevator/scheduler on your iSCSI disks..
that might help with the performance. 
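
The scheduler can be switched per device at runtime, for example (sdb is only
an example device name here):

# cat /sys/block/sdb/queue/scheduler
noop anticipatory deadline [cfq]
# echo noop > /sys/block/sdb/queue/scheduler

or for all devices with the elevator=noop kernel boot parameter.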

-- Pasi

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To post to this group, send email to open-iscsi@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/open-iscsi
-~--~~~~--~~--~--~---



Re: Linux iscsi performance, multipath issue

2008-12-08 Thread Pasi Kärkkäinen

On Sun, Dec 07, 2008 at 11:01:46AM -0800, Kmec wrote:
> 
> Hi,
> I would like to ask for help with some strange behavior of linux
> iscsi. Situation is as follows: iSCSI SAN Dell Equallogic, SAS 10k RPM
> drives, 4x Broadcom NIC or 4x Intel NIC in Dell R900 server (24 cores,
> 64 GB RAM). It's testing environment where we are trying to measure
> SAN Dell EQL performance.
> 
> Totally we are solving 2 different issues:
> 1) In case of running IOmeter test on server running Windows 2008
> server, we are able to get 112 MBps read and 110 MBps write over 1NIC,
> 220 MBps read and 210 MBps over 2 NICs. On SuSE 10 or Centos 66 MBps
> read and 38 MBps write only over one NIC. So I think we can forget
> about finding issues on SAN or switch. Strange is, that with dd or
> hdparm we can get wirespeed. Question is what to do to get same
> numbers from IOmeter on Windows and Linux. We also tried dt tool and
> we get same results as from IOmeter.
> How to continue?
>

IIRC IOmeter for Linux had some issues.. related to queue depth maybe? So
you should use other tools than IOmeter on Linux. Dunno if that problem is
already fixed or if there is a patch available for IOmeter for Linux..
 
> 2) our next problem is multipath. When we configure multipath, over
> one NIC with dd we get 90 MBps read, but over 2 NICs just 80 MBps what
> is strange. On switch and SAN we see that data flow is over both NICs,
> but dd shows still 80 MBps.
> 
> I will appreciate any suggestion.
> 

Hmm.. what kind of path selector are you using on Windows to split the IOs
between paths? How about on Linux?

-- Pasi

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To post to this group, send email to open-iscsi@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/open-iscsi
-~--~~~~--~~--~--~---



Re: Extremely slow read performance, but write speeds are near-perfect!

2008-11-25 Thread Pasi Kärkkäinen

On Fri, Nov 21, 2008 at 12:16:05PM -0800, Heady wrote:
> 
> Folks,
> 
> I've been struggling with a similar problem for a while now.  My write
> speeds are around 110M/s whereas, even following A. Eijkhoudt's advice
> I've only been able to get 34M/s reads.
> 
> * The initiator is Open-iSCSI running on the Xen client.  The exported
> targets are then S/W RAID1 and the resulting md1 is then divided using
> client side LVM2 presenting the LVs to the client.
> 
> So the software stack is quite layered.  However, this doesn't seem to
> be a problem for writes (110M/s) just reads (30-34M/s).
> 

Are you using partitions on the client iSCSI devices? Are your partitions
aligned, for example, to a 64k boundary? Have you tried without partitions?

I'd measure with the raw iSCSI device (/dev/sdX) first, and then start
adding additional layers (md-raid, lvm, etc).
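
For example something like this (sdX and the sizes are only examples) gives a
rough raw-device read number with the page cache out of the way:

# echo 3 > /proc/sys/vm/drop_caches
# dd if=/dev/sdX of=/dev/null bs=1M count=1000

Then repeat the same test on top of md and LVM to see where the read speed
drops.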

-- Pasi

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To post to this group, send email to open-iscsi@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/open-iscsi
-~--~~~~--~~--~--~---



Re: Fwparam tool limitation (multiple session at boot time.)

2008-10-14 Thread Pasi Kärkkäinen

On Mon, Oct 13, 2008 at 11:04:02AM -0400, Konrad Rzeszutek wrote:
> 
> On Mon, Oct 13, 2008 at 05:52:27PM +0300, Pasi Kärkkäinen wrote:
> > 
> > On Mon, Oct 13, 2008 at 10:39:32AM -0400, Konrad Rzeszutek wrote:
> > > 
> > > On Tue, Oct 07, 2008 at 07:48:58PM +0530, [EMAIL PROTECTED] wrote:
> > > > 
> > > > The open-iscsi fwparam tool does not connect through all the initiators
> > > > in the initiator structure exported by the iBFT.
> > > > 
> > > > I ask this because though the Nic fw can connect twice to the same
> > > > target using both the initiators(Dual port card) but then the OS makes
> > > > only one connection with the last(I hope I got this right) initiator
> > > > with the Firmware boot selected flag set.
> > > > 
> > > > Now, I would think we should have both ports(dual port NIC) with an iqn
> > > > should be able to connect to a target portal and thus have two sessions
> > > > to the same target portal.
> > > 
> > > I think you are asking two questions here:
> > > 
> > > 1). Should we connect to all portals listed in the iBFT irregardless if 
> > > some
> > >   of the flag.
> > > 
> > > 2). Should we connect to the portals on both ports.
> > >
> > 
> > For example Equallogic iSCSI storage only has a single portal that you log
> > into.. to the same portal from all ports/NICs (if using multipathing). 
> 
> Right. During discovery we learn of that - and I believe that the code picks 
> the target portal
> IP, and then does a discovery and logs on the IPs associated with the target 
> portal.
> 
> This meaning it should work fine with your setup.
> 
> > 
> > That portal then does loadbalancing/redirection to some real interface.. 
> 
> Are we talking about the same issue that Shyam raised?
>

Good question.. I was tired when I wrote that :)

Anyway, I guess my point was that you might have the same target portal (IP)
for both ports. Some storage has different target portal/IP per controller,
and others do not.. 

-- Pasi

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To post to this group, send email to open-iscsi@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/open-iscsi
-~--~~~~--~~--~--~---



Re: Fwparam tool limitation (multiple session at boot time.)

2008-10-13 Thread Pasi Kärkkäinen

On Mon, Oct 13, 2008 at 10:39:32AM -0400, Konrad Rzeszutek wrote:
> 
> On Tue, Oct 07, 2008 at 07:48:58PM +0530, [EMAIL PROTECTED] wrote:
> > 
> > The open-iscsi fwparam tool does not connect through all the initiators
> > in the initiator structure exported by the iBFT.
> > 
> > I ask this because though the Nic fw can connect twice to the same
> > target using both the initiators(Dual port card) but then the OS makes
> > only one connection with the last(I hope I got this right) initiator
> > with the Firmware boot selected flag set.
> > 
> > Now, I would think we should have both ports(dual port NIC) with an iqn
> > should be able to connect to a target portal and thus have two sessions
> > to the same target portal.
> 
> I think you are asking two questions here:
> 
> 1). Should we connect to all portals listed in the iBFT irregardless if some
>   of the flag.
> 
> 2). Should we connect to the portals on both ports.
>

For example, Equallogic iSCSI storage only has a single portal that you log
into.. you log in to the same portal from all ports/NICs (if using multipathing).

That portal then does load balancing/redirection to some real interface.. 

-- Pasi

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To post to this group, send email to open-iscsi@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/open-iscsi
-~--~~~~--~~--~--~---



Re: Correct qisioctl module version for RHEL 5.2 kernel qla4xxx driver

2008-09-11 Thread Pasi Kärkkäinen

On Wed, Jul 09, 2008 at 08:34:27AM -0400, [EMAIL PROTECTED] wrote:
> 
> Hi Pasi.
> 
>   The correct IOCTL module should be included in the latest SANsurfer 
> package. 
> 

Thanks! This seems to be the case..

I'm successfully using Qlogic iSCSI SANsurfer v5.00.32 on RHEL 5.2, using
only the default in-kernel qla4xxx driver. I haven't downloaded or used any
drivers from Qlogic.  

-- Pasi

> Regards,
> Wayne.
> 
> -Original Message-
> From: open-iscsi@googlegroups.com [mailto:[EMAIL PROTECTED] On Behalf Of Pasi 
> Kärkkäinen
> Sent: Wednesday, July 09, 2008 5:46 AM
> To: open-iscsi@googlegroups.com
> Cc: [EMAIL PROTECTED]
> Subject: Correct qisioctl module version for RHEL 5.2 kernel qla4xxx driver
> 
> 
> Hello!
> 
> What's the correct/recommended qisioctl module version for RHEL 5.2 default
> in-kernel qla4xxx driver? 
> 
> I'd like to use Qlogic iSCSI Sansurfer.. 
> 
> -- Pasi
> 

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To post to this group, send email to open-iscsi@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/open-iscsi
-~--~~~~--~~--~--~---



Correct qisioctl module version for RHEL 5.2 kernel qla4xxx driver

2008-07-09 Thread Pasi Kärkkäinen

Hello!

What's the correct/recommended qisioctl module version for RHEL 5.2 default
in-kernel qla4xxx driver? 

I'd like to use Qlogic iSCSI Sansurfer.. 

-- Pasi

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To post to this group, send email to open-iscsi@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/open-iscsi
-~--~~~~--~~--~--~---



Re: Connection Errors

2008-07-04 Thread Pasi Kärkkäinen

On Thu, Jul 03, 2008 at 02:33:02PM -0700, swejis wrote:
> 
> > > Looks like a tcp-reset was received ? Would it help if I start the
> > > initiator by hand and increased debug-level ?
> >
> > No. It looks like the target dropped the connection on us. Did you see
> > anything on the target logs?
> 
> I am afraid not, this target unfortunately leaves much to wish for, a
> good log-function being one (jumboframes another). The only log
> facility I am aware of is the so called eventlogger, only giving
> information on major events such as disk failures etc. I am certain
> however a more sophisticated log-function does exist but is kept
> hidden from us mortals. Perhaps if Pasi reads this knows something
> that I do not (in the matter hehe..) ? 

Hi!

I'm afraid I don't know more about that.. 

My general impression of this target (Promise VTrak M500i) is that it sucks a
lot. I've seen a lot of target crashes, failing firmware updates, missing
(unimplemented) features in the menus, especially in the cmdline, missing
links in the web management (mentioned in the docs but nowhere in the actual
GUI) and a general feeling of "unstable and not finished".. not to mention bad
customer support. 

It's the same as always.. good and cheap come in different packages :(

-- Pasi

> Log entrys on the initiator
> side before and after the iscsi-error reveals the error occurred
> during backup which most likely is the I/O peak. No error was however
> reported by the backup routine nor have I found any data inconsistent.
> Probably yet another stupid question, but how is that possible ? Is a
> modern filesystem (I use XFS) able to recover or are the program told
> by the initiator to wait until it's recover have completed ?
> 
> Last night backup succeeded with no errors reported.
> 
> Furthermore I would also like (as others) to thank you Mike for
> spending time to help someone like me. It is greatly appreciated and
> widely spread among those in doubt of the open source philosophy.
> 
> Brgds Jonas

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To post to this group, send email to open-iscsi@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/open-iscsi
-~--~~~~--~~--~--~---



Re: Connection Errors

2008-05-29 Thread Pasi Kärkkäinen

On Thu, May 29, 2008 at 03:34:08AM -0700, swejis wrote:
> 
> Thanks Pasi, I saw your post a couple of days ago. Perhaps you could
> post some Target-configuration to compare ?
>

It's pretty much the standard or "out of the box" configuration with a
couple of LD's defined.. I'm running the latest Promise firmware. 

I have two disk arrays, both having a single LD (so two LD's total).

This LD having problems is RAID5 with 5 drives in use (and a hotspare).

I'm not sure if the other LD has problems too.. It's being used by 
Qlogic iSCSI HBA so I'm not sure if there are problems or not.

-- Pasi

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To post to this group, send email to open-iscsi@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/open-iscsi
-~--~~~~--~~--~--~---



Re: Connection Errors

2008-05-29 Thread Pasi Kärkkäinen

On Thu, May 29, 2008 at 01:06:45AM -0700, swejis wrote:
> 
> I spent some time this morning trying to find more evidence. As said
> only one connection seem to suffer of those connections errors. What
> also said however is that only connections with I/O suffer. One thing
> bothered me for some time is that the /dev/sdX devices "move around"
> when restarting the initiator. To settle this once and for all I read
> thought quite a few posts this morning until I finally found one with
> a solution. I was unaware of the /dev/disk/by-path devices. By instead
> using one of those devices I mounted one lun pointing to the other
> connection, and now I see both connection report errors.
> 
> tcp: [3] 192.168.43.6:3260,2 iqn.
> 1994-12.com.promise.target.a9.39.4.55.1.0.0.20
> tcp: [4] 192.168.43.5:3260,1 iqn.
> 1994-12.com.promise.target.a9.39.4.55.1.0.0.20
> 
> 
> May 29 09:36:32 manjula klogd:  connection3:0: detected conn error
> (1011)
> May 29 09:36:33 manjula iscsid: Kernel reported iSCSI connection 3:0
> error (1011) state (3)
> May 29 09:48:37 manjula klogd:  connection4:0: detected conn error
> (1011)
> May 29 09:48:38 manjula iscsid: Kernel reported iSCSI connection 4:0
> error (1011) state (3)
> May 29 09:49:11 manjula klogd:  connection4:0: detected conn error
> (1011)
> May 29 09:49:12 manjula iscsid: Kernel reported iSCSI connection 4:0
> error (1011) state (3)
> 

Hi!

I'm also (unfortunately) running a Promise M500i on one setup, and I've had
problems with it basically all the time. I should just get rid of it and replace
it with something more stable and powerful.

You can check this thread for more info about my problems with this target:
"open-iscsi with Promise M500i dropping session / Nop-out timedout"

Basically I'm seeing "Nop-out timedout" and "Session dropped" errors whenever
there is IO going on..

-- Pasi

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"open-iscsi" group.
To post to this group, send email to open-iscsi@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/open-iscsi
-~--~~~~--~~--~--~---


