Re: openiscsi 10gbe network

2009-11-25 Thread Pasi Kärkkäinen
On Tue, Nov 24, 2009 at 08:07:12AM -0800, Chris K. wrote:
 Hello,
 I'm writing in regards to the performance with open-iscsi on a
 10gbe network. On your website you posted performance results
 indicating you reached read and write speeds of 450 MegaBytes per
 second.
 
 In our environment we use Myricom dual channel 10gbe network cards on
 a gentoo linux system connected via fiber to a 10gbe interfaced SAN
 with a raid 0 volume mounted with 4 15000rpm SAS drives.
 Unfortunately, the maximum speed we are achieving is 94 MB/s. We do
 know that the network interfaces can stream data at 822 MB/s (results
 obtained with netperf). We know that local read performance on the
 disks is 480 MB/s. When using netcat or a direct tcp/ip connection we
 get speeds in this range; however, when we connect a volume via the
 iscsi protocol using the open-iscsi initiator we drop to 94 MB/s
 (best result, obtained with bonnie++ and dd).


What block size are you using with dd? 
Try: dd if=/dev/foo of=/dev/null bs=1024k count=32768

How's the CPU usage on both the target and the initiator when you run
that? Is there iowait?

Did you try with nullio LUN from the target?

-- Pasi

--

You received this message because you are subscribed to the Google Groups 
open-iscsi group.
To post to this group, send email to open-is...@googlegroups.com.
To unsubscribe from this group, send email to 
open-iscsi+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/open-iscsi?hl=en.




Re: iSCSI latency issue

2009-11-25 Thread Vladislav Bolkhovitin
Shachar f, on 11/25/2009 07:57 PM wrote:
 I'm running open-iscsi with scst on Broadcom 10Gig network and facing 
 write latency issues.
 When using netperf over an idle network the latency for a single block 
 round trip transfer is 30 usec and with open-iscsi it is 90-100 usec.
  
 I see that Nagle is disabled (TCP_NODELAY is set) when opening the socket 
 on the initiator side, and I'm not sure about the target side. 
 Vlad, can you elaborate on this?

TCP_NODELAY is always enabled in iSCSI-SCST. You can at any time have 
latency statistics on the target side by enabling 
CONFIG_SCST_MEASURE_LATENCY (see README). Better also enable 
CONFIG_PREEMPT_NONE to not count CPU scheduler latency.

 Are others on the mailing list aware of possible environment changes 
 that affect latency?
  
 more info -
 I'm running this test on CentOS 5.3 machines with a nearly-latest open-iscsi.
  
 Thanks,
   Shachar





Re: openiscsi 10gbe network

2009-11-25 Thread Chris K.
The dd command I am running is: time dd if=/dev/zero bs=1024k
of=/mnt/iscsi/10gfile.txt count=10240
My fs is xfs (mkfs.xfs -d agcount=8 -l internal,size=128m -n size=8k
-i size=2048 /dev/sdb1 -f); those are the parameters used to format
the drive.

Here are the top values: Cpu(s):  0.0%us,  6.1%sy,  0.0%ni, 25.0%id,
67.2%wa,  0.1%hi,  1.7%si,  0.0%st

I have not tried nullio LUN from target. I'm not sure how to go about
it actually...
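
For reference, nullio serves IO from memory instead of disk, which isolates the network/iscsi path from the storage. If the target really is IET (iscsitarget), a nullio LUN is normally declared in ietd.conf on the target side -- a sketch, with a made-up target name and sector count:

```
# /etc/ietd.conf fragment (IET/iscsitarget) -- hypothetical target name.
# Type=nullio discards writes and returns zeros on reads;
# Sectors sets the advertised size (20971520 x 512 B = 10 GB).
Target iqn.2009-11.example:nullio
    Lun 0 Sectors=20971520,Type=nullio
```

After restarting ietd, log in to that target from the initiator and rerun the dd test against it.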

Thanks for your help !

On Nov 25, 5:04 am, Pasi Kärkkäinen pa...@iki.fi wrote:
 On Tue, Nov 24, 2009 at 08:07:12AM -0800, Chris K. wrote:
  Hello,
      I'm writing in regards to the performance with open-iscsi on a
  10gbe network. On your website you posted performance results
  indicating you reached read and write speeds of 450 MegaBytes per
  second.

  In our environment we use Myricom dual channel 10gbe network cards on
  a gentoo linux system connected via fiber to a 10gbe interfaced SAN
  with a raid 0 volume mounted with 4 15000rpm SAS drives.
  Unfortunately, the maximum speed we are achieving is 94 MB/s. We do
  know that the network interfaces can stream data at 822 MB/s (results
  obtained with netperf). We know that local read performance on the
  disks is 480 MB/s. When using netcat or a direct tcp/ip connection we
  get speeds in this range; however, when we connect a volume via the
  iscsi protocol using the open-iscsi initiator we drop to 94 MB/s
  (best result, obtained with bonnie++ and dd).

 What block size are you using with dd?
 Try: dd if=/dev/foo of=/dev/null bs=1024k count=32768

 How's the CPU usage on both the target and the initiator when you run
 that? Is there iowait?

 Did you try with nullio LUN from the target?

 -- Pasi





iSCSI latency issue

2009-11-25 Thread Shachar f
I'm running open-iscsi with scst on Broadcom 10Gig network and facing write
latency issues.
When using netperf over an idle network the latency for a single block round
trip transfer is 30 usec and with open-iscsi it is 90-100 usec.
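
As a side note on methodology: netperf's TCP_RR test reports request/response transactions per second, from which the per-round-trip latency follows -- a sketch, with a placeholder target address:

```shell
# Single-stream request/response test with 4 KB payloads each way.
# 192.168.1.10 is a placeholder for the target's address (a netserver
# instance must be running there).
netperf -H 192.168.1.10 -t TCP_RR -- -r 4096,4096
# Round-trip latency in usec is roughly 1000000 / (transactions per second).
```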

I see that Nagle is disabled (TCP_NODELAY is set) when opening the socket on the
initiator side, and I'm not sure about the target side.
Vlad, can you elaborate on this?

Are others on the mailing list aware of possible environment changes that
affect latency?

more info -
I'm running this test on CentOS 5.3 machines with a nearly-latest open-iscsi.

Thanks,
  Shachar





Re: openiscsi 10gbe network

2009-11-25 Thread Chris K.
Thank you for your response. The SAN is a 10gbe Nimbus with what I
believe to be iscsitarget (http://iscsitarget.sourceforge.net/) as its
target server.
The switch is a Cisco Nexus5010 set to jumbo frames and flow control.
Through tcp/ip performance tests run in conjunction with Cisco, we
have proved that this works. Furthermore, using netcat and dd
conjointly we have achieved speeds around 200MB/s. This is far from
the 822MB/s shown in our testing with netperf and Cisco's performance
tests, but it is way above what we are getting with iscsi at 94MB/s,
which is technically gigabit speed, not 10gbe speed.

I am not familiar with the no-op io scheduler. Where exactly is this
set, and what are its implications?

Thank you once again for your help.

On Wed, Nov 25, 2009 at 4:11 AM, Boaz Harrosh bharr...@panasas.com wrote:
 On 11/24/2009 06:07 PM, Chris K. wrote:
 Hello,
     I'm writing in regards to the performance with open-iscsi on a
 10gbe network. On your website you posted performance results
 indicating you reached read and write speeds of 450 MegaBytes per
 second.

 In our environment we use Myricom dual channel 10gbe network cards on
 a gentoo linux system connected via fiber to a 10gbe interfaced SAN
 with a raid 0 volume mounted with 4 15000rpm SAS drives.

 That is the iscsi-target machine, right?
 What is the SW environment of the initiator box?

 Unfortunately, the maximum speed we are achieving is 94 MB/s. We do
 know that the network interfaces can stream data at 822 MB/s (results
 obtained with netperf). We know that local read performance on the
 disks is 480 MB/s. When using netcat or a direct tcp/ip connection we
 get speeds in this range; however, when we connect a volume via the
 iscsi protocol using the open-iscsi initiator we drop to 94 MB/s
 (best result, obtained with bonnie++ and dd).


 What iscsi target are you using?

 Mike, is it still best to use no-op-io-scheduler on initiator?

 Boaz
 We were wondering if you would have any recommendations in terms of
 configuring the initiator or perhaps the linux system to achieve
 higher throughput.
 We have also set the the interfaces on both ends to jumbo frames (mtu
 9000). We have also modified sysctl parameters to look as follows :

 net.core.rmem_max = 16777216
 net.core.wmem_max = 16777216
 net.ipv4.tcp_rmem = 4096 87380 16777216
 net.ipv4.tcp_wmem = 4096 65536 16777216
 net.core.netdev_max_backlog = 25

 Any help would greatly be appreciated,
 Thank you for your time and  your work.
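
As an aside, the sysctl settings quoted above can also be applied and checked at runtime -- a sketch mirroring those values (run as root):

```shell
# Apply the TCP buffer tuning quoted above without editing sysctl.conf.
sysctl -w net.core.rmem_max=16777216
sysctl -w net.core.wmem_max=16777216
sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"
# Verify what is actually in effect:
sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem
```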







Re: openiscsi 10gbe network

2009-11-25 Thread Chris K.

Here is the dd command: time dd if=/dev/zero bs=1024k
of=/mnt/iscsi/10gfile.txt count=10240

Here are the cpu values :
Cpu(s):  0.0%us,  8.7%sy,  0.0%ni, 25.0%id, 64.0%wa,  0.4%hi,
1.9%si,  0.0%st - Client
Cpu(s):  0.6%us,  2.8%sy,  0.0%ni, 86.4%id,  9.7%wa,  0.0%hi,
0.4%si,  0.0%st - SAN

I have not tried the nullio LUN from the target... I'm not sure how to
go about this ...?

Thank you for your help.


On Nov 25, 5:04 am, Pasi Kärkkäinen pa...@iki.fi wrote:
 On Tue, Nov 24, 2009 at 08:07:12AM -0800, Chris K. wrote:
  Hello,
      I'm writing in regards to the performance with open-iscsi on a
  10gbe network. On your website you posted performance results
  indicating you reached read and write speeds of 450 MegaBytes per
  second.

  In our environment we use Myricom dual channel 10gbe network cards on
  a gentoo linux system connected via fiber to a 10gbe interfaced SAN
  with a raid 0 volume mounted with 4 15000rpm SAS drives.
  Unfortunately, the maximum speed we are achieving is 94 MB/s. We do
  know that the network interfaces can stream data at 822 MB/s (results
  obtained with netperf). We know that local read performance on the
  disks is 480 MB/s. When using netcat or a direct tcp/ip connection we
  get speeds in this range; however, when we connect a volume via the
  iscsi protocol using the open-iscsi initiator we drop to 94 MB/s
  (best result, obtained with bonnie++ and dd).

 What block size are you using with dd?
 Try: dd if=/dev/foo of=/dev/null bs=1024k count=32768

 How's the CPU usage on both the target and the initiator when you run
 that? Is there iowait?

 Did you try with nullio LUN from the target?

 -- Pasi





Re: !!!!Help: Problem when I login the iscsi hard disk

2009-11-25 Thread Mike Christie
Ricky wrote:
 sda: got wrong page

You mean this right? The linux scsi layer was trying to figure out the 
cache type. It got an unexpected answer and so ...


 sda: assuming drive cache: write through

it used the default of write through cache.


 sd 6:0:0:0: Attached scsi disk sda
 sd 6:0:0:0: Attached scsi generic sg0 type 0
 
 You have new mail in /var/spool/mail/root
 





Re: [Patch 1/2] iscsiadm: login_portal() misses outputting logs for iscsid_req_by_rec()

2009-11-25 Thread Mike Christie
Yangkook Kim wrote:
 Thanks for your patch.
 
 I tested your patch and it worked fine.
 
 So, next you will upload this patch to the git tree
 and the patch will become the part of source code
 in the next release of open-iscsi.
 
 Is my understanding correct?

Yeah.

I merged it and uploaded it to the git tree. The commit id is 
fb4f2d3072bee96606d01e3535c100dc99b8d331. It can take a couple of hours 
to show up (the disks have to get synced up or something), so you should 
see it shortly.

When I make the next release it will be included.

 
 I am asking this question because I just want to know
 the normal development process of this and other
 linux project.

No problem.

 
 
 2009/11/24, Mike Christie micha...@cs.wisc.edu:
 Mike Christie wrote:
 Yangkook Kim wrote:
 Hi, Mike. Thank you for your patch.

 I do not want to add a login log message to the iscsid_req_* functions
 because they are generic and could be used for any operation.
 Yes, that's exactly the right idea. That should be better than my patch.

 I tried your patch, but that still does not output login-success
 message when calling
 iscsid_req_by_rec.

 It seems that log_login_msg() would not be called in either
 login_portal() or
 iscsid_logout_reqs_wait() when iscsid_req_by_rec returns success.

 I probably missed something. I will look at it tomorrow again.

 Nope. You are right. Nice catch. I messed up. I was only concentrating
 on the error paths. I will fix up my patch and resend. Thanks.


 Here is a corrected patch.






Re: openiscsi 10gbe network

2009-11-25 Thread Mike Christie
Boaz Harrosh wrote:
 On 11/24/2009 06:07 PM, Chris K. wrote:
 Hello,
 I'm writing in regards to the performance with open-iscsi on a
 10gbe network. On your website you posted performance results
 indicating you reached read and write speeds of 450 MegaBytes per
 second.

 In our environment we use Myricom dual channel 10gbe network cards on
 a gentoo linux system connected via fiber to a 10gbe interfaced SAN
 with a raid 0 volume mounted with 4 15000rpm SAS drives.
 
 That is the iscsi-target machine, right?
 What is the SW environment of the initiator box?
 
 Unfortunately, the maximum speed we are achieving is 94 MB/s. We do
 know that the network interfaces can stream data at 822 MB/s (results
 obtained with netperf). We know that local read performance on the
 disks is 480 MB/s. When using netcat or a direct tcp/ip connection we
 get speeds in this range; however, when we connect a volume via the
 iscsi protocol using the open-iscsi initiator we drop to 94 MB/s
 (best result, obtained with bonnie++ and dd).

 
 What iscsi target are you using?
 
 Mike, is it still best to use no-op-io-scheduler on initiator?
 

Sometimes.

Chris, try doing

echo noop > /sys/block/sdXYZ/queue/scheduler

Then rerun your tests.

For your tests you might want something that can do more IO. If you
can, try disktest or fio, or even run multiple dds at the same time.
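
One possible fio invocation that keeps more IO in flight than a single dd -- a sketch, with a placeholder device name:

```shell
# Sequential reads, 1 MB blocks, 16 outstanding requests via libaio.
# /dev/sdb is a placeholder for the iscsi device; this reads it directly,
# bypassing the page cache (--direct=1).
fio --name=seqread --filename=/dev/sdb --rw=read --bs=1M \
    --direct=1 --ioengine=libaio --iodepth=16 \
    --runtime=60 --time_based
```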

Also what is the output of

iscsiadm -m session -P 3





Re: !!!!Help: Problem when I login the iscsi hard disk

2009-11-25 Thread Ruiqiang FU
This information also comes out when I fdisk /dev/sda, and I cannot
mkfs this disk.
I think the cache type should be write back,
but I do not know how to handle this situation.
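
One way to dig further, if sdparm is installed on the initiator: query the caching mode page directly and see what the target actually returns -- a sketch, assuming /dev/sda is the iscsi disk:

```shell
# Read the Write Cache Enable (WCE) bit from the caching mode page.
sdparm --get=WCE /dev/sda
# Dump the whole caching ("ca") mode page, if the target supplies one.
sdparm --page=ca /dev/sda
```

If the target never returns a valid caching mode page, the "got wrong page" message comes from the target's response, not from the initiator's configuration.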



2009/11/26 Mike Christie micha...@cs.wisc.edu

 Ricky wrote:
  sda: got wrong page

 You mean this right? The linux scsi layer was trying to figure out the
 cache type. It got an unexpected answer and so ...


  sda: assuming drive cache: write through

 it used the default of write through cache.


  sd 6:0:0:0: Attached scsi disk sda
  sd 6:0:0:0: Attached scsi generic sg0 type 0
 
  You have new mail in /var/spool/mail/root
 









Re: [Scst-devel] iSCSI latency issue

2009-11-25 Thread Bart Van Assche
On Wed, Nov 25, 2009 at 5:57 PM, Shachar f shacharf...@gmail.com wrote:
 I'm running open-iscsi with scst on Broadcom 10Gig network and facing write
 latency issues.
 When using netperf over an idle network the latency for a single block round
 trip transfer is 30 usec and with open-iscsi it is 90-100 usec.

 I see that Nagle is disabled (TCP_NODELAY is set) when opening the socket on the
 initiator side and I'm not sure about the target side.
 Vlad, can you elaborate on this?

 Are others on the mailing list aware of possible environment changes that
 affect latency?

 more info -
 I'm running this test on CentOS 5.3 machines with a nearly-latest open-iscsi.

Please make sure that interrupt coalescing has been disabled -- see
also ethtool -c.
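
For example (interface name assumed to be eth2; which parameters are supported varies by driver):

```shell
# Show current interrupt coalescing settings.
ethtool -c eth2
# Disable adaptive coalescing and interrupt once per received frame.
ethtool -C eth2 adaptive-rx off rx-usecs 0 rx-frames 1
```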

Bart.





Re: openiscsi 10gbe network

2009-11-25 Thread Ulrich Windl
On 25 Nov 2009 at 14:15, Chris K. wrote:

 Here are the cpu values :
 Cpu(s):  0.0%us,  8.7%sy,  0.0%ni, 25.0%id, 64.0%wa,  0.4%hi,

A note: I don't know how well open-iscsi uses multiple threads, but looking at
individual CPUs may be interesting, as the above is only an average across
multiple CPUs. Press '1' in top to switch to the individual-CPU display.
Hope you don't have too many cores ;-)

Here's an example of the two displays:

Cpu(s): 23.0%us,  1.2%sy,  0.0%ni, 73.8%id,  1.9%wa,  0.0%hi,  0.2%si,  0.0%st

Cpu0  :  4.2%us,  0.5%sy,  0.1%ni, 89.2%id,  5.6%wa,  0.1%hi,  0.3%si,  0.0%st
Cpu1  :  4.8%us,  0.5%sy,  0.1%ni, 94.0%id,  0.6%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu2  :  7.9%us,  0.7%sy,  0.0%ni, 90.7%id,  0.7%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu3  :  8.6%us,  0.7%sy,  0.0%ni, 90.2%id,  0.4%wa,  0.0%hi,  0.0%si,  0.0%st

Have fun!
Ulrich
