Re: Tuning iscsi read performance with multipath Redhat 5.3 / SLES 10 SP2 / Oracle Linux / Equallogic

2009-04-28 Thread Ulrich Windl

On 24 Apr 2009 at 16:06, Konrad Rzeszutek wrote:

 
 On Fri, Apr 24, 2009 at 02:14:43PM -0400, Donald Williams wrote:
  Have you tried increasing the disk readahead value?
   # blockdev --setra <value> /dev/<multipath device>

  The default is 256. Use --getra to see the current setting.

  Setting it too high will probably hurt your database performance,
  since databases tend to be random, not sequential.

 I would think that the databases would open the disks with O_DIRECT,
 bypassing the block cache (and hence the disk readahead value isn't
 used at all).

Hi,

first a silly question: shouldn't the read-ahead on the server be at
least as high as the setting on the client to provide any benefit?
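
A quick way to compare what the initiator side is actually using (a
sketch; /dev/dm-1 is only an example, substitute your own multipath
device):

  # read-ahead of the multipath device, in 512-byte sectors
  blockdev --getra /dev/dm-1

  # read-ahead of each underlying path
  for p in /sys/block/dm-1/slaves/*; do
      blockdev --getra /dev/$(basename $p)
  done

  # raise it on the multipath device only
  blockdev --setra 1024 /dev/dm-1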

And two interesting numbers: on one of our busy databases the read:write
ratio is about 10:1, and the tables are severely fragmented. On our
Linux servers using iSCSI the read:write ratio is about 1:10, because
the machines have several gigabytes of RAM and the disk caching is very
efficient. So the machine mostly just has to send out the writes...

Regards,
Ulrich





Re: Tuning iscsi read performance with multipath Redhat 5.3 / SLES 10 SP2 / Oracle Linux / Equallogic

2009-04-27 Thread ByteEnable

I'm not sure if you have seen this, but there is a guide from Dell on
this subject:

http://www.support.dell.com/support/edocs/software/appora10/lin_x86_64/multlang/EELinux_storage_4_1.pdf

Also, I would suggest the following changes in multipath.conf for RHEL5:

 device {
         vendor                 "EQLOGIC"
         product                "100E-00"
         getuid_callout         "/sbin/scsi_id -g -u -s /block/%n"
         hardware_handler       "0"
         path_selector          "round-robin 0"
         path_grouping_policy   multibus
         failback               immediate
         features               "1 queue_if_no_path"
         path_checker           tur
         rr_min_io              10
         rr_weight              uniform
 }
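
After changing multipath.conf, multipathd has to re-read it; roughly
(a sketch, the exact service invocation may differ between the distros
listed in the subject):

  /etc/init.d/multipathd reload
  # or via the interactive shell:
  multipathd -k"reconfigure"

  # then verify the path groups, selector and rr_min_io in effect:
  multipath -ll
  dmsetup table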

Byte



Re: Tuning iscsi read performance with multipath Redhat 5.3 / SLES 10 SP2 / Oracle Linux / Equallogic

2009-04-24 Thread jnantel

As an update:

New observed behavior:
- Raw disk read performance is phenomenal (200 MB/s)
- ext3 performance is 100 MB/s, and tps in iostat aren't going above
800 (50k with the raw disk).

Some added info:
- This system has an Oracle database on it, and it's tuned for huge
pages etc. (see the sysctl posted above)
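
Roughly how that comparison can be reproduced (a sketch; the device and
file names are placeholders):

  # raw multipath device, sequential read, page cache bypassed
  dd if=/dev/mapper/mpath0 of=/dev/null bs=1M count=4096 iflag=direct

  # the same amount of data through ext3
  dd if=/mnt/oradata/testfile of=/dev/null bs=1M count=4096 iflag=direct

  # watch tps and throughput while either one runs
  iostat -x 1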


On Apr 24, 12:07 pm, jnantel nan...@hotmail.com wrote:
 You may recall my thread on tuning write performance.  Now I am
 attempting to squeeze as much read performance as I can out of my
 current setup.  I've read a lot of the previous threads, and there has
 been mention of miracle settings that resolved slow reads vs.
 writes.  Unfortunately, most posts describe the effects and not the
 actual changes.  If I were tuning for read performance in the 4k to
 128k block range, what would be the best way to go about it?

 Observed behavior:
 - Read performance seems to be capped at 110 MB/s
 - Write performance gets upwards of 190 MB/s

 Tuning options I'll be trying (rough commands are sketched after the
 list):
 block alignment (stride)
 Receiving buffers
 multipath min io changes
 iscsi cmd depth
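
 Roughly what those knobs translate to (a sketch; the values are
 placeholders, not recommendations):

   # block alignment: stride = RAID chunk size / fs block size,
   # e.g. 64k chunk / 4k blocks = 16, set at mkfs time
   mkfs.ext3 -E stride=16 /dev/mapper/mpath0

   # receive buffers, via sysctl (see /etc/sysctl.conf below)
   sysctl -w net.core.rmem_max=524288

   # multipath min io: rr_min_io in the device{} stanza of multipath.conf

   # iscsi cmd depth, per node record
   iscsiadm -m node -o update -n node.session.cmds_max -v 1024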

 Hardware:
 2 x Cisco 3750 with 32 Gb interconnect
 2 x Dell R900 with 128 GB RAM, 1 quad-port Broadcom (5709) and 2
 dual-port Intels (Pro 1000/MT)
 2 x Dell Equallogic PS5000XV with 15 x SAS in RAID 10 config

 multipath.conf:

 device {
         vendor                 "EQLOGIC"
         product                "100E-00"
         path_grouping_policy   multibus
         getuid_callout         "/sbin/scsi_id -g -u -s /block/%n"
         features               "1 queue_if_no_path"
         path_checker           readsector0
         failback               immediate
         path_selector          "round-robin 0"
         rr_min_io              128
         rr_weight              priorities
 }

 iscsi settings:

 node.tpgt = 1
 node.startup = automatic
 iface.hwaddress = default
 iface.iscsi_ifacename = ieth10
 iface.net_ifacename = eth10
 iface.transport_name = tcp
 node.discovery_address = 10.1.253.10
 node.discovery_port = 3260
 node.discovery_type = send_targets
 node.session.initial_cmdsn = 0
 node.session.initial_login_retry_max = 4
 node.session.cmds_max = 1024
 node.session.queue_depth = 128
 node.session.auth.authmethod = None
 node.session.timeo.replacement_timeout = 120
 node.session.err_timeo.abort_timeout = 15
 node.session.err_timeo.lu_reset_timeout = 30
 node.session.err_timeo.host_reset_timeout = 60
 node.session.iscsi.FastAbort = Yes
 node.session.iscsi.InitialR2T = No
 node.session.iscsi.ImmediateData = Yes
 node.session.iscsi.FirstBurstLength = 262144
 node.session.iscsi.MaxBurstLength = 16776192
 node.session.iscsi.DefaultTime2Retain = 0
 node.session.iscsi.DefaultTime2Wait = 2
 node.session.iscsi.MaxConnections = 1
 node.session.iscsi.MaxOutstandingR2T = 1
 node.session.iscsi.ERL = 0
 node.conn[0].address = 10.1.253.10
 node.conn[0].port = 3260
 node.conn[0].startup = manual
 node.conn[0].tcp.window_size = 524288
 node.conn[0].tcp.type_of_service = 0
 node.conn[0].timeo.logout_timeout = 15
 node.conn[0].timeo.login_timeout = 15
 node.conn[0].timeo.auth_timeout = 45
 node.conn[0].timeo.noop_out_interval = 10
 node.conn[0].timeo.noop_out_timeout = 30
 node.conn[0].iscsi.MaxRecvDataSegmentLength = 262144
 node.conn[0].iscsi.HeaderDigest = None,CRC32C
 node.conn[0].iscsi.DataDigest = None
 node.conn[0].iscsi.IFMarker = No
 node.conn[0].iscsi.OFMarker = No
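
 (These records can be inspected and edited with iscsiadm, e.g., using
 the portal from above:

   iscsiadm -m node -p 10.1.253.10 -o show
   iscsiadm -m node -p 10.1.253.10 \
       -o update -n node.session.queue_depth -v 128

 followed by a session logout/login for changes to take effect.)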

 /etc/sysctl.conf

 net.core.rmem_default = 65536
 net.core.rmem_max = 2097152
 net.core.wmem_default = 65536
 net.core.wmem_max = 262144
 net.ipv4.tcp_mem = 98304 131072 196608
 net.ipv4.tcp_window_scaling = 1

 #
 # Additional options for Oracle database server
 #ORACLE
 kernel.panic = 2
 kernel.panic_on_oops = 1
 net.ipv4.ip_local_port_range = 1024 65000
 net.core.rmem_default=262144
 net.core.wmem_default=262144
 net.core.rmem_max=524288
 net.core.wmem_max=524288
 fs.aio-max-nr=524288
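
 (Note: sysctl.conf is applied top to bottom, so the Oracle block's
 rmem_max/wmem_max = 524288 silently override the 2097152 set higher
 up.  The effective values can be checked with:

   sysctl net.core.rmem_max net.core.wmem_max
 )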



Re: Tuning iscsi read performance with multipath Redhat 5.3 / SLES 10 SP2 / Oracle Linux / Equallogic

2009-04-24 Thread Konrad Rzeszutek

On Fri, Apr 24, 2009 at 02:14:43PM -0400, Donald Williams wrote:
 Have you tried increasing the disk readahead value?
  # blockdev --setra <value> /dev/<multipath device>

 The default is 256. Use --getra to see the current setting.

 Setting it too high will probably hurt your database performance,
 since databases tend to be random, not sequential.

I would think that the databases would open the disks with O_DIRECT,
bypassing the block cache (and hence the disk readahead value isn't
used at all).
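
That is easy to check from userspace: read the same device once through
the page cache and once with O_DIRECT, and see whether the readahead
setting makes any difference (a sketch; the device name is an example):

  # buffered read: readahead applies
  dd if=/dev/dm-1 of=/dev/null bs=1M count=1024

  # O_DIRECT read: page cache and readahead are bypassed
  dd if=/dev/dm-1 of=/dev/null bs=1M count=1024 iflag=direct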
