Hi all,

I'm having a performance problem with my iSCSI initiator over a bonding interface. Whenever I use bonding, my iSCSI performance decreases.

Here is my environment:
Sun X4150 server running Red Hat Enterprise Linux 5.5
EMC CX4-480 iSCSI target
The server's two onboard Gigabit Ethernet ports are connected to two different Cisco switches.

I mounted the iSCSI disk with the "sync" mount option and used dd to write ~512 MB of data to it.

When I use a single NIC I get the best performance, ~47 MB/s:
[r...@xentest mnt]# time dd if=/dev/zero of=sil bs=512K count=1024
1024+0 records in
1024+0 records out
536870912 bytes (537 MB) copied, 11.4042 seconds, 47.1 MB/s

real    0m11.409s
user    0m0.000s
sys     0m0.100s
[r...@xentest mnt]# time dd if=/dev/zero of=sil bs=512K count=1024
1024+0 records in
1024+0 records out
536870912 bytes (537 MB) copied, 11.8799 seconds, 45.2 MB/s

real    0m11.986s
user    0m0.000s
sys     0m0.396s
[r...@xentest mnt]# time dd if=/dev/zero of=sil bs=512K count=1024
1024+0 records in
1024+0 records out
536870912 bytes (537 MB) copied, 10.9093 seconds, 49.2 MB/s

real    0m11.018s
user    0m0.000s
sys     0m0.144s
[r...@xentest mnt]#
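As a sanity check on the numbers dd prints above: count=1024 blocks of bs=512K is 1024 x 524288 = 536870912 bytes, and dividing by the elapsed time gives the decimal MB/s figure dd reports (e.g. 536870912 B / 11.4042 s = ~47.1 MB/s):

```shell
# Verify dd's reported throughput: total bytes / elapsed seconds, in
# decimal MB/s (dd's "MB" here is 10^6 bytes, not 2^20).
bytes=$((1024 * 512 * 1024))   # count=1024 x bs=512K
echo "total: $bytes bytes"
awk -v b="$bytes" -v t=11.4042 'BEGIN { printf "%.1f MB/s\n", b / t / 1e6 }'
```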

However, when I use bonding in mode 1, 2, or 5, iSCSI performance decreases dramatically. Is there a trick or config option that I missed?
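For reference, the bond is set up the standard RHEL 5 way; this is a sketch with placeholder addresses, with mode and miimon matching the /proc/net/bonding output in the mode-1 test below:

```
# /etc/modprobe.conf -- load the bonding driver; mode=1 and miimon=80
# match the mode-1 run (change mode= to 2 or 5 for the other tests)
alias bond0 bonding
options bond0 mode=1 miimon=80

# /etc/sysconfig/network-scripts/ifcfg-bond0 (IPADDR/NETMASK are placeholders)
DEVICE=bond0
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.1.10
NETMASK=255.255.255.0

# /etc/sysconfig/network-scripts/ifcfg-eth0 (ifcfg-eth1 is identical
# except for DEVICE=eth1)
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes
USERCTL=no
```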

Here is the test output:

--> bonding mode 1 (average 15 MB/s)
[r...@xentest mnt]# time dd if=/dev/zero of=sil bs=512K count=1024
1024+0 records in
1024+0 records out
536870912 bytes (537 MB) copied, 37.0802 seconds, 14.5 MB/s

real    0m37.086s
user    0m0.000s
sys     0m0.008s
[r...@xentest mnt]# time dd if=/dev/zero of=sil bs=512K count=1024
1024+0 records in
1024+0 records out
536870912 bytes (537 MB) copied, 29.6657 seconds, 18.1 MB/s

real    0m29.775s
user    0m0.000s
sys     0m0.284s
[r...@xentest mnt]# time dd if=/dev/zero of=sil bs=512K count=1024
1024+0 records in
1024+0 records out
536870912 bytes (537 MB) copied, 38.9873 seconds, 13.8 MB/s

real    0m39.097s
user    0m0.000s
sys     0m0.120s
[r...@xentest mnt]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.4.0 (October 7, 2008)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth1
MII Status: up
MII Polling Interval (ms): 80
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth1
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:1e:68:37:a6:d9

Slave Interface: eth0
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:1e:68:37:a6:d8
[r...@xentest mnt]#



--> bonding mode 2 (average 1.9 MB/s)
[r...@xentest mnt]# time dd if=/dev/zero of=sil bs=1024K count=512
512+0 records in
512+0 records out
536870912 bytes (537 MB) copied, 301.208 seconds, 1.8 MB/s

real    5m0.934s
user    0m0.000s
sys     0m0.228s
[r...@xentest mnt]# time dd if=/dev/zero of=sil bs=1024K count=512
512+0 records in
512+0 records out
536870912 bytes (537 MB) copied, 270.078 seconds, 2.0 MB/s

real    4m30.184s
user    0m0.000s
sys     0m0.156s
[r...@xentest mnt]#



--> bonding mode 5 (average ~4 MB/s)
[r...@xentest mnt]# time dd if=/dev/zero of=sil bs=512K count=1024
1024+0 records in
1024+0 records out
536870912 bytes (537 MB) copied, 151.575 seconds, 3.5 MB/s

real    2m31.581s
user    0m0.000s
sys     0m0.220s
[r...@xentest mnt]#
[r...@xentest mnt]# time dd if=/dev/zero of=sil bs=512K count=1024
1024+0 records in
1024+0 records out
536870912 bytes (537 MB) copied, 111.586 seconds, 4.8 MB/s

real    1m51.692s
user    0m0.000s
sys     0m0.436s
[r...@xentest mnt]#
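To see which slave actually carries the traffic during a run, the per-interface byte counters can be snapshotted before and after the dd (a sketch using the standard /proc/net/dev counters; eth0 and eth1 are the bond slaves from the mode-1 output above):

```shell
# Print per-interface RX/TX byte counters from /proc/net/dev; run once
# before and once after a dd test and diff the numbers to see which
# slave moved the iSCSI traffic.
awk 'NR > 2 { gsub(/:/, " "); printf "%-8s rx_bytes=%s tx_bytes=%s\n", $1, $2, $10 }' /proc/net/dev
```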


Hakan

_______________________________________________
rhelv5-list mailing list
[email protected]
https://www.redhat.com/mailman/listinfo/rhelv5-list
