Hi Pasi,

> > My setup:
> >
> > 1x HP DL585 - SLES10 x86_64
> > 1x HP DL585 - RHEL4 x86_64
> > 1x HP DL380 - SLES10 i586
>
> SLES10 or SLES10SP1 ?
SLES10SP1

> Have you tried installing and using the latest open-iscsi from
> open-iscsi.org ?
>
> > 2x Cisco 2960G (gigabit) switches
> > 2x Infortrend A16E-G2130-4 with 16x 1TB disks each
> >
> > The two Infortrend arrays have all their gigabit ethernet ports
> > plugged into one of the Cisco switches, then we have 2 fibre
> > connections leading to the other Cisco switch, which has the three
> > servers plugged into it. The network is completely isolated from our
> > other company networks.
>
> So you have only 2 Gbit/sec of bandwidth between the Cisco switches?

That's correct. I've never seen the two links saturated together; the most
I've seen is ~95% on the first link and ~50% on the second.

> How many ethernet ports do your iSCSI arrays have (plugged in to the
> switches)?

Each iSCSI array has four 1Gbit ethernet ports, and all four are connected
on each array.

> How many ethernet ports is each server using / plugged in to the switch?

Each server has two 1Gbit ethernet ports, but only one port on each server
is used for iSCSI traffic; the other carries normal LAN traffic.

> > At first I thought it was a network problem, so we replaced our dodgy
> > Netgear switches with quality Cisco networking gear, but the problem
> > is the same - if anything it's worse, because the Cisco switches
> > allow higher bandwidth (an extra ~20mb/s) and the errors are more
> > reliably reproducible.
>
> Do you see packet drops/errors in any of the ports? Check all ports in
> both switches.

No drops and no errors on any of the ports on the servers or on the
switches. There's no way to tell what is happening on the iSCSI arrays.

> > None of the Linux ethernet statistics report any errors (ifconfig),
> > and the Cisco switches don't report any packet errors either. The
> > Infortrend arrays don't provide ethernet statistics.
>
> Check Linux TCP statistics for TCP retransmits?
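The checks suggested above can be sketched with the standard Linux tools; a rough outline, assuming `eth1` is the iSCSI-facing NIC and `10.0.0.10` stands in for one of the array's portal addresses (both names are placeholders for this setup, and command availability varies by distro and kernel):

```shell
# System-wide TCP retransmit counters; note netstat -s is stack-wide
# and cannot be filtered down to a single interface:
netstat -s | grep -i retrans

# Per-connection retransmit counts on newer iproute2, e.g. for the
# sessions to one array portal (10.0.0.10 is a hypothetical address):
ss -ti dst 10.0.0.10

# Per-NIC error/drop counters straight from the driver
# (the counter names vary between drivers):
ethtool -S eth1 | grep -iE 'err|drop|fifo'
ip -s link show eth1

# Ethernet flow control: show what is currently negotiated, then
# enable pause frames (the switch ports must allow them too):
ethtool -a eth1
ethtool -A eth1 rx on tx on

# Sanity check on the counters quoted below: 2108006 retransmitted
# out of 3106760297 segments sent is roughly 0.07%.
awk 'BEGIN { printf "%.3f%%\n", 2108006 / 3106760297 * 100 }'
```

Since the kernel's TCP counters are global to the stack, per-interface retransmit figures have to come either per connection (`ss`) or from the NIC driver's own counters (`ethtool -S`).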
netstat -s

Tcp:
    9787 active connections openings
    4964 passive connection openings
    8 failed connection attempts
    885 connection resets received
    33 connections established
    1903902036 segments received
    3106760297 segments send out
    2108006 segments retransmited
    0 bad segments received.
    1298 resets sent

Looks like there are... any way to just pull the stats for eth1?

> > Wireshark (ethereal) shows many errors - clusters of duplicate ACKs,
> > and a few "previous segment lost" messages.
>
> Are you using ethernet flow control? Check the switch settings, the
> server NIC settings.. and possibly the iSCSI array settings..

Someone replied outside of the forum suggesting I turn on flow control.
It has made things a lot faster, but I still see packet problems, and
eventually iSCSI errors.

> In a bigger IP-SAN setup with many servers and switches, flow control
> might be needed to get good performance and to prevent TCP retransmits
> (i.e. to keep the switch port buffers from filling up and dropping
> packets).
>
> > Any help would be much appreciated !!!
>
> Btw, have you tried ext3? XFS is known to have problems with some
> setups and versions..

ext3 is worse in my experience. Because our partitions are 1, 2 and 5TB
in size, XFS works better for us, especially when a partition has to be
scanned for errors: fsck takes hours on large multi-terabyte arrays,
while xfs_check takes only a few minutes. Although it could just be the
amount of IO that fsck.ext3 does that causes the iSCSI problems and
delays.

> I'm not familiar with Infortrend iSCSI arrays so can't comment much
> about them..

I get that a lot )-;

> -- Pasi

--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups
"open-iscsi" group.
To post to this group, send email to open-iscsi@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/open-iscsi
-~----------~----~----~----~------~----~------~--~---