Hello,
We have a problem with a Windows server accessing ZFS over iSCSI.

There is an iSCSI volume on OpenSolaris snv_109b for the Windows server, provided by 
iscsitadm. The Windows server shares it with other Windows clients over SMB. I 
know that the version is outdated, but we can't proceed with an upgrade until we 
understand what causes the delays.
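
To give an idea of the setup, here is a minimal sketch of how such a zvol-backed 
target is typically created with iscsitadm; the pool/volume names and the size are 
placeholders, not our exact commands:

    # create a zvol and export it as an iSCSI target (example names/size)
    zfs create -V 500G tank/winvol
    iscsitadm create target -b /dev/zvol/rdsk/tank/winvol win-target
    iscsitadm list target -v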

There is SNDR replication in place on the OpenSolaris box over a 1 Gb direct link to 
another snv_101b server.
Data flow doesn't exceed 60 MB/s on the main storage's zpool; usually it's ~15 MB/s.
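
(Those throughput figures can be reproduced by watching the pool and the SNDR sets, 
roughly like this; the pool name below is only an example:)

    # watch zpool throughput and SNDR replication status (pool name is an example)
    zpool iostat tank 5
    sndradm -P
    dsstat -m sndr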

The problem: when the Windows server reads/writes to the iSCSI volume more heavily 
than usual (10 MB/s jumps to 50 MB/s), for example during a backup or while formatting 
a new volume over iSCSI, the network completely freezes. The Windows server stops 
responding over RDP, and at the same time the connected Windows clients can't do 
anything; they are frozen too. The problem goes away if we cancel the job accessing 
the volume over iSCSI.

The only tuning made on the Windows server was setting TcpAckFrequency to 1, 
according to 
http://download.microsoft.com/download/A/E/9/AE91DEA1-66D9-417C-ADE4-92D824B871AF/uGuide.doc
Tuning on the Solaris side consisted only of setting rdc:rdc_rpc_tmout, to allow AVS 
to update its bitmaps correctly (this is not even needed on a local link).
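
For completeness, those two tweaks take the usual form shown below; the interface 
GUID and the rdc_rpc_tmout value are placeholders, not our exact settings:

    rem Windows: per-interface TcpAckFrequency (interface GUID is a placeholder)
    reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\{interface-GUID}" /v TcpAckFrequency /t REG_DWORD /d 1

    * Solaris: /etc/system entry for the AVS/SNDR RPC timeout (value is just an example)
    set rdc:rdc_rpc_tmout = 60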

I included a couple of files: ndd and netstat -s output from Solaris, and a tcpdump 
capture from the Windows server taken during the last interruption. The password for 
the rar archive is 1qqa.
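
(Roughly how such data is collected, in case it matters; the specific ndd parameters 
and the capture filename below are illustrative, not necessarily exactly what we dumped:)

    # Solaris side: TCP tunables and protocol statistics
    ndd -get /dev/tcp tcp_xmit_hiwat
    ndd -get /dev/tcp tcp_recv_hiwat
    netstat -s > netstat-s.txt

    # Windows side: packet capture during the freeze (WinDump/tcpdump syntax)
    windump -i 1 -s 0 -w win-capture.pcap host 10.24.1.101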

Unfortunately, no trace has been taken on Solaris (yet).
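When we do capture one, the plan is something along these lines (the interface name 
and output path are assumptions):

    # capture iSCSI traffic to/from the Windows server on the OpenSolaris box
    snoop -d e1000g0 -o /var/tmp/iscsi-freeze.snoop host 10.24.1.29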

10.24.1.29 is the Windows server, 10.24.1.101 is the OpenSolaris box.

We don't understand how Windows manages to send jumbo packets (65535 bytes), packet 
91 for example. Solaris responds with a bunch of ACKs to such packets.
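
(If it helps, the oversized frames can be listed from the capture with a filter on 
the IP total length; one common cause of >MTU packets in a sender-side capture is 
large segment offload on the NIC, where the sniffer sees the buffer before the 
hardware splits it, but that is just a guess here. The filename is the same placeholder 
as above:)

    # list packets whose IP total length exceeds a 1500-byte MTU
    tcpdump -n -r win-capture.pcap 'ip[2:2] > 1500'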

Then, at some point the TCP window size decreases in packets sent from Solaris, for 
example in these:
packet 844 - 38848 bytes
packet 845 - 31392 bytes
It keeps decreasing this way, down to 1268 bytes.
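
(The window values can be followed through the whole capture by printing just the 
Solaris-side segments; tcpdump shows the advertised window as "win NNNN" on each line:)

    tcpdump -n -r win-capture.pcap src host 10.24.1.101 and tcp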

Can somebody take a look at this information and advise what changes/tuning should 
be made to the network to avoid such issues?