I have set up a Solaris 10 U2 06/06 system with basic patches up to the latest -19 kernel patch and the latest ZFS genesis, etc., as recommended. I have created a basic pool (local) and a bunch of child filesystems (local/mail, local/mail/shire.net, local/mail/shire.net/o, local/jailextras/shire.net/irsfl, etc.). I am exporting these with the sharenfs property set to [EMAIL PROTECTED],[EMAIL PROTECTED] and then mounting a few of these filesystems on a FreeBSD system using NFSv3.
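For concreteness, the setup is roughly the following. The device name and the rw/root access lists are placeholders, not my actual values (the real access lists got redacted above):

```shell
# Sketch of the setup described above; c0t1d0 and the access
# lists are placeholders, not the actual values.
zpool create local c0t1d0
zfs create -p local/mail/shire.net/o
zfs create -p local/jailextras/shire.net/irsfl

# Child filesystems inherit sharenfs, so setting it once on a parent
# exports the whole subtree.
zfs set sharenfs='rw=@192.168.1.0/24,root=@192.168.1.0/24' local/mail
zfs set sharenfs='rw=@192.168.1.0/24,root=@192.168.1.0/24' local/jailextras
```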
The FreeBSD machine has about 4 of my 10 or so filesystems mounted: two are IMAP email account tests, one is generic storage, and one is a FreeBSD jail root. FreeBSD mounts them over TCP:

/sbin/mount_nfs -s -i -3 -T foo-i1:/local/mail/shire.net/o/obar /local/2/hobbiton/local/mail/shire.net/o/obar

The systems are both directly connected to a gigabit switch at 1000btx-fdx, and both have their MTU set to 9000. The Solaris side is an e1000g port (the system has 2 bge and 2 e1000g ports, all configured); the FreeBSD side is a bge port.

I have heard that there are some ZFS/NFS sync performance problems that will be fixed in U3 or are already fixed in OpenSolaris. I do not think my issue is related to that, though I have seen some of it too, with occasionally very poor write performance.

The issue I have experienced several times since I started experimenting a few days ago: I periodically get "NFS server not responding" errors on the FreeBSD machine for one of the mounted filesystems. This lasts 4-8 minutes or so, then the mount comes alive again and is fine for many hours. While this is happening, access to the other mounted filesystems still works fine, and logged in directly on the Solaris machine I am able to access the filesystems just fine. Example error messages:

Sep 24 03:09:44 freebsdclient kernel: nfs server solzfs-i1:/local/jailextras/shire.net/irsfl: not responding
Sep 24 03:10:15 freebsdclient kernel: nfs server solzfs-i1:/local/jailextras/shire.net/irsfl: not responding
Sep 24 03:12:19 freebsdclient last message repeated 4 times
Sep 24 03:14:54 freebsdclient last message repeated 5 times

I would be interested in feedback on what the problem might be and on ways to track it down. Is this a known issue? Have others seen the NFS server sharing ZFS time out (but not for all filesystems)? Also, is there any functional difference between setting up the ZFS filesystems as legacy mounts and using a traditional share command to export them over NFS?
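For reference, next time it hangs I plan to capture something like the following on both sides. Hostnames and interface names are from the setup above; adjust as needed:

```shell
# On the Solaris server, while the client is reporting "not responding":
nfsstat -s                                        # server-side NFS op counters
snoop -d e1000g0 host freebsdclient and port 2049 # is the server seeing and answering requests?

# On the FreeBSD client:
nfsstat -c                                        # client-side RPC retransmit/timeout counters
tcpdump -ni bge0 host solzfs-i1 and port 2049
ping -c 3 -s 8000 solzfs-i1                       # sanity-check jumbo frames end to end
```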
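On the last question, the legacy setup I would compare against would look something like this (same dataset names as above; the share options are illustrative, not what I currently use):

```shell
# Convert one dataset to legacy mounting and share it the traditional way.
zfs set sharenfs=off local/jailextras/shire.net/irsfl
zfs set mountpoint=legacy local/jailextras/shire.net/irsfl
mkdir -p /local/jailextras/shire.net/irsfl
mount -F zfs local/jailextras/shire.net/irsfl /local/jailextras/shire.net/irsfl

# Then export it with the usual NFS machinery (or add an entry
# to /etc/dfs/dfstab to make it persistent):
share -F nfs -o rw /local/jailextras/shire.net/irsfl
```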
I am mostly a Solaris noob, am happy to learn, and can try anything people want me to test. Thanks in advance for any comments or help.

Chad

This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss