On Wed, Mar 10, 2010 at 5:38 AM, Nathan Kroenert <nathan at tuneunix.com> wrote:
> Hey all -
>
> Am just starting to look into the whole Sun Ops Centre thing, and it seems
> quite nifty for doing LDOM provisioning etc.
>
> My first attempt was using a Storage Library that lives on an NFS server to
> provide the LDOM virtual-disk files.
>
> I checked my baseline speed from my LDOM control/IO domain to the NFS
> server, and 40-50MB/s (over gigabit) is pretty much standard. The disks
> backing it are SATA, and that's about as fast as I'd expect.
>
> My understanding is that it breaks down something like
>
> Big file created on NFS share by SunOpCentre
>   -> IO Domain mounts NFS share over IP
>       -> IO domain sees 'data' file, and uses that as the
>          backing store for the vdsdev
>           -> LDOM runs, and creates a zpool ontop of that.
>
> And it's wayyy slow. About 4MB/s for sequential writes... (That's opposed to
> the 40 - 50 MB/s for the sequential from the IO domain)
>
> Also of interest is that when copying from the IO domain direct to the NFS
> share, I can drive up to 500 IOPS on the disks in the pool. I don't usually
> get much more than 150 IOPS when I'm doing it from the guest ldom.
>
> Soooo - Before I start getting all medieval on this - I guess I should ask:
> Is this what I should expect? Should I expect blowful performance because
> *every* write operation will be treated as synchronous by NFS because of the
> way the file-based backend does its disk accesses, and it's pushing a bunch
> of synchronisation primitives to ensure consistency on disk? (Or something
> like that?!?)
>
> Has anyone seen *good* performance with this style of configuration?
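
For reference, the layering described above corresponds roughly to a file-backed
vdsdev exported from the control domain to the guest. A minimal sketch, assuming
the NFS storage library is mounted at /var/mnt/virtlibs/lib1; the paths, volume
and domain names here are illustrative, not taken from the post:

# baseline sequential write from the control domain straight to the NFS share
dd if=/dev/zero of=/var/mnt/virtlibs/lib1/testfile bs=1024k count=1000

# export the image file as a virtual disk backend and present it to the guest
ldm add-vdsdev /var/mnt/virtlibs/lib1/guest0.img vol0@primary-vds0
ldm add-vdisk vdisk0 vol0@primary-vds0 guest0

The guest then builds its zpool on vdisk0, so every write the guest issues ends
up as I/O against that single file on the NFS mount.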
I forget the exact numbers, but here's how the story went for me...

I had a few folks from Sun and a reseller sitting around in a conference room
with me while I did my first ldom installations with Ops Center, with a
configuration like the one you describe. Since Ops Center requires that it does
the installation of the primary ldom, none of my standard tuning was in place.

Then I added the following to /etc/system:

* This speeds up NFS sequential reads by ~ 3x
set nfs:nfs3_bsize=0x100000

And set the following networking tunables:

ndd -set /dev/tcp tcp_xmit_hiwat 131072
ndd -set /dev/tcp tcp_recv_hiwat 131072

I found that, compared to the defaults in ~ S10u4 on a T2100, those settings
make it so that I can do TCP at gigabit wire speed, and NFS throughput jumps
from < 30 MB/sec to over 90 MB/sec.

Then, in /etc/vfstab, modify the mount entry for the NFS storage library to use
the "forcedirectio" mount option. This makes it so that the control domain
doesn't waste its time or memory trying to cache file system blocks that are
already being cached in the guest ldom by the zfs arc.

Reboot the control domain (there is no separate IO domain with Ops Center 2.5).

--
Mike Gerdts
http://mgerdts.blogspot.com/
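
For completeness, the vfstab change described above might look something like
the line below; the server name, share path, and mount point are made up for
the example, and only the forcedirectio option is the point:

# /etc/vfstab entry for the NFS storage library, with forcedirectio added
nfsserver:/export/virtlib  -  /var/mnt/virtlibs/lib1  nfs  -  yes  rw,forcedirectio

Note that ndd settings do not persist across reboots on their own, so the two
tcp_*_hiwat tunables need to be reapplied from a boot-time script (or the
equivalent) after the control domain comes back up.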
