j...@jamver.id.au said:
> For a predominantly NFS-server workload, it really looks like the slog
> has to outperform your main pool in continuous write speed, with
> instant response time as the primary criterion. That might as well be a
> fast SSD (or group of fast SSDs) or 15kRPM drives with some NVRAM in
> front of them.

I wonder whether you ran Richard Elling's "zilstat" while running your
workload.  That should tell you how much ZIL bandwidth is needed, and
it would be interesting to see whether its stats match your other
measurements of slog-device traffic.
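
For reference, a minimal invocation would look something like the
following (a sketch only; I'm assuming the interval/count arguments
follow the usual iostat convention, so check the script's usage output
for the exact options):

        # sample ZIL write activity every 10 seconds, 6 samples
        # (run as root; zilstat is a DTrace-based ksh script)
        ./zilstat.ksh 10 6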

I did some filebench and "tar extract over NFS" tests of a J4400 (500GB,
7200RPM SATA drives), with and without a slog, where the slog used the
internal 2.5" 10kRPM SAS drives in an X4150.  Those drives sat behind
the standard Sun/Adaptec internal RAID controller with 256MB of
battery-backed cache, all on Solaris 10 U7.
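
In case anyone wants to reproduce the comparison, the NFS test amounted
to something like this from a client (a sketch only; the hostname,
paths, and tarball are placeholders, not the actual test setup):

        # mount the exported ZFS filesystem and time an untar;
        # NFSv3 file creates are synchronous on the server, so this
        # generates a stream of small ZIL commits
        mount server:/export/testfs /mnt/test
        cd /mnt/test
        time tar xf /var/tmp/source-tree.tar

That kind of many-small-files extract is about the worst case for
synchronous NFS writes, which is why it shows the slog off so well.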

We saw slight differences on the filebench oltp profile, and a huge
speedup on the "tar extract over NFS" tests with the slog present.
Granted, the latter was with only one NFS client, so it likely did not
fill the NVRAM.  Pretty good results for a poor-person's slog, though:
        http://acc.ohsu.edu/~hakansom/j4400_bench.html

Just as an aside, based on my experience as a user/admin of gear from
various NFS-server vendors: the old Prestoserve cards and NetApp filers
seem to get very good improvements from relatively small amounts of
NVRAM (128KB, 1MB, 256MB, etc.).  None of the filers I've seen has ever
had tens of GB of NVRAM.

Regards,

Marion

