I wonder exactly what's going on. Perhaps it is the cache flushes that are causing the SCSI errors when trying to use the SSD (Intel X25-E and X25-M) disks? Btw, I'm seeing the same behaviour on both an X4500 (SATA/Marvell controller) and the X4240 (SAS/LSI controller). Well, almost: on the X4500 I didn't see the errors printed on the console, but things behaved strangely - and I did see the same speedup.
If SVM silently disables cache flushes then perhaps there should be a HUGE warning printed somewhere (ZFS FAQ? Solaris documentation? In zpool when creating/adding devices?) about using ZFS with SVM. I wonder what the potential danger might be _if_ SVM disables cache flushes for the SLOG... Sure, that might mean a missed update on the filesystem, but since the data disks in the pool are raw disk devices the ZFS filesystem should be stable (sans any possibly missed updates). I think I can live with that. What I don't want is a corrupt 16TB zpool in case of a power outage...
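For what it's worth, one way to test whether cache flushes really are the culprit (independent of SVM) is the global ZFS tunable that suppresses them. A sketch, assuming an OpenSolaris/Solaris kernel that exposes zfs_nocacheflush - and note this is a system-wide switch that trades safety for speed, so only for experiments on a box you can afford to lose:

```shell
# Check the current value on the live kernel (0 = cache flushes enabled)
echo "zfs_nocacheflush/D" | mdb -k

# Temporarily disable ZFS cache flushes on the running system (testing only!)
echo "zfs_nocacheflush/W0t1" | mdb -kw

# To make it persistent across reboots, add this line to /etc/system:
#   set zfs:zfs_nocacheflush = 1
```

If the errors and the slowdown both disappear with the tunable set, that would point pretty strongly at the SYNCHRONIZE CACHE commands. But unlike whatever SVM may be doing per-device, this disables flushes for every vdev in every pool, not just the slog.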