Are there any plans to support record sizes larger than 128k? We use ZFS file systems for disk staging on our backup servers (compression is a nice feature here), and we typically configure the disk staging process to read and write large blocks (1MB or so). This reduces the number of I/Os issued to our storage arrays, and our testing has shown that we can push considerably more I/O with 1MB+ block sizes.
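For context, here's roughly what we do today, sketched as shell commands. The pool/dataset name ("stage/dstage") and the dd invocation are illustrative, not our exact setup; 128k is the largest recordsize ZFS currently accepts:

```shell
# Set the dataset to the current maximum recordsize (128k);
# anything larger is rejected today.
zfs set recordsize=128k stage/dstage
zfs get recordsize stage/dstage

# The staging process itself issues 1MB reads/writes, e.g.:
dd if=/dev/zero of=/stage/dstage/test.img bs=1M count=64
```

Even though the application writes in 1MB units, ZFS splits them into 128k records, which is where we'd like more headroom.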
Thanks for any insight,

- Ryan
--
UNIX Administrator
http://prefetch.net

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss