On Jan 2, 2010, at 1:47 AM, Andras Spitzer wrote:
Mike,

As far as I know, only Hitachi uses such a huge chunk size:

"So each vendor’s implementation of TP uses a different block size. HDS use 42MB on the USP, EMC use 768KB on DMX, IBM allow a variable size from 32KB to 256KB on the SVC and 3Par use blocks of just 16KB. The reasons for this are many and varied and for legacy hardware are a reflection of the underlying hardware architecture."

http://gestaltit.com/all/tech/storage/chris/thin-provisioning-holy-grail-utilisation/

Also, here Hu explains why they believe 42MB is the most efficient:

http://blogs.hds.com/hu/2009/07/chunk-size-matters.html

He makes some good points in his argument.

Yes, and they apply to ZFS dedup as well... :-)
 -- richard
