On Wed, May 04, 2011 at 02:55:55PM -0700, Brandon High wrote:
> On Wed, May 4, 2011 at 12:29 PM, Erik Trimble <erik.trim...@oracle.com> wrote:
> >        I suspect that NetApp does the following to limit their resource
> > usage:   they presume the presence of some sort of cache that can be
> > dedicated to the DDT (and, since they also control the hardware, they can
> > make sure there is always one present).  Thus, they can make their code
> 
> AFAIK, NetApp has more restrictive requirements about how much data
> can be dedup'd on each type of hardware.
> 
> See page 29 of http://media.netapp.com/documents/tr-3505.pdf - Smaller
> pieces of hardware can only dedup 1TB volumes, and even the big-daddy
> filers will only dedup up to 16TB per volume, even if the volume size
> is 32TB (the largest volume available for dedup).
> 
> NetApp solves the problem by putting rigid constraints around the
> problem, whereas ZFS lets you enable dedup for any size dataset. Both
> approaches have limitations, and it sucks when you hit them.
> 
> -B

That is very true, although it's worth mentioning that you can have quite a few
dedupe/SIS-enabled FlexVols even on the lower-end filers (our FAS2050 has a
bunch of 2TB SIS-enabled FlexVols).

The FAS2050 of course has a fairly small memory footprint... 

I do like the additional flexibility you have with ZFS; I'm just trying to
get a handle on the memory requirements.
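
For what it's worth, here's the back-of-the-envelope math I've been using to
size the DDT (just a sketch; the ~320 bytes of RAM per in-core DDT entry is the
commonly quoted rule of thumb rather than an exact figure, and it assumes every
block is unique, so treat the numbers as rough assumptions):

# Rough estimate of ZFS dedup table (DDT) RAM usage.
# Assumptions: ~320 bytes of RAM per in-core DDT entry (rule of thumb),
# and every block is unique (worst case, before any dedup savings).
def ddt_ram_bytes(data_bytes, recordsize, bytes_per_entry=320):
    unique_blocks = data_bytes // recordsize  # one DDT entry per unique block
    return unique_blocks * bytes_per_entry

tib = 1024 ** 4
# 2 TiB of data at the default 128K recordsize -> ~5 GiB of DDT in RAM
print(ddt_ram_bytes(2 * tib, 128 * 1024) / 1024 ** 3, "GiB")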

Are any of you out there using deduped ZFS file systems to store VMware
VMDKs (or any VM technology, really)?  I'm curious what recordsize you use
and what your hardware specs and experiences have been.
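
Part of why I ask: the recordsize drives the DDT size directly, since there is
one entry per unique block. A quick comparison under the same assumptions as
above (8K is just an example of matching a guest filesystem block size, not a
recommendation):

# Compare DDT RAM for the default 128K recordsize vs. 8K blocks,
# using the same ~320 bytes-per-entry, all-blocks-unique assumptions.
def ddt_ram_gib(data_tib, recordsize_kib, bytes_per_entry=320):
    blocks = (data_tib * 1024 ** 4) // (recordsize_kib * 1024)
    return blocks * bytes_per_entry / 1024 ** 3

for rs in (128, 8):
    print("1 TiB of VMDKs at recordsize=%dK -> ~%.1f GiB of DDT"
          % (rs, ddt_ram_gib(1, rs)))
# 128K -> ~2.5 GiB, 8K -> ~40 GiB: 16x smaller blocks means 16x the entries.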

Ray
