Does dedup work at the pool level or the filesystem/dataset level?
For example, if I were to do this:

bash-3.2$ mkfile 100m /tmp/largefile
bash-3.2$ zfs set dedup=off tank
bash-3.2$ zfs set dedup=on tank/dir1
bash-3.2$ zfs set dedup=on tank/dir2
bash-3.2$ zfs set dedup=on tank/dir3
bash-3.2$ cp /tmp/largefile /tank/dir1/largefile
bash-3.2$ cp /tmp/largefile /tank/dir2/largefile
bash-3.2$ cp /tmp/largefile /tank/dir3/largefile

Would largefile get dedup'ed? Or would I need to enable dedup on the
pool itself, and then disable it where it isn't wanted/needed?
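(For what it's worth, my understanding -- and this is an assumption, not
something I've verified -- is that the dedup property is set per dataset
but the dedup table itself is pool-wide, so identical blocks written to
dir1, dir2, and dir3 should dedup against each other as long as each
dataset has dedup=on. One way to check after the copies would be:

bash-3.2$ zpool get dedupratio tank
bash-3.2$ zpool list tank

where dedupratio should climb above 1.00x if the blocks were in fact
deduplicated.)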

Also, will we need to move our data around (send/recv, or whatever your
preferred method is) to take advantage of dedup?  I was hoping the
block-pointer rewrite code would let an admin simply turn on dedup and
have ZFS process the pool, eliminating excess redundancy as it goes.
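(If dedup only applies to newly written blocks, I assume existing data
would have to be rewritten through the write path. A hypothetical
approach -- dataset names are made up for illustration -- would be
something like:

bash-3.2$ zfs snapshot tank/dir1@prededup
bash-3.2$ zfs send tank/dir1@prededup | zfs recv tank/dir1-new

and then, after verifying the copy, destroy the old dataset and rename
the new one into place. Is that roughly what people expect, absent
bp rewrite?)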

-- 
Breandan Dezendorf
brean...@dezendorf.com
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss