Suggest you read the ZFS Best Practices Guide (again): http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#Storage_Pools

Mike

Tomas Ögren wrote:
On 19 March, 2009 - Harry Putnam sent me these 1,4K bytes:

> I'm finally getting close to the setup I wanted, after quite a bit of
> experimentation and bugging these lists endlessly. So first, thanks for
> your tolerance and patience.
>
> My setup consists of 4 disks. One holds the OS (rpool), and the 3 others
> are all the same model and brand, all 500 GB. I've created a zpool in
> raidz1 configuration with:
>
>   zpool create zbk raidz1 c3d0 c4d0 c4d1
>
> No errors showed up, and zpool status shows no problems with those three:
>
>     pool: zbk
>    state: ONLINE
>    scrub: none requested
>   config:
>
>           NAME        STATE     READ WRITE CKSUM
>           zbk         ONLINE       0     0     0
>             raidz1    ONLINE       0     0     0
>               c3d0    ONLINE       0     0     0
>               c4d0    ONLINE       0     0     0
>               c4d1    ONLINE       0     0     0
>
> However, I appear to have lost an awful lot of space... even above what I
> expected:
>
>   df -h
>   [...]
>   zbk    913G    26K    913G    1%    /zbk
>
> It appears something like 1 entire disk is gobbled up by raidz1. The same
> disks configured in a zpool with no raidz1 show 1.4 TB with df. I was
> under the impression raidz1 would take something like 20%, but this is
> more like 33.33%. So, is this to be expected, or is something wrong here?

Not a percentage at all.. raidz1 "takes" 1 disk, and raidz2 takes 2 disks. This is so the pool can survive 1 vs. 2 any-disk failures. Then there's the 1000 vs. 1024 factor as well: your HD manufacturer says 500 GB, while the rest of the computer industry says ~465.

/Tomas
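The two effects Tomas describes can be checked with a little arithmetic: a 3-disk raidz1 dedicates one disk's worth of space to parity, and a "500 GB" drive holds 500 × 10^9 bytes, which is only about 465 GiB in the binary units df reports. A minimal sketch (the helper name is illustrative, not a ZFS API):

```python
# Capacity math for the pool above: 3 x "500 GB" disks in raidz1.
ADVERTISED_GB = 500                        # marketing gigabytes (10**9 bytes)
BYTES_PER_DISK = ADVERTISED_GB * 10**9

def usable_gib(n_disks, parity):
    """Raw capacity minus parity disks, expressed in binary GiB (2**30 bytes)."""
    data_disks = n_disks - parity
    return data_disks * BYTES_PER_DISK / 2**30

# 3-disk raidz1: 2 data disks' worth of space remain.
print(round(usable_gib(3, 1)))   # ~931 GiB
```

That ~931 GiB figure lines up with the 913G df reports once ZFS metadata and filesystem overhead are subtracted, so nothing is wrong with the pool.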
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss