On May 2, 2010, at 8:47 AM, Steve Staples wrote:

> Hi there!
> 
> I am new to the list, and to OpenSolaris, as well as ZFS.
> 
> I am creating a zpool/zfs to use on my NAS server, and basically I want some
> redundancy for my files/media.   What I am looking to do, is get a bunch of
> 2TB drives, and mount them mirrored, and in a zpool so that I don't have to
> worry about running out of room. (I know, pretty typical I guess).
> 
> My problem is, is that not all 2TB hard drives are the same size (even
> though they should be 2 trillion bytes, there is still sometimes a +/- (I've
> only noticed this 2x so far) ) and if I create them mirrored, and one fails,
> and then I replace the drive, and for some reason, it is 1byte smaller, it
> will not work.
> 
> How would I go about fixing this "problem"?

This problem is already handled for you by ZFS. For drives in the 2TB range, 
ZFS tolerates a size difference of a little less than half a metaslab; at that 
capacity the metaslab size is currently likely to be 16GB, so it can tolerate a 
difference of up to roughly 7.5GB.

I think that in most cases the difference in sizes between drives is well below that figure.
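If you want to check the metaslab layout of your own pool, zdb can display it (run as root; replace `pool` with your pool name). This is a sketch, not part of the example below: zdb -m lists each vdev's metaslabs, and the spacing between consecutive metaslab offsets gives you the metaslab size.

```shell
# Display per-vdev metaslab information for a pool.
# The number of metaslabs, and the difference between adjacent
# metaslab offsets (the metaslab size), appear in the output.
zdb -m pool
```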

You can see it for yourself:

bash-4.0# mkfile -n 2000000000000 d0
bash-4.0# zpool create pool `pwd`/d0
bash-4.0# mkfile -n 1992869543936 d1
bash-4.0# zpool attach pool `pwd`/d0 `pwd`/d1
bash-4.0# zpool status pool
  pool: pool
 state: ONLINE
 scrub: resilver completed after 0h0m with 0 errors on Sun May  2 15:25:24 2010
config:

        NAME             STATE     READ WRITE CKSUM
        pool             ONLINE       0     0     0
          mirror-0       ONLINE       0     0     0
            /var/tmp/d0  ONLINE       0     0     0
            /var/tmp/d1  ONLINE       0     0     0  83.5K resilvered

errors: No known data errors
bash-4.0# zpool detach pool `pwd`/d1

So you can see that even though the difference in size between d0 and d1 is 
7130456064 bytes (~6.6GB), d1 can still be attached just fine. Having detached 
d1, let's now recreate it 1 byte smaller:

bash-4.0# mkfile -n 1992869543935 d1
bash-4.0# zpool attach pool `pwd`/d0 `pwd`/d1
cannot attach /var/tmp/d1 to /var/tmp/d0: device is too small
bash-4.0# 

This time it is no longer possible to attach it, because the device is not 
large enough to fit the same number (116) of 16GB metaslabs.
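The cutoff can be reconstructed arithmetically. Assuming 116 metaslabs of 16GiB each, plus the space ZFS reserves outside the metaslab area (the four 256KiB labels and the boot block; the split into 4MiB at the front and 512KiB at the end is my reading of the on-disk layout, so treat that breakdown as an assumption), the total lands exactly on the smallest mkfile size that attached above:

```shell
# 116 metaslabs of 16GiB each
metaslabs=$((116 * 16 * 1024 * 1024 * 1024))
# Reserved space outside the metaslab area: assumed 4MiB at the front
# (two labels plus boot block) and 512KiB at the end (two labels);
# the 4718592-byte total is what the example sizes imply.
reserve=$((4 * 1024 * 1024 + 512 * 1024))
echo $((metaslabs + reserve))   # 1992869543936 -- one byte less fails to attach
```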

> ****THIS is just a thought, I am looking for thoughts and opinions on doing
> this... it prolly would be a bad idea, but hey, does it hurt to ask?****
> 
> I have been thinking, and would it be a good idea, to have on the 2TB
> drives, say 1TB or 500GB "files" and then mount them as mirrored?   So
> basically, have a 2TB hard drive, set up like:
> 
> (where drive1 and drive2 are the paths to the mount points)
> Mkfile 465gb /drive1/drive1part1
> Mkfile 465gb /drive1/drive1part2
> Mkfile 465gb /drive1/drive1part3
> Mkfile 465gb /drive1/drive1part4
> 
> Mkfile 465gb /drive2/drive2part1
> Mkfile 465gb /drive2/drive2part2
> Mkfile 465gb /drive2/drive2part3
> Mkfile 465gb /drive2/drive2part4
> 
> (I use 465gb, as 2TB = 2trillion bytes, / 4 = 465.66 gb)
> 
> And then add them to the zpool
> Zpool add medianas mirror /drive1/drive1part1 /drive2/drive2/part1
> Zpool add medianas mirror /drive1/drive1part2 /drive2/drive2/part2
> Zpool add medianas mirror /drive1/drive1part3 /drive2/drive2/part3
> Zpool add medianas mirror /drive1/drive1part4 /drive2/drive2/part4

This is not a good idea. File-backed vdevs sit on top of another filesystem, 
which adds overhead and another layer that can fail, and ZFS cannot manage 
caching and faults for the underlying disk the way it can with real devices.
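For the use case you describe, the usual approach is simply to mirror whole disks and let ZFS handle size differences as shown above; you grow the pool later by adding more mirrored pairs. (The device names below are examples only.)

```shell
# Create the pool from one mirrored pair of 2TB disks
zpool create medianas mirror c0t0d0 c0t1d0
# Later, when more space is needed, add another mirrored pair
zpool add medianas mirror c0t2d0 c0t3d0
```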


regards
victor

> And then, if a drive goes and I only have a 500gb and a 1.5tb drives, they
> could be replaced that way?
> 
> I am sure there are performance issues in doing this, but would the
> performance outweigh the possibility of hard drive failure and replacing
> drives?
> 
> Sorry for posting a novel, but I am just concerned about failure on bigger
> drives, and putting my media/files into basically what consists of a JBOD
> type array (on steroids).
> 
> Steve
> 
> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
