This is my first ZFS pool. I'm using an X4500 with 48 x 1 TB drives, running Solaris 5/09.
Right after the create, zpool list shows 40.8T, but after creating 4 filesystems/mountpoints, zfs list shows only 32.1T available, a drop of 8.8TB. What happened to the 8.8TB? Is this much overhead normal?
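
My only guess so far is raidz parity: with nine 5-disk raidz vdevs, one disk per vdev goes to parity, so roughly a fifth of the raw 40.8T would never show up as usable space. Here's my back-of-the-envelope check (just an assumption on my part, not something I've confirmed):

echo "scale=2; 40.8 / 5" | bc          # 8.16T that would go to parity
echo "scale=2; 40.8 - 40.8 / 5" | bc   # 32.64T left, close to the 32.1T I see

The numbers don't line up exactly, though, so I'd appreciate a sanity check.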


zpool create -f zpool1 raidz c1t0d0 c2t0d0 c3t0d0 c5t0d0 c6t0d0 \
                       raidz c1t1d0 c2t1d0 c3t1d0 c4t1d0 c5t1d0 \
                       raidz c6t1d0 c1t2d0 c2t2d0 c3t2d0 c4t2d0 \
                       raidz c5t2d0 c6t2d0 c1t3d0 c2t3d0 c3t3d0 \
                       raidz c4t3d0 c5t3d0 c6t3d0 c1t4d0 c2t4d0 \
                       raidz c3t4d0 c5t4d0 c6t4d0 c1t5d0 c2t5d0 \
                       raidz c3t5d0 c4t5d0 c5t5d0 c6t5d0 c1t6d0 \
                       raidz c2t6d0 c3t6d0 c4t6d0 c5t6d0 c6t6d0 \
                       raidz c1t7d0 c2t7d0 c3t7d0 c4t7d0 c5t7d0 \
                       spare c6t7d0 c4t0d0 c4t4d0
zpool list
NAME     SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
zpool1  40.8T   176K  40.8T     0%  ONLINE  -
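
If the vdev layout matters for the answer, I'm assuming this is the right way to confirm it (output omitted to keep this short):

zpool status zpool1    # should show each of the 9 raidz groups plus the 3 spares
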
## create multiple file systems in the pool
zfs create -o mountpoint=/backup1fs zpool1/backup1fs
zfs create -o mountpoint=/backup2fs zpool1/backup2fs
zfs create -o mountpoint=/backup3fs zpool1/backup3fs
zfs create -o mountpoint=/backup4fs zpool1/backup4fs
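
I'm assuming the mountpoints themselves can be verified with something like this, if that's relevant:

zfs get -r mountpoint zpool1    # list the mountpoint property for the pool and its filesystems
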
zfs list
NAME               USED  AVAIL  REFER  MOUNTPOINT
zpool1             364K  32.1T  28.8K  /zpool1
zpool1/backup1fs  28.8K  32.1T  28.8K  /backup1fs
zpool1/backup2fs  28.8K  32.1T  28.8K  /backup2fs
zpool1/backup3fs  28.8K  32.1T  28.8K  /backup3fs
zpool1/backup4fs  28.8K  32.1T  28.8K  /backup4fs
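
If per-dataset accounting would help diagnose this, I can also pull the raw properties; my assumption is that something like this is the right query:

zfs get used,available,referenced,reservation zpool1 zpool1/backup1fs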

Thanks,
Glen
(PS: As I said, this is my first time working with ZFS, so if this is a dumb question, just say so.)