Re: [zfs-discuss] avail drops to 32.1T from 40.8T after create -o mountpoint

2009-07-30 Thread Bill Sommerfeld
On Wed, 2009-07-29 at 06:50 -0700, Glen Gunselman wrote:
> There was a time when manufacturers knew about base-2, but those days
> are long gone.

Oh, they know all about base-2; it's just that disks seem bigger when
you use base-10 units.

Measure a disk's size in 10^(3n)-based KB/MB/GB/TB units, and you get a
bigger number than its size in the natural-for-software 2^(10n)-based
units.
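
For example (a quick sketch in Python; the "1 TB" figure is just for
illustration):

  size = 10**12          # a drive marketed as "1 TB"
  print(size / 10**12)   # 1.0    in base-10 TB
  print(size / 2**40)    # ~0.909 in base-2 units (TiB)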

So it's obvious which numbers end up on the marketing glossies, and it's
all downhill from there...

- Bill


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] avail drops to 32.1T from 40.8T after create -o mountpoint

2009-07-29 Thread Scott Lawson



Glen Gunselman wrote:
> > Here is the output from my J4500 with 48 x 1 TB disks. It is almost
> > the exact same configuration as yours. This is used for NetBackup.
> > As Mario just pointed out, "zpool list" includes the parity drive in
> > the space calculation whereas "zfs list" doesn't.
> >
> > [r...@xxx /]#> zpool status
>
> Scott,
>
> Thanks for the sample zpool status output.  I will be using the storage
> for NetBackup, also.  (I am booting the X4500 from a SAN - 6140 - and
> using a SL48 w/2 LTO4 drives.)
>
> Glen

Glen,

If you want any more info about our configuration, drop me a line. It
works very well and we have had no issues at all.

This system is a T5220 (323 GB RAM) with the 48 TB J4500 connected via
SAS. The system also has 3 dual-port fibre channel HBAs feeding 6 LTO4
drives in a 540-slot SL500. The server is 10 gig attached straight to
our network core routers and, needless to say, achieves very high
throughput. I have seen it pushing the full capacity of the SAS link to
the J4500 quite commonly. This is probably the choke point for this
system.

/Scott

--
___


Scott Lawson
Systems Architect
Manukau Institute of Technology
Information Communication Technology Services Private Bag 94006 Manukau
City Auckland New Zealand

Phone  : +64 09 968 7611
Fax: +64 09 968 7641
Mobile : +64 27 568 7611

mailto:sc...@manukau.ac.nz

http://www.manukau.ac.nz




perl -e 'print
$i=pack(c5,(41*2),sqrt(7056),(unpack(c,H)-2),oct(115),10);'

 


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] avail drops to 32.1T from 40.8T after create -o mountpoint

2009-07-29 Thread Victor Latushkin

On 29.07.09 16:59, Mark J Musante wrote:

On Tue, 28 Jul 2009, Glen Gunselman wrote:



# zpool list
NAME     SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
zpool1  40.8T   176K  40.8T 0%  ONLINE  -



# zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
zpool1 364K  32.1T  28.8K  /zpool1


This is normal, and admittedly somewhat confusing (see CR 6308817).  
Even if you had not created the additional zfs datasets, it still would 
have listed 40T and 32T.


Here's an example using five 1G disks in a raidz:

-bash-3.2# zpool list
NAME   SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
tank  4.97G   132K  4.97G 0%  ONLINE  -
-bash-3.2# zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
tank  98.3K  3.91G  28.8K  /tank

The AVAIL column in the zpool output shows 5G, whereas it shows 4G in 
the zfs list.  The difference is the 1G parity.  If we use raidz2, we'd 
expect 2G to be used for the parity, and this is borne out in a quick 
test using the same disks:


-bash-3.2# zpool list
NAME   SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
tank  4.97G   189K  4.97G 0%  ONLINE  -
-bash-3.2# zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
tank   105K  2.91G  32.2K  /tank


Contrast that with a five-way mirror:

-bash-3.2# zpool list
NAME   SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
tank  1016M  73.5K  1016M 0%  ONLINE  -
-bash-3.2# zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
tank    69K   984M    18K  /tank


The mirror case shows one more thing worth mentioning: the difference 
between the available space reported by zpool and by zfs is explained by 
a reservation ZFS sets aside for internal purposes - it is 32MB or 1/64 
of the pool capacity, whichever is bigger (32MB in this example). The 
same reservation applies to the RAID-Z case as well, though it is harder 
to see there ;-)
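
A quick sketch of that rule (illustrative Python; the exact number also
depends on how the pool rounds space internally):

  def reservation(pool_bytes):
      # 32MB or 1/64 of the pool capacity, whichever is bigger
      return max(32 * 2**20, pool_bytes // 64)

  pool = 1016 * 2**20                          # the 1016M mirror pool above
  print((pool - reservation(pool)) // 2**20)   # 984 -> matches zfs list AVAIL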


victor
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] avail drops to 32.1T from 40.8T after create -o mountpoint

2009-07-29 Thread Mark J Musante

On Wed, 29 Jul 2009, Glen Gunselman wrote:


Where would I see CR 6308817?  My usual search tools aren't finding it.


http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6308817


Regards,
markm
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] avail drops to 32.1T from 40.8T after create -o mountpoint

2009-07-29 Thread Glen Gunselman
> This is normal, and admittedly somewhat confusing (see CR 6308817).
> Even if you had not created the additional zfs datasets, it still
> would have listed 40T and 32T.
>

Mark, 

Thanks for the examples.  

Where would I see CR 6308817?  My usual search tools aren't finding it.

Glen
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] avail drops to 32.1T from 40.8T after create -o mountpoint

2009-07-29 Thread Glen Gunselman
> Here is the output from my J4500 with 48 x 1 TB disks. It is almost
> the exact same configuration as yours. This is used for NetBackup. As
> Mario just pointed out, "zpool list" includes the parity drive in the
> space calculation whereas "zfs list" doesn't.
>
> [r...@xxx /]#> zpool status
>

Scott,

Thanks for the sample zpool status output.  I will be using the storage for 
NetBackup, also.  (I am booting the X4500 from a SAN - 6140 - and using a SL48 
w/2 LTO4 drives.)

Glen
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] avail drops to 32.1T from 40.8T after create -o mountpoint

2009-07-29 Thread Glen Gunselman
> IIRC zpool list includes the parity drives in the disk space
> calculation and zfs list doesn't.
>
> Terabyte drives are more likely 900-something GB drives thanks to that
> base-2 vs. base-10 confusion HD manufacturers introduced. Using that
> 900GB figure I get to both 40TB and 32TB for with and without parity
> drives. Spares aren't counted.

I see format/verify shows the disk size as 931GB

Volume name = <>
ascii name  = 
bytes/sector    =  512
sectors = 1953525166
accessible sectors = 1953525133

Part      Tag    Flag     First Sector         Size         Last Sector
  0        usr    wm                256     931.51GB          1953508749
  1 unassigned    wm                  0            0                   0
  2 unassigned    wm                  0            0                   0
  3 unassigned    wm                  0            0                   0
  4 unassigned    wm                  0            0                   0
  5 unassigned    wm                  0            0                   0
  6 unassigned    wm                  0            0                   0
  8   reserved    wm         1953508750       8.00MB          1953525133
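
That 931.51GB and the drive's marketed "1 TB" are the same byte count in
different units; as a quick check (Python, using the sector count above):

  sectors = 1953525166          # from the format output
  size = sectors * 512          # 512-byte sectors
  print(size / 10**9)           # ~1000.2 -> the decimal "GB" on the label
  print(size / 2**30)           # ~931.5  -> the figure format reports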

I totally overlooked the count-the-spares/don't-count-the-spares issue. When 
they (the manufacturers) round up and then multiply by 48, the difference 
between what the sales brochure shows and what you end up with becomes 
significant.

There was a time when manufacturers knew about base-2, but those days are long 
gone.

Thanks for the reply,
Glen
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] avail drops to 32.1T from 40.8T after create -o mountpoint

2009-07-29 Thread Mark J Musante

On Tue, 28 Jul 2009, Glen Gunselman wrote:



# zpool list
NAME     SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
zpool1  40.8T   176K  40.8T 0%  ONLINE  -



# zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
zpool1 364K  32.1T  28.8K  /zpool1


This is normal, and admittedly somewhat confusing (see CR 6308817).  Even 
if you had not created the additional zfs datasets, it still would have 
listed 40T and 32T.


Here's an example using five 1G disks in a raidz:

-bash-3.2# zpool list
NAME   SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
tank  4.97G   132K  4.97G 0%  ONLINE  -
-bash-3.2# zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
tank  98.3K  3.91G  28.8K  /tank

The AVAIL column in the zpool output shows 5G, whereas it shows 4G in the 
zfs list.  The difference is the 1G parity.  If we use raidz2, we'd expect 
2G to be used for the parity, and this is borne out in a quick test using 
the same disks:


-bash-3.2# zpool list
NAME   SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
tank  4.97G   189K  4.97G 0%  ONLINE  -
-bash-3.2# zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
tank   105K  2.91G  32.2K  /tank


Contrast that with a five-way mirror:

-bash-3.2# zpool list
NAME   SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
tank  1016M  73.5K  1016M 0%  ONLINE  -
-bash-3.2# zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
tank    69K   984M    18K  /tank

Now they both show the pool capacity to be around 1G.
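
As a rough back-of-the-envelope check of those numbers (a sketch only; it
ignores the small internal reservation and metadata overhead, which is why
the reported figures come out slightly lower):

  # expected zfs-visible space for one N-disk raidz vdev (illustrative)
  def raidz_usable(disks, disk_size, parity):
      return (disks - parity) * disk_size

  disk = 1.0                       # the 1G disks used above
  print(raidz_usable(5, disk, 1))  # raidz  -> 4.0G, vs. the 3.91G reported
  print(raidz_usable(5, disk, 2))  # raidz2 -> 3.0G, vs. the 2.91G reported
  print(disk)                      # 5-way mirror keeps one copy -> ~1G, vs. 984M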


Regards,
markm
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] avail drops to 32.1T from 40.8T after create -o mountpoint

2009-07-28 Thread Scott Lawson



Glen Gunselman wrote:

This is my first ZFS pool.  I'm using an X4500 with 48 TB drives.  Solaris 
is 5/09.  After the create, zfs list shows 40.8T, but after creating 4 
filesystems/mountpoints the available drops 8.8TB to 32.1TB.  What happened 
to the 8.8TB?  Is this much overhead normal?


zpool create -f zpool1 raidz c1t0d0 c2t0d0 c3t0d0 c5t0d0 c6t0d0 \
   raidz c1t1d0 c2t1d0 c3t1d0 c4t1d0 c5t1d0 \
   raidz c6t1d0 c1t2d0 c2t2d0 c3t2d0 c4t2d0 \
   raidz c5t2d0 c6t2d0 c1t3d0 c2t3d0 c3t3d0 \
   raidz c4t3d0 c5t3d0 c6t3d0 c1t4d0 c2t4d0 \
   raidz c3t4d0 c5t4d0 c6t4d0 c1t5d0 c2t5d0 \
   raidz c3t5d0 c4t5d0 c5t5d0 c6t5d0 c1t6d0 \
   raidz c2t6d0 c3t6d0 c4t6d0 c5t6d0 c6t6d0 \
   raidz c1t7d0 c2t7d0 c3t7d0 c4t7d0 c5t7d0 \
   spare c6t7d0 c4t0d0 c4t4d0
zpool list
NAME     SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
zpool1  40.8T   176K  40.8T   0%  ONLINE  -
## create multiple file systems in the pool

zfs create -o mountpoint=/backup1fs zpool1/backup1fs
zfs create -o mountpoint=/backup2fs zpool1/backup2fs
zfs create -o mountpoint=/backup3fs zpool1/backup3fs
zfs create -o mountpoint=/backup4fs zpool1/backup4fs
zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
zpool1             364K  32.1T  28.8K  /zpool1
zpool1/backup1fs  28.8K  32.1T  28.8K  /backup1fs
zpool1/backup2fs  28.8K  32.1T  28.8K  /backup2fs
zpool1/backup3fs  28.8K  32.1T  28.8K  /backup3fs
zpool1/backup4fs  28.8K  32.1T  28.8K  /backup4fs

Thanks,
Glen
(P.S. As I said, this is my first time working with ZFS; if this is a dumb 
question, just say so.)
  
Here is the output from my J4500 with 48 x 1 TB disks. It is almost the 
exact same configuration as yours. This is used for NetBackup. As Mario 
just pointed out, "zpool list" includes the parity drive in the space 
calculation whereas "zfs list" doesn't.

[r...@xxx /]#> zpool status

errors: No known data errors

pool: nbupool
state: ONLINE
scrub: none requested
config:

NAME          STATE     READ WRITE CKSUM
nbupool       ONLINE       0     0     0
  raidz1      ONLINE       0     0     0
    c2t2d0    ONLINE       0     0     0
    c2t3d0    ONLINE       0     0     0
    c2t4d0    ONLINE       0     0     0
    c2t5d0    ONLINE       0     0     0
    c2t6d0    ONLINE       0     0     0
  raidz1      ONLINE       0     0     0
    c2t7d0    ONLINE       0     0     0
    c2t8d0    ONLINE       0     0     0
    c2t9d0    ONLINE       0     0     0
    c2t10d0   ONLINE       0     0     0
    c2t11d0   ONLINE       0     0     0
  raidz1      ONLINE       0     0     0
    c2t12d0   ONLINE       0     0     0
    c2t13d0   ONLINE       0     0     0
    c2t14d0   ONLINE       0     0     0
    c2t15d0   ONLINE       0     0     0
    c2t16d0   ONLINE       0     0     0
  raidz1      ONLINE       0     0     0
    c2t17d0   ONLINE       0     0     0
    c2t18d0   ONLINE       0     0     0
    c2t19d0   ONLINE       0     0     0
    c2t20d0   ONLINE       0     0     0
    c2t21d0   ONLINE       0     0     0
  raidz1      ONLINE       0     0     0
    c2t22d0   ONLINE       0     0     0
    c2t23d0   ONLINE       0     0     0
    c2t24d0   ONLINE       0     0     0
    c2t25d0   ONLINE       0     0     0
    c2t26d0   ONLINE       0     0     0
  raidz1      ONLINE       0     0     0
    c2t27d0   ONLINE       0     0     0
    c2t28d0   ONLINE       0     0     0
    c2t29d0   ONLINE       0     0     0
    c2t30d0   ONLINE       0     0     0
    c2t31d0   ONLINE       0     0     0
  raidz1      ONLINE       0     0     0
    c2t32d0   ONLINE       0     0     0
    c2t33d0   ONLINE       0     0     0
    c2t34d0   ONLINE       0     0     0
    c2t35d0   ONLINE       0     0     0
    c2t36d0   ONLINE       0     0     0
  raidz1      ONLINE       0     0     0
    c2t37d0   ONLINE       0     0     0
    c2t38d0   ONLINE       0     0     0
    c2t39d0   ONLINE       0     0     0
    c2t40d0   ONLINE       0     0     0
    c2t41d0   ONLINE       0     0     0
  raidz1      ONLINE       0     0     0
    c2t42d0   ONLINE       0     0     0
    c2t43d0   ONLINE       0     0     0
    c2t44d0   ONLINE       0     0     0
    c2t45d0   ONLINE       0     0     0
    c2t46d0   ONLINE       0     0     0
spares
  c2t47d0     AVAIL
  c2t48d0     AVAIL
  c2t49d0     AVAIL

errors: No known data errors
[r...@xxx /]#> zfs list
NAME                    USED  AVAIL  REFER  MOUNTPOINT
NBU                     113G  20.6G   113G  /NBU
nbupool                27.5T  4.58T  30.4K  /nbupool
nbupool/backup1        6.90T  4.58T  6.90T  /backup1
nbupool/backup2        6.79T  4.58T  6.79T  /backup2
nbupool/backup3        7.28T  4.58T  7.28T  /backup3
nbupool/backup4        6.43T  4.58T  6.43T  /backup4
nbupool/nbushareddisk  20.1G  4.58T  20.1G  /nbushareddisk
nbupool/zfscachetest   69.2G  4.58T  69.2G  /nbupool/zfscachetest

[r...@xxx /]#> zpool list
NAME      SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
NBU       136G   113G  22.8G  83%  ONLINE  -
nbupool  40.8T  34.4T  6.37T  84%  ONLINE  -
[r...@solnbu1 /]#>


--
___


Scott Lawson
Systems Architect
Manukau Institute of Technology
Information Communication Technology Services Private Bag 94006 Manukau
City Auckland New Zealand

Phone  : +64 09 968 7611
Fax: +64 09 968 7641
Mobile : +64 27 568 7611

mailto:sc...@manukau.ac.nz

http://www.manukau.ac.nz




perl -e 'print
$i=pack(c5,(41*2),sqrt(7056),(unpack(c,H)-2),oct(115),10);'

 


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] avail drops to 32.1T from 40.8T after create -o mountpoint

2009-07-28 Thread Mario Goebbels

This is my first ZFS pool.  I'm using an X4500 with 48 TB drives.  Solaris 
is 5/09.  After the create, zfs list shows 40.8T, but after creating 4 
filesystems/mountpoints the available drops 8.8TB to 32.1TB.  What happened 
to the 8.8TB?  Is this much overhead normal?


IIRC zpool list includes the parity drives in the disk space calculation 
and zfs list doesn't.


Terabyte drives are more likely 900-something GB drives thanks to that 
base-2 vs. base-10 confusion HD manufacturers introduced. Using that 
900GB figure I get to both 40TB and 32TB for with and without parity 
drives. Spares aren't counted.
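
As a rough sketch of how those numbers fall out for the 48-drive layout in
the original post (approximate Python; it ignores the pool's small internal
reservation and per-disk labels):

  tib_per_drive = 10**12 / 2**40   # a marketed "1 TB" drive is ~0.909 TiB
  pool_disks = 9 * 5               # nine 5-disk raidz vdevs; the 3 spares don't count
  data_disks = 9 * 4               # one disk's worth of parity per vdev
  print(round(pool_disks * tib_per_drive, 1))  # ~40.9 -> close to zpool list's 40.8T
  print(round(data_disks * tib_per_drive, 1))  # ~32.7 -> near zfs list's 32.1T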


Regards,
-mg
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] avail drops to 32.1T from 40.8T after create -o mountpoint

2009-07-28 Thread Glen Gunselman
This is my first ZFS pool.  I'm using an X4500 with 48 TB drives.  Solaris 
is 5/09.  After the create, zfs list shows 40.8T, but after creating 4 
filesystems/mountpoints the available drops 8.8TB to 32.1TB.  What happened 
to the 8.8TB?  Is this much overhead normal?


zpool create -f zpool1 raidz c1t0d0 c2t0d0 c3t0d0 c5t0d0 c6t0d0 \
   raidz c1t1d0 c2t1d0 c3t1d0 c4t1d0 c5t1d0 \
   raidz c6t1d0 c1t2d0 c2t2d0 c3t2d0 c4t2d0 \
   raidz c5t2d0 c6t2d0 c1t3d0 c2t3d0 c3t3d0 \
   raidz c4t3d0 c5t3d0 c6t3d0 c1t4d0 c2t4d0 \
   raidz c3t4d0 c5t4d0 c6t4d0 c1t5d0 c2t5d0 \
   raidz c3t5d0 c4t5d0 c5t5d0 c6t5d0 c1t6d0 \
   raidz c2t6d0 c3t6d0 c4t6d0 c5t6d0 c6t6d0 \
   raidz c1t7d0 c2t7d0 c3t7d0 c4t7d0 c5t7d0 \
   spare c6t7d0 c4t0d0 c4t4d0
zpool list
NAME     SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
zpool1  40.8T   176K  40.8T   0%  ONLINE  -
## create multiple file systems in the pool
zfs create -o mountpoint=/backup1fs zpool1/backup1fs
zfs create -o mountpoint=/backup2fs zpool1/backup2fs
zfs create -o mountpoint=/backup3fs zpool1/backup3fs
zfs create -o mountpoint=/backup4fs zpool1/backup4fs
zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
zpool1             364K  32.1T  28.8K  /zpool1
zpool1/backup1fs  28.8K  32.1T  28.8K  /backup1fs
zpool1/backup2fs  28.8K  32.1T  28.8K  /backup2fs
zpool1/backup3fs  28.8K  32.1T  28.8K  /backup3fs
zpool1/backup4fs  28.8K  32.1T  28.8K  /backup4fs

Thanks,
Glen
(P.S. As I said, this is my first time working with ZFS; if this is a dumb 
question, just say so.)
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss