Re: [zfs-discuss] zpool list vs zfs list, size differs...

2009-02-07 Thread Johan Andersson

Tomas Ögren wrote:

On 07 February, 2009 - Johan Andersson sent me these 1,5K bytes:

Hi,

New to OpenSolaris and ZFS...
I'm wondering about a size difference I see on my newly installed
OpenSolaris system, a homebuilt AMD Phenom box with SATA3 disks...


[code]
jo...@krynn:~$ zpool list
NAME    SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
rpool   696G  7.67G   688G   1%  ONLINE  -
zpool  2.72T   135K  2.72T   0%  ONLINE  -
[/code]



The pool has disks that can hold ...
4 * 750*10^9 / 1024^4 =~ 2.72TB


[code]
jo...@krynn:~$ zfs list
NAME                      USED  AVAIL  REFER  MOUNTPOINT
rpool                    11.5G   674G    72K  /rpool
rpool/ROOT               3.78G   674G    18K  legacy
rpool/ROOT/opensolaris   3.78G   674G  3.65G  /
rpool/dump               3.87G   674G  3.87G  -
rpool/export             18.7M   674G    19K  /export
rpool/export/home        18.7M   674G    50K  /export/home
rpool/export/home/admin  18.6M   674G  18.6M  /export/home/admin
rpool/swap               3.87G   677G    16K  -
zpool                    94.3K  2.00T  26.9K  /zpool
[/code]



In that pool, due to raidz, you can store about ...
3 * 750*10^9 / 1024^4 =~ 2TB
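
(As a quick sanity check, a minimal sketch with bc, assuming each
"750GB" disk is 750 * 10^9 bytes and that the TB figures above are
really TiB:)

[code]
$ echo 'scale=4; 4 * 750 * 10^9 / 1024^4' | bc   # raw size, the zpool list view
2.7284
$ echo 'scale=4; 3 * 750 * 10^9 / 1024^4' | bc   # minus one parity disk, the zfs list view
2.0463
[/code]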

The disks are all 750GB SATA3 disks, so why does zpool list show the
raidz pool as 2.72TB while zfs list shows the /zpool filesystem as 2.0TB?

Is this a limit of my server in some way or something I can "tune" up?



Space worth about one disk (1 x 750GB) is lost to parity with raidz1...
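
(More generally, a minimal sketch for raidz1 with N equal disks;
raidz1_usable is a hypothetical helper, and most of the small gap left
between this estimate and the 2.00T that zfs list reports is internal
pool metadata/reservation overhead:)

[code]
# hypothetical helper: rough usable space (in TiB) for a raidz1 of equal disks
raidz1_usable() {
  disks=$1; size_gb=$2
  # one disk's worth of capacity goes to parity
  echo "scale=4; ($disks - 1) * $size_gb * 10^9 / 1024^4" | bc
}
raidz1_usable 4 750   # prints 2.0463, close to the 2.00T zfs list shows
[/code]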

/Tomas

Thanks,
I didn't realize that the pool size included the parity data...
I should have, though... if I had bothered to do the math.

*duh*


/Johan A


[zfs-discuss] zpool list vs zfs list, size differs...

2009-02-07 Thread Johan Andersson
Hi,

New to OpenSolaris and ZFS...
I'm wondering about a size difference I see on my newly installed
OpenSolaris system, a homebuilt AMD Phenom box with SATA3 disks...

[code]
jo...@krynn:~$ zpool list
NAME    SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
rpool   696G  7.67G   688G   1%  ONLINE  -
zpool  2.72T   135K  2.72T   0%  ONLINE  -

jo...@krynn:~$ zfs list
NAME                      USED  AVAIL  REFER  MOUNTPOINT
rpool                    11.5G   674G    72K  /rpool
rpool/ROOT               3.78G   674G    18K  legacy
rpool/ROOT/opensolaris   3.78G   674G  3.65G  /
rpool/dump               3.87G   674G  3.87G  -
rpool/export             18.7M   674G    19K  /export
rpool/export/home        18.7M   674G    50K  /export/home
rpool/export/home/admin  18.6M   674G  18.6M  /export/home/admin
rpool/swap               3.87G   677G    16K  -
zpool                    94.3K  2.00T  26.9K  /zpool

jo...@krynn:~$ zpool status
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c3d0s0  ONLINE       0     0     0
            c4d0s0  ONLINE       0     0     0

errors: No known data errors

  pool: zpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        zpool       ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            c3d1    ONLINE       0     0     0
            c4d1    ONLINE       0     0     0
            c6d0    ONLINE       0     0     0
            c6d1    ONLINE       0     0     0

errors: No known data errors
[/code]

The disks are all 750GB SATA3 disks, so why does zpool list show the
raidz pool as 2.72TB while zfs list shows the /zpool filesystem as 2.0TB?
Is this a limit of my server in some way or something I can "tune" up?

/Johan A