Re: [zfs-discuss] zpool vs df

2009-03-09 Thread Lars-Gunnar Persson

This was enlightening! Thanks a lot and sorry for the noise.

Lars-Gunnar Persson

On 9 March 2009, at 14:27, Tim wrote:




On Mon, Mar 9, 2009 at 7:07 AM, Lars-Gunnar Persson wrote:
I have an interesting situation. I've created two pools, one named "Data" and another named "raid5". Check the details here:


bash-3.00# zpool list
NAME    SIZE   USED   AVAIL  CAP  HEALTH  ALTROOT
Data   10.7T  9.82T    892G  91%  ONLINE  -
raid5  10.9T   145K   10.9T   0%  ONLINE  -

As you see, the sizes are approximately the same. If I run the df  
command, it reports:


bash-3.00# df -h /Data
Filesystem             size   used  avail capacity  Mounted on
Data                    11T   108M   154G     1%    /Data
bash-3.00# df -h /raid5
Filesystem             size   used  avail capacity  Mounted on
raid5                  8.9T    40K   8.9T     1%    /raid5

You see that Data has 11 TB when zpool reported 10.7 TB, and raid5 has 10.9 TB in zpool but only 8.9 TB according to df. That's a difference of 2 TB. Where did it go?


Any explanation would be welcome.

Regards,

Lars-Gunnar Persson

Parity drives. zpool list shows the total size including parity drives; df shows the usable space left after subtracting parity.
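
As a rough sanity check (illustrative arithmetic, assuming six equal-size disks in the raidz1, as the zpool status further down shows): one disk's worth of capacity goes to parity, so only about 5/6 of the raw size is usable:

$ echo '10.9 * 5 / 6' | bc -l
9.08333333333333333333

which is in the same ballpark as the 8.9T that df reports, once ZFS metadata and the slightly different rounding of zpool and df are taken into account.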


--Tim




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool vs df

2009-03-09 Thread Lars-Gunnar Persson

Here is what zpool status reports:

bash-3.00# zpool status
  pool: Data
 state: ONLINE
 scrub: none requested
config:

NAME                       STATE     READ WRITE CKSUM
Data                       ONLINE       0     0     0
  c4t5000402001FC442Cd0    ONLINE       0     0     0

errors: No known data errors

  pool: raid5
 state: ONLINE
 scrub: none requested
config:

NAME                                 STATE     READ WRITE CKSUM
raid5                                ONLINE       0     0     0
  raidz1                             ONLINE       0     0     0
    c7t6000402001FC442C609DCA22d0    ONLINE       0     0     0
    c7t6000402001FC442C609DCA4Ad0    ONLINE       0     0     0
    c7t6000402001FC442C609DCAA2d0    ONLINE       0     0     0
    c7t6000402001FC442C609DCABFd0    ONLINE       0     0     0
    c7t6000402001FC442C609DCADBd0    ONLINE       0     0     0
    c7t6000402001FC442C609DCAF8d0    ONLINE       0     0     0


errors: No known data errors
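
For comparison, 'zfs list' accounts for space the same way df does, i.e. after raidz parity has been subtracted, so it should agree with df rather than with 'zpool list'. Something along these lines (the output shown is a placeholder, not taken from the system above):

bash-3.00# zfs list -o name,used,avail,refer,mountpoint Data raid5
NAME    USED  AVAIL  REFER  MOUNTPOINT
Data     ...    ...    ...  /Data
raid5    ...    ...    ...  /raid5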


On 9 March 2009, at 14:29, Tomas Ögren wrote:


On 09 March, 2009 - Lars-Gunnar Persson sent me these 1,1K bytes:


I have an interesting situation. I've created two pools, one named "Data" and another named "raid5". Check the details here:

bash-3.00# zpool list
NAME    SIZE   USED   AVAIL  CAP  HEALTH  ALTROOT
Data   10.7T  9.82T    892G  91%  ONLINE  -
raid5  10.9T   145K   10.9T   0%  ONLINE  -

As you see, the sizes are approximately the same. If I run the df
command, it reports:

bash-3.00# df -h /Data
Filesystem             size   used  avail capacity  Mounted on
Data                    11T   108M   154G     1%    /Data
bash-3.00# df -h /raid5
Filesystem             size   used  avail capacity  Mounted on
raid5                  8.9T    40K   8.9T     1%    /raid5

You see that Data has 11 TB when zpool reported 10.7 TB, and raid5 has 10.9 TB in zpool but only 8.9 TB according to df. That's a difference of 2 TB. Where did it go?


To your raid5 (raidz) parity.

Check 'zpool status' to see how your two pools differ. 'zpool list' shows
the raw disk space you have; zfs/df shows how much you can actually store there.

/Tomas
--
Tomas Ögren, st...@acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se



.----------------------------------------------------------------------.
| Lars-Gunnar Persson                                                    |
| Head of IT                                                             |
|                                                                        |
| Nansen senteret for miljø og fjernmåling                               |
| Address : Thormøhlensgate 47, 5006 Bergen                              |
| Direct  : 55 20 58 31, switchboard: 55 20 58 00, fax: 55 20 58 01      |
| Internet: http://www.nersc.no, e-mail: lars-gunnar.pers...@nersc.no    |
'----------------------------------------------------------------------'



Re: [zfs-discuss] zpool vs df

2009-03-09 Thread Tomas Ögren
On 09 March, 2009 - Lars-Gunnar Persson sent me these 1,1K bytes:

> I have an interesting situation. I've created two pools, one named
> "Data" and another named "raid5". Check the details here:
>
> bash-3.00# zpool list
> NAME    SIZE   USED   AVAIL  CAP  HEALTH  ALTROOT
> Data   10.7T  9.82T    892G  91%  ONLINE  -
> raid5  10.9T   145K   10.9T   0%  ONLINE  -
>
> As you see, the sizes are approximately the same. If I run the df  
> command, it reports:
>
> bash-3.00# df -h /Data
> Filesystem             size   used  avail capacity  Mounted on
> Data                    11T   108M   154G     1%    /Data
> bash-3.00# df -h /raid5
> Filesystem             size   used  avail capacity  Mounted on
> raid5                  8.9T    40K   8.9T     1%    /raid5
>
> You see that Data has 11 TB when zpool reported 10.7 TB, and raid5 has
> 10.9 TB in zpool but only 8.9 TB according to df. That's a difference
> of 2 TB. Where did it go?

To your raid5 (raidz) parity.

Check 'zpool status' to see how your two pools differ. 'zpool list' shows
the raw disk space you have; zfs/df shows how much you can actually store there.
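
A quick way to see that the "missing" ~2 TB is one member disk's worth of parity (illustrative arithmetic, assuming the six raidz1 disks are equal in size):

$ echo '10.9 / 6' | bc -l
1.81666666666666666666

i.e. roughly the 2 TB gap between the 10.9T that 'zpool list' reports and the 8.9T that df reports.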

/Tomas
-- 
Tomas Ögren, st...@acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se


[zfs-discuss] zpool vs df

2009-03-09 Thread Lars-Gunnar Persson
I have an interesting situation. I've created two pools, one named "Data" and another named "raid5". Check the details here:


bash-3.00# zpool list
NAME    SIZE   USED   AVAIL  CAP  HEALTH  ALTROOT
Data   10.7T  9.82T    892G  91%  ONLINE  -
raid5  10.9T   145K   10.9T   0%  ONLINE  -

As you see, the sizes are approximately the same. If I run the df  
command, it reports:


bash-3.00# df -h /Data
Filesystem             size   used  avail capacity  Mounted on
Data                    11T   108M   154G     1%    /Data
bash-3.00# df -h /raid5
Filesystem             size   used  avail capacity  Mounted on
raid5                  8.9T    40K   8.9T     1%    /raid5

You see that Data has 11 TB when zpool reported 10.7 TB, and raid5 has 10.9 TB in zpool but only 8.9 TB according to df. That's a difference of 2 TB. Where did it go?


Any explanation would be welcome.

Regards,

Lars-Gunnar Persson
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss