Hi Jim,

First of all, I'm sure this behaviour is either a bug or was changed at some point in the past, because I've used this configuration many times.


If I understand you correctly, it is as you said.
Here's an example showing what happened: the SAM-FS is filled to only 6%, while the zvol is full.




*archiv1:~ # zfs list*
NAME              USED  AVAIL  REFER  MOUNTPOINT
sampool           405G  2.49G    18K  /sampool
sampool/samdev1   405G     0K   405G  -


*archiv1:~ # samcmd f*

File systems samcmd     4.6.85 11:18:32 Jul 28 2009
samcmd on archiv1

ty  eq  state  device_name                    status       high  low  mountpoint  server
ms   1  on     samfs1                         m----2----d  80%   70%  /samfs
md  11  on     /dev/zvol/dsk/sampool/samdev1


*archiv1:~ # samcmd m*

Mass storage status samcmd     4.6.85 11:19:09 Jul 28 2009
samcmd on archiv1

ty  eq  status       use  state  ord  capacity  free      ra  part  high  low
ms   1  m----2----d   6%  on          405.000G  380.469G  1M  16    80%   70%
md  11                6%  on     0    405.000G  380.469G
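For reference, the usual explanation for this pattern is that a zvol created without the sparse option gets a refreservation equal to its volsize, so `zfs list` reports it as fully used regardless of how much the file system on top has actually written. A minimal sketch of checking and relaxing the reservation (dataset names taken from the output above; this is a sketch, not a recommendation for production):

```shell
# Check whether the zvol carries a full reservation
# (a non-sparse zvol has refreservation equal to its volsize):
zfs get volsize,refreservation sampool/samdev1

# Option 1: create the zvol sparse in the first place (-s):
#   zfs create -s -V 405G sampool/samdev1

# Option 2: drop the reservation on an existing zvol; the trade-off
# is that writes to the volume can fail if the pool itself fills up:
zfs set refreservation=none sampool/samdev1
```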







Jim Klimov wrote:
Hello tobex,

While the original question may have been answered by the posts above, I'm interested: when you say "according to zfs list the zvol is 100% full", does it only mean that it uses all 20 GB on the pool (like a non-sparse uncompressed file), or does it also imply that you can't write into the samfs although its structures are only 20% used?

If by any chance the latter, I think it would count as a bug. If the former, see the posts above for explanations and workarounds :)

Thanks in advance for such detail,
Jim
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
