Dear All,
I'm sorry, I cannot provide verbose zpool information anymore. I was a
bit in a hurry to put the file system into production and that's why I
have reformatted the servers with ldiskfs.
On Tue, Aug 25, 2015 at 5:54 AM, Alexander I Kulyavtsev wrote:
Hmm,
I was assuming the question was about total space, as I struggled for some time
to understand why I had 99 TB of total available space per OSS after
installing ZFS Lustre, while the ldiskfs OSTs have 120 TB on the same hardware. The
20% difference was partially (10%) accounted for by different ra
I could be wrong, but I don't think that the original poster was asking
why the SIZE field of zpool list was wrong, but rather why the AVAIL
space in zfs list was lower than he expected.
I would find it easier to answer the question if I knew his drive count
and drive size.
Chris
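For what it's worth, here is a back-of-the-envelope sketch of that distinction in Python; the 10-disk raidz2 vdev of 6 TB drives and the 1/32 reservation are assumptions for illustration, not figures from this thread:

TB  = 1000**4          # decimal terabyte, how drives are sold
TiB = 1024**4          # binary tebibyte, how zfs/zpool report space

drives, drive_size = 10, 6 * TB          # hypothetical raidz2 vdev

raw = drives * drive_size                # "zpool list" SIZE is roughly the raw
print(f"SIZE  ~ {raw / TiB:5.1f} TiB")   # pool size, parity drives included

usable = (drives - 2) * drive_size       # raidz2 holds back two drives' worth of parity
slop = usable // 32                      # plus a small reservation ZFS keeps for itself
print(f"AVAIL ~ {(usable - slop) / TiB:5.1f} TiB")   # roughly what "zfs list" shows

Even before any raidz allocation overhead, the two commands answer different questions, so comparing SIZE on one system with AVAIL on another exaggerates the gap.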
Same question here.
6 TB out of 65 TB is about 9%. In our case about the same fraction was "missing."
My speculation was that it may happen if, at some point between zpool and Linux,
a value reported in TB is interpreted as TiB and then converted back to TB, or an
unneeded MB-to-MiB conversion is done twice, etc.
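A quick illustration of that speculation in Python (the 65 TB figure is only an example, not from a real pool): a single TB-to-TiB reinterpretation shaves off about 9%, and doing it twice about 17%:

TB  = 1000**4   # decimal terabyte, 10**12 bytes
TiB = 1024**4   # binary tebibyte, 2**40 bytes

expected = 65.0                       # hypothetical capacity you expect, in TB
shown_once  = expected * TB / TiB     # same bytes divided by 2**40 but labelled "TB"
shown_twice = shown_once * TB / TiB   # the same mistake applied a second time

print(f"one conversion : {shown_once:.1f} shown, {1 - shown_once / expected:.1%} 'missing'")
print(f"two conversions: {shown_twice:.1f} shown, {1 - shown_twice / expected:.1%} 'missing'")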
If you provide the "zpool list -v" output it might give us a little
clearer view of what you have going on.
Chris
On 08/19/2015 06:18 AM, Götz Waschk wrote:
Dear Lustre experts,
I have configured two different Lustre instances, both using Lustre
2.5.3, one with ldiskfs on RAID-6 hardware RAID and one using ZFS and
RAID-Z2, using the same type of hardware. I was wondering why I have 24 TB
less space available, when I should have the same amount of parity used.