corruption_errs

2018-08-27 Thread John Petrini
Hi List,

I'm seeing corruption errors when running btrfs device stats, but I'm
not sure what that means exactly. I've just completed a full scrub and
it reported no errors. I'm hoping someone here can enlighten me.
Thanks!

[/dev/sdd].write_io_errs   0
[/dev/sdd].read_io_errs    0
[/dev/sdd].flush_io_errs   0
[/dev/sdd].corruption_errs 331
[/dev/sdd].generation_errs 0
[/dev/sde].write_io_errs   0
[/dev/sde].read_io_errs    0
[/dev/sde].flush_io_errs   0
[/dev/sde].corruption_errs 324
[/dev/sde].generation_errs 0
[/dev/sdi].write_io_errs   0
[/dev/sdi].read_io_errs    0
[/dev/sdi].flush_io_errs   0
[/dev/sdi].corruption_errs 381
[/dev/sdi].generation_errs 0
[/dev/sdk].write_io_errs   0
[/dev/sdk].read_io_errs    0
[/dev/sdk].flush_io_errs   0
[/dev/sdk].corruption_errs 492
[/dev/sdk].generation_errs 0
[/dev/sdl].write_io_errs   0
[/dev/sdl].read_io_errs    0
[/dev/sdl].flush_io_errs   0
[/dev/sdl].corruption_errs 449
[/dev/sdl].generation_errs 0
[/dev/sdj].write_io_errs   0
[/dev/sdj].read_io_errs    0
[/dev/sdj].flush_io_errs   0
[/dev/sdj].corruption_errs 391
[/dev/sdj].generation_errs 0
[/dev/sdg].write_io_errs   0
[/dev/sdg].read_io_errs    0
[/dev/sdg].flush_io_errs   0
[/dev/sdg].corruption_errs 485
[/dev/sdg].generation_errs 0
[/dev/sdh].write_io_errs   0
[/dev/sdh].read_io_errs    0
[/dev/sdh].flush_io_errs   0
[/dev/sdh].corruption_errs 444
[/dev/sdh].generation_errs 0
[/dev/sdb].write_io_errs   0
[/dev/sdb].read_io_errs    0
[/dev/sdb].flush_io_errs   0
[/dev/sdb].corruption_errs 398
[/dev/sdb].generation_errs 0
[/dev/sdc].write_io_errs   0
[/dev/sdc].read_io_errs    0
[/dev/sdc].flush_io_errs   0
[/dev/sdc].corruption_errs 400
[/dev/sdc].generation_errs 0
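
For what it's worth, one thing I may try next (assuming the filesystem
is still mounted at /mnt/storage-array, as in my earlier thread) is
zeroing the counters and re-scrubbing to see whether the corruption
count climbs again:

# mount point is an assumption; -z prints the stats and then resets them
sudo btrfs device stats -z /mnt/storage-array
# -B runs the scrub in the foreground so it finishes before the next check
sudo btrfs scrub start -B /mnt/storage-array
sudo btrfs device stats /mnt/storage-array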


Re: Volume appears full but TB's of space available

2017-04-07 Thread John Petrini
The use case actually is not Ceph, I was just drawing a comparison
between Ceph's object replication strategy and BTRFS's chunk mirroring.

I do find the conversation interesting, however, as I work with Ceph
quite a lot but have always gone with the default XFS filesystem on
OSDs.


Re: Volume appears full but TB's of space available

2017-04-07 Thread John Petrini
When you say "running BTRFS raid1 on top of LVM RAID0 volumes" do you
mean creating two LVM RAID-0 volumes and then putting BTRFS RAID1 on
the two resulting logical volumes?
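
In other words, something roughly like this (just a sketch; the volume
group name, LV names, sizes and device paths below are placeholders):

# two LVM raid0 logical volumes, then btrfs raid1 across the pair
# (vg0, r0_a/r0_b, 4T and the device paths are placeholders)
sudo lvcreate --type raid0 --stripes 2 -L 4T -n r0_a vg0 /dev/sdb /dev/sdc
sudo lvcreate --type raid0 --stripes 2 -L 4T -n r0_b vg0 /dev/sdd /dev/sde
sudo mkfs.btrfs -d raid1 -m raid1 /dev/vg0/r0_a /dev/vg0/r0_b
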
___

John Petrini



On Fri, Apr 7, 2017 at 12:51 PM, Austin S. Hemmelgarn
 wrote:
> On 2017-04-07 12:04, Chris Murphy wrote:
>>
>> On Fri, Apr 7, 2017 at 5:41 AM, Austin S. Hemmelgarn
>>  wrote:
>>
>>> I'm rather fond of running BTRFS raid1 on top of LVM RAID0 volumes,
>>> which while it provides no better data safety than BTRFS raid10 mode,
>>> gets
>>> noticeably better performance.
>>
>>
>> This does in fact have better data safety than Btrfs raid10 because it
>> is possible to lose more than one drive without data loss. You can
>> only lose drives on one side of the mirroring, however. This is a
>> conventional raid0+1, so it's not as scalable as raid10 when it comes
>> to rebuild time.
>>
> That's a good point that I don't often remember, and I'm pretty sure that
> such an array will rebuild slower from a single device loss than BTRFS
> raid10 would, but most of that should be that BTRFS is smart enough to only
> rewrite what it has to.
>


Re: Volume appears full but TB's of space available

2017-04-07 Thread John Petrini
Hi Austin,

Thanks for taking the time to provide all of this great information!

You've got me curious about RAID1. If I were to convert the array to
RAID1, could it then sustain a multi-drive failure? Or, in other words,
do I actually end up with mirrored pairs, or can a chunk still be
mirrored to any disk in the array? Are there performance implications
to using RAID1 vs RAID10?
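
For reference, my understanding is that the conversion itself would
just be a balance with convert filters, something like this (mount
point taken from earlier in the thread):

# convert data and metadata chunks to raid1 (mount point assumed)
sudo btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/storage-array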


Re: Volume appears full but TB's of space available

2017-04-06 Thread John Petrini
Interesting. That's the first time I'm hearing this. If that's the
case, I feel like it's a stretch to call it RAID10 at all. It sounds a
lot more like basic replication, similar to Ceph, except that Ceph
understands failure domains and can therefore be configured to handle
device failure (albeit at a higher level).

I do of course keep backups, but I chose RAID10 for the mix of
performance and reliability. It doesn't seem worth losing 50% of
my usable space for the performance gain alone.

Thank you for letting me know about this. Knowing that, I think I may
have to reconsider my choice here. I've really been enjoying the
flexibility of BTRFS, which is why I switched to it in the first place,
but with RAID5/6 still experimental and what you've just told me, I'm
beginning to doubt that it's the right choice.

What's more concerning is that I haven't found a good way to monitor
BTRFS. I might be able to accept that the array can only handle a
single drive failure if I were confident that I could detect it, but so
far I haven't found a good solution for this.
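
Something I'm considering as a stopgap is a cron job that flags any
non-zero counter in the device stats, along these lines (the mount
point and alert target are placeholders):

# alert if any btrfs device stat counter is non-zero
# (/mnt/storage-array and the mail recipient are placeholders)
if sudo btrfs device stats /mnt/storage-array | grep -qvE ' 0$'; then
    echo "btrfs device stats reports errors" | mail -s "btrfs alert" root
fi
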
___

John Petrini



On Thu, Apr 6, 2017 at 10:42 PM, Chris Murphy  wrote:
> On Thu, Apr 6, 2017 at 7:31 PM, John Petrini  wrote:
>> Hi Chris,
>>
>> I've followed your advice and converted the system chunk to raid10. I
>> hadn't noticed it was raid0 and it's scary to think that I've been
>> running this array for three months like that. Thank you for saving me
>> a lot of pain down the road!
>
> For what it's worth, it is imperative to keep frequent backups with
> Btrfs raid10, it is in some ways more like raid0+1. It can only
> tolerate the loss of a single device. It will continue to function
> with 2+ devices in a very deceptive degraded state, until it
> inevitably hits dual missing chunks of metadata or data, and then it
> will faceplant. And then you'll be looking at a scrape operation.
>
> It's not like raid10 where you can lose one of each mirrored pair.
> Btrfs raid10 mirrors chunks, not drives. So your metadata and data are
> all distributed across all of the drives, and that in effect means you
> can only lose 1 drive. If you lose a 2nd drive, some amount of
> metadata and data will have been lost.
>
>
> --
> Chris Murphy


Re: Volume appears full but TB's of space available

2017-04-06 Thread John Petrini
Hi Chris,

I've followed your advice and converted the system chunk to raid10. I
hadn't noticed it was raid0 and it's scary to think that I've been
running this array for three months like that. Thank you for saving me
a lot of pain down the road!
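
For anyone who finds this thread later, the conversion was just a
filtered balance, something along these lines (mount point from earlier
in the thread; as far as I can tell, -f is required when acting on
system chunks):

# convert the system chunks to raid10 (mount point assumed)
sudo btrfs balance start -sconvert=raid10 -f /mnt/storage-array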

Also thank you for the clarification on the output - this is making a
lot more sense.

Regards,

John Petrini


Re: Volume appears full but TB's of space available

2017-04-06 Thread John Petrini
Okay so I came across this bug report:
https://bugzilla.redhat.com/show_bug.cgi?id=1243986

It looks like I'm just misinterpreting the output of btrfs fi df. What
should I be looking at to determine the actual free space? Is "Free
(estimated): 13.83TiB (min: 13.83TiB)" the proper metric?

Simply running df does not seem to report the usage properly:

/dev/sdj  25T   11T  5.9T  65% /mnt/storage-array
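
For comparison, the "Free (estimated)" figure I mentioned comes from
btrfs itself on the same mount point:

# shows btrfs's own free-space estimate for the filesystem
sudo btrfs filesystem usage /mnt/storage-array | grep 'Free (estimated)'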

Thank you,

John Petrini


Volume appears full but TB's of space available

2017-04-06 Thread John Petrini
Hello List,

I have a volume that appears to be full despite having multiple
terabytes of free space available. Just yesterday I ran a re-balance,
but it didn't change anything. I've just added two more disks to the
array and am currently in the process of another re-balance, but the
available space has not increased.

Currently I can still write to the volume (I haven't tried any large
writes), so I'm not sure if this is just a reporting issue or if writes
will eventually fail.

Any help is appreciated. Here are the details:

uname -a
Linux yuengling.johnpetrini.com 4.4.0-66-generic #87-Ubuntu SMP Fri
Mar 3 15:29:05 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

btrfs --version
btrfs-progs v4.4

sudo btrfs fi df /mnt/storage-array/
Data, RAID10: total=10.72TiB, used=10.72TiB
System, RAID0: total=128.00MiB, used=944.00KiB
Metadata, RAID10: total=14.00GiB, used=12.63GiB
GlobalReserve, single: total=512.00MiB, used=0.00B

sudo btrfs fi show /mnt/storage-array/
Label: none  uuid: e113ab87-7869-4ec7-9508-95691f455018
Total devices 10 FS bytes used 10.73TiB
devid    1 size 4.55TiB used 2.65TiB path /dev/sdj
devid    2 size 4.55TiB used 2.65TiB path /dev/sdk
devid    3 size 3.64TiB used 2.65TiB path /dev/sdd
devid    4 size 3.64TiB used 2.65TiB path /dev/sdf
devid    5 size 3.64TiB used 2.65TiB path /dev/sdg
devid    6 size 3.64TiB used 2.65TiB path /dev/sde
devid    7 size 3.64TiB used 2.65TiB path /dev/sdb
devid    8 size 3.64TiB used 2.65TiB path /dev/sdc
devid    9 size 9.10TiB used 149.00GiB path /dev/sdh
devid   10 size 9.10TiB used 149.00GiB path /dev/sdi

sudo btrfs fi usage /mnt/storage-array/
Overall:
Device size:   49.12TiB
Device allocated:   21.47TiB
Device unallocated:   27.65TiB
Device missing:  0.00B
Used:   21.45TiB
Free (estimated):   13.83TiB (min: 13.83TiB)
Data ratio:   2.00
Metadata ratio:   2.00
Global reserve:  512.00MiB (used: 0.00B)

Data,RAID10: Size:10.72TiB, Used:10.71TiB
   /dev/sdb    1.32TiB
   /dev/sdc    1.32TiB
   /dev/sdd    1.32TiB
   /dev/sde    1.32TiB
   /dev/sdf    1.32TiB
   /dev/sdg    1.32TiB
   /dev/sdh   72.00GiB
   /dev/sdi   72.00GiB
   /dev/sdj    1.32TiB
   /dev/sdk    1.32TiB

Metadata,RAID10: Size:14.00GiB, Used:12.63GiB
   /dev/sdb    1.75GiB
   /dev/sdc    1.75GiB
   /dev/sdd    1.75GiB
   /dev/sde    1.75GiB
   /dev/sdf    1.75GiB
   /dev/sdg    1.75GiB
   /dev/sdj    1.75GiB
   /dev/sdk    1.75GiB

System,RAID0: Size:128.00MiB, Used:944.00KiB
   /dev/sdb   16.00MiB
   /dev/sdc   16.00MiB
   /dev/sdd   16.00MiB
   /dev/sde   16.00MiB
   /dev/sdf   16.00MiB
   /dev/sdg   16.00MiB
   /dev/sdj   16.00MiB
   /dev/sdk   16.00MiB

Unallocated:
   /dev/sdb    2.31TiB
   /dev/sdc    2.31TiB
   /dev/sdd    2.31TiB
   /dev/sde    2.31TiB
   /dev/sdf    2.31TiB
   /dev/sdg    2.31TiB
   /dev/sdh    9.03TiB
   /dev/sdi    9.03TiB
   /dev/sdj    3.22TiB
   /dev/sdk    3.22TiB

Thank You,

John Petrini