On 2017-04-08 01:12, Duncan wrote:
> Austin S. Hemmelgarn posted on Fri, 07 Apr 2017 07:41:22 -0400 as
> excerpted:
>> 2. Results from 'btrfs scrub'. This is somewhat tricky because scrub is
>> either asynchronous or blocks for a _long_ time. The simplest option
>> I've found is to fire off an asynchronous scrub to run during down-time,
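
For what it's worth, a minimal sketch of that asynchronous pattern (the
mount point matches the one used elsewhere in this thread; adjust to
your setup):

# Kick off a scrub in the background during a quiet window; the command
# returns immediately while the scrub keeps running:
btrfs scrub start /mnt/storage-array

# Later, collect the results (-d breaks the statistics out per device):
btrfs scrub status -d /mnt/storage-array

# Passing -B to 'scrub start' would instead block until the scrub finishes.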

On 2017-04-07 13:05, John Petrini wrote:
> The use case actually is not Ceph, I was just drawing a comparison
> between Ceph's object replication strategy vs BTRFS's chunk mirroring.
> I do find the conversation interesting, however, as I work with Ceph
> quite a lot but have always gone with the default XFS filesystem on
> OSDs.
That's actually a really good comparison that I hadn't thought of
before. From what I can tell from my limited understanding...

On 2017-04-07 12:58, John Petrini wrote:
> When you say "running BTRFS raid1 on top of LVM RAID0 volumes" do you
> mean creating two LVM RAID-0 volumes and then putting BTRFS RAID1 on
> the two resulting logical volumes?
Yes, although it doesn't have to be LVM, it could just as easily be MD
or even hardware RAID...
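
A minimal sketch of that layout using MD, assuming four disks with
hypothetical names sda through sdd:

# Two striped (RAID0) pairs:
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda /dev/sdb
mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/sdc /dev/sdd

# BTRFS raid1 for both data and metadata across the two striped arrays:
mkfs.btrfs -d raid1 -m raid1 /dev/md0 /dev/md1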

On 2017-04-07 12:28, Chris Murphy wrote:
> On Fri, Apr 7, 2017 at 7:50 AM, Austin S. Hemmelgarn wrote:
>> If you care about both performance and data safety, I would suggest using
>> BTRFS raid1 mode on top of LVM or MD RAID0 together with having good backups
>> and good monitoring. Statistically speaking, catastrophic hardware failures...
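
On the monitoring side, one simple check is to watch the per-device
error counters, for example from a cron job (mount point as used
elsewhere in this thread):

# Print read/write/flush/corruption/generation error counters per device;
# anything non-zero deserves a closer look:
btrfs device stats /mnt/storage-array

# Or show only the counters that are not zero:
btrfs device stats /mnt/storage-array | grep -vE ' 0$'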

On 2017-04-07 12:04, Chris Murphy wrote:
> On Fri, Apr 7, 2017 at 5:41 AM, Austin S. Hemmelgarn wrote:
>> I'm rather fond of running BTRFS raid1 on top of LVM RAID0 volumes,
>> which while it provides no better data safety than BTRFS raid10 mode, gets
>> noticeably better performance.
> This does in fact have better data safety than Btrfs raid10...

On 2017-04-07 09:28, John Petrini wrote:
> Hi Austin,
>
> Thanks for taking the time to provide all of this great information!
Glad I could help.
> You've got me curious about RAID1. If I were to convert the array to
> RAID1 could it then sustain a multi-drive failure? Or in other words,
> do I actually end up with mirrored pairs, or can a chunk still be
> mirrored to any...

On 2017-04-06 23:25, John Petrini wrote:
> Interesting. That's the first time I'm hearing this. If that's the
> case I feel like it's a stretch to call it RAID10 at all. It sounds a
> lot more like basic replication similar to Ceph, only Ceph understands
> failure domains and therefore can be configured to handle device
> failure (albeit at a higher...

On Thu, Apr 6, 2017 at 7:31 PM, John Petrini wrote:
> Hi Chris,
>
> I've followed your advice and converted the system chunk to raid10. I
> hadn't noticed it was raid0 and it's scary to think that I've been
> running this array for three months like that. Thank you for saving me
> a lot of pain down the road!
>
> Also thank you for the clarification on the output...
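
For anyone following along, the conversion described above is a filtered
balance; a sketch of the kind of command involved (mount point as used
elsewhere in this thread, and most btrfs-progs versions require -f to
explicitly balance system chunks):

# Convert only the system chunks to raid10:
btrfs balance start -f -sconvert=raid10 /mnt/storage-array

# Verify the new profile afterwards:
btrfs fi df /mnt/storage-array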

On Thu, Apr 6, 2017 at 7:15 PM, John Petrini wrote:
> Okay so I came across this bug report:
> https://bugzilla.redhat.com/show_bug.cgi?id=1243986
>
> It looks like I'm just misinterpreting the output of btrfs fi df. What
> should I be looking at to determine the actual free space? Is Free
> (estimated): 13.83TiB (min: 13.83TiB) the proper metric?

On Thu, Apr 6, 2017 at 6:47 PM, John Petrini wrote:
> sudo btrfs fi df /mnt/storage-array/
> Data, RAID10: total=10.72TiB, used=10.72TiB
> System, RAID0: total=128.00MiB, used=944.00KiB
> Metadata, RAID10: total=14.00GiB, used=12.63GiB
> GlobalReserve, single: total=512.00MiB, used=0.00B
The thing...
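
That "Free (estimated): ... (min: ...)" line looks like the output of
'btrfs filesystem usage', which is generally an easier report to read
for actual free space than 'btrfs fi df':

# Shows device sizes, allocated vs. unallocated space, and an estimated
# free-space figure that accounts for the RAID profiles in use:
sudo btrfs filesystem usage /mnt/storage-array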