On Thu, 21 Jul 2016 00:19:41 +0200, Kai Krakow wrote:
> On Fri, 15 Jul 2016 20:45:32 +0200, Matt wrote:
>
> > > On 15 Jul 2016, at 14:10, Austin S. Hemmelgarn
> > > wrote:
> > >
> > > On 2016-07-15 05:51, Matt wrote:
> [...]
> > > The tool you want is `btrfs restore`. You'll need
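For reference, a hedged sketch of a typical `btrfs restore` invocation as suggested above; the device name and destination path are placeholders, and exact flags vary by btrfs-progs version (check `btrfs restore --help`):

```shell
# List what would be recovered without writing anything (-D = dry run):
sudo btrfs restore -v -D /dev/sdb /mnt/recovery/

# Scrape readable files from the unmountable filesystem onto healthy
# storage; restore reads the damaged device but never writes to it.
sudo btrfs restore -v /dev/sdb /mnt/recovery/
```

Pointing it at any surviving member device is usually enough, since the raid1 metadata lets it locate data on the other remaining disks.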
On Fri, Jul 15, 2016 at 12:52 PM, Austin S. Hemmelgarn
wrote:
> Your own 'btrfs fi df' output clearly says that more than 99% of your data
> chunks are in a RAID0 profile, hence my statement.
At some point in ancient Btrfs list history, there was a call to change the
mkfs default for multiple device
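To make the point concrete: the multi-device mkfs default (historically raid0 for data, as in the `btrfs fi df` output discussed above) can be overridden explicitly at creation time. A sketch, with placeholder device names:

```shell
# Request single (linear) data and raid1 metadata explicitly, rather
# than relying on the multi-device default of raid0 data:
sudo mkfs.btrfs -d single -m raid1 /dev/sdb /dev/sdc /dev/sdd
```

With `-d single`, losing one disk costs only the extents stored on it, instead of striping every large file across all members as raid0 does.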
Hello
I glued together 6 disks in linear lvm fashion (no RAID) to obtain one large
file system (see below). One of the 6 disks failed. What is the best way to
recover from this?
Thanks to RAID1 of the metadata I can still access the data residing on the
remaining 5 disks after mounting ro,forc
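The read-only mount described above can be sketched as follows; this assumes the `degraded` mount option (the standard btrfs option for tolerating a missing member), and the device and mountpoint names are placeholders:

```shell
# Mount read-only with the degraded option so btrfs accepts the
# filesystem despite the missing member device:
sudo mount -o ro,degraded /dev/sdb /mnt/broken
```

With raid1 metadata intact, this gives read access to every file whose data extents live entirely on the surviving disks.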