On Sun, Sep 11, 2016 at 2:06 PM, Adam Borowski wrote:
> On Sun, Sep 11, 2016 at 09:48:35PM +0200, Martin Steigerwald wrote:
>> Hmm… I found this from being referred to by the Debian wiki page on
>> BTRFS¹.
>>
>> I have used compress=lzo on BTRFS RAID 1 since April 2014 and have
>> never found an issue. Steven, your filesystem wasn't RAID 1 but RAID 5
>> or 6?
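A minimal sketch of the kind of setup Martin describes; the device names
and mount point below are placeholders, not taken from the thread:

    # Two-device btrfs RAID1 for both data and metadata, mounted with
    # lzo compression.
    mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc
    mount -o compress=lzo /dev/sdb /mnt/data

The same compress=lzo option can also go in the filesystem's fstab entry.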
On 26/06/16 12:30, Duncan wrote:
> Steven Haigh posted on Sun, 26 Jun 2016 02:39:23 +1000 as excerpted:
>> In every case, it was a flurry of csum error messages, then instant
>> death.
>
> This is very possibly a known bug in btrfs, one that occurs even in
> raid1, where a later scrub repairs all csum errors.
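As an aside, the scrub Duncan mentions can be run by hand; a minimal
sketch, assuming the filesystem is mounted at the placeholder path
/mnt/data:

    # Read every copy of every block; blocks whose checksums fail are
    # rewritten from the good mirror (btrfs raid1 keeps two copies).
    btrfs scrub start -B /mnt/data   # -B: run in the foreground
    btrfs scrub status /mnt/data     # error counts from the last run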
On Sat, Jun 25, 2016 at 10:39 AM, Steven Haigh wrote:
> Well, I did end up recovering the data that I cared about. I'm not
> really keen to ride the BTRFS RAID6 train again any time soon :\
>
> I now have the same as I've had for years - md RAID6 with XFS on top
> of it.
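For context, a stack like the one Steven reverted to is typically built
along these lines; the device names, array name, and mount point here are
placeholder assumptions, not his actual layout:

    # Five-member md RAID6, formatted XFS and mounted.
    mdadm --create /dev/md0 --level=6 --raid-devices=5 \
          /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
    mkfs.xfs /dev/md0
    mount /dev/md0 /mnt/data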
On 26/06/16 02:25, Chris Murphy wrote:
> On Fri, Jun 24, 2016 at 10:19 PM, Steven Haigh wrote:
>> Interesting though that EVERY crash references:
>> kernel BUG at fs/btrfs/extent_io.c:2401!
>
> Yeah, because you're mounted ro, and if this is 4.4.13 unmodified btrfs
> from kernel.org then that's the 3rd line: if
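The quoted source line is cut off above, but anyone following along can
look it up in the matching tree; a sketch, using kernel.org's standard
download layout:

    # Fetch the exact source Chris refers to and print the lines around
    # the BUG location.
    wget https://cdn.kernel.org/pub/linux/kernel/v4.x/linux-4.4.13.tar.xz
    tar xf linux-4.4.13.tar.xz
    sed -n '2395,2405p' linux-4.4.13/fs/btrfs/extent_io.c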
On 2016-06-24 13:05, Steven Haigh wrote, replying to ronnie sahlberg's
advice quoted below:
> What I have in mind here is that a file seems to get CREATED when I copy
> the file that crashes the system in the target directory. I'm thinking
> that if I 'cp -an source/ target/' it will make this somewhat easier.
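The trick Steven is relying on is GNU cp's no-clobber flag: -n never
overwrites an existing destination file, so the entry already created for
the crashing file makes a rerun skip it. A minimal sketch, with
placeholder paths:

    # -a: archive mode (recurse, preserve ownership/timestamps/links)
    # -n: skip anything that already exists at the destination, so a
    #     rerun after a crash resumes where the last run left off,
    #     including skipping the stub left by the file that crashed it.
    cp -an /mnt/broken/ /mnt/rescue/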
On 25/06/16 02:59, ronnie sahlberg wrote:
> What I would do in this situation:
>
> 1, Immediately stop writing to these disks/filesystem. ONLY access it
> in read-only mode until you have salvaged what can be salvaged.

That's ok - I can't even mount it in RW mode :)

> 2, get a new 5TB USB drive (they are cheap) and copy file by file off
> the array.
>
> 3, when you hit files that cause panics, make a note of them and skip
> them.
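Spelled out as commands, that salvage procedure might look like the
following; the device names and mount points are placeholder assumptions:

    # 1. Mount the damaged filesystem strictly read-only.
    mount -o ro /dev/sdb /mnt/broken

    # 2. Mount the new USB drive and copy file by file off the array.
    mount /dev/sdg1 /mnt/rescue
    cp -an /mnt/broken/ /mnt/rescue/

    # 3. After a panic and reboot, simply rerun the copy; -n skips
    #    everything already salvaged.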
On 25/06/16 00:52, Steven Haigh wrote:
> Ok, so I figured that despite what the BTRFS wiki seems to imply, the
> 'multi parity' support just isn't stable enough to be used. So I'm
> trying to revert to what I had before.
>
> My setup consists of:
> * 2 x 3TB drives
> * 3 x 2TB drives
>
> I've got (had?) about 4.9TB of data.
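As a quick sanity check on that revert, assuming md RAID6 uses only the
smallest common member size (each of the five drives contributes 2TB,
with the spare 1TB on each of the two 3TB drives left over):

    usable = (members - parity) x smallest member
           = (5 - 2) x 2TB
           = 6TB

6TB of usable space comfortably holds the roughly 4.9TB of data.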