Re: compress=lzo safe to use? (was: Re: Trying to rescue my data :()

2016-09-11 Thread Chris Murphy
On Sun, Sep 11, 2016 at 2:06 PM, Adam Borowski wrote: > On Sun, Sep 11, 2016 at 09:48:35PM +0200, Martin Steigerwald wrote: >> Hmm… I found this thread after being referred to it by the Debian wiki page on >> BTRFS¹. >> >> I have been using compress=lzo on BTRFS RAID 1 since April 2014 and I
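
For anyone wondering whether a mounted btrfs filesystem is actually running with lzo compression, the active mount options can be checked from userspace; a minimal sketch, no particular paths assumed:

    # show the options each btrfs filesystem was mounted with (look for compress=lzo)
    findmnt -t btrfs -o TARGET,OPTIONS

    # the same information straight from the kernel's mount table
    grep btrfs /proc/mounts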

Re: compress=lzo safe to use? (was: Re: Trying to rescue my data :()

2016-09-11 Thread Adam Borowski
On Sun, Sep 11, 2016 at 09:48:35PM +0200, Martin Steigerwald wrote: > Hmm… I found this thread after being referred to it by the Debian wiki page on > BTRFS¹. > > I have been using compress=lzo on BTRFS RAID 1 since April 2014 and I never found an > issue. Steven, your filesystem wasn't RAID 1 but RAID 5 or 6? >
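
For context, compress=lzo is an ordinary mount option, so it can be tried without reformatting anything; a minimal sketch, assuming the filesystem is already mounted at the placeholder path /mnt/data:

    # compress data written from now on with lzo
    mount -o remount,compress=lzo /mnt/data

    # make it persistent (example fstab line; the UUID is a placeholder)
    # UUID=xxxx-xxxx  /mnt/data  btrfs  defaults,compress=lzo  0  0

    # optionally recompress existing files in place
    btrfs filesystem defragment -r -clzo /mnt/data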

compress=lzo safe to use? (was: Re: Trying to rescue my data :()

2016-09-11 Thread Martin Steigerwald
On Sunday, 26 June 2016, 13:13:04 CEST, Steven Haigh wrote: > On 26/06/16 12:30, Duncan wrote: > > Steven Haigh posted on Sun, 26 Jun 2016 02:39:23 +1000 as excerpted: > >> In every case, it was a flurry of csum error messages, then instant > >> death. > > > > This is very possibly a known bug

Re: Trying to rescue my data :(

2016-06-25 Thread Steven Haigh
On 26/06/16 12:30, Duncan wrote: > Steven Haigh posted on Sun, 26 Jun 2016 02:39:23 +1000 as excerpted: > >> In every case, it was a flurry of csum error messages, then instant >> death. > > This is very possibly a known bug in btrfs, that occurs even in raid1 > where a later scrub repairs all

Re: Trying to rescue my data :(

2016-06-25 Thread Duncan
Steven Haigh posted on Sun, 26 Jun 2016 02:39:23 +1000 as excerpted: > In every case, it was a flurry of csum error messages, then instant > death. This is very possibly a known bug in btrfs, that occurs even in raid1 where a later scrub repairs all csum errors. While in theory btrfs raid1
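
The "later scrub repairs all csum errors" behaviour Duncan refers to is btrfs scrub, which reads every copy of every block and rewrites bad copies from a good one where redundancy allows; a minimal sketch, with /mnt/data as a placeholder mount point:

    # scrub the whole filesystem, wait for completion, print per-device stats
    btrfs scrub start -Bd /mnt/data

    # or run it in the background and poll progress
    btrfs scrub start /mnt/data
    btrfs scrub status /mnt/data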

Re: Trying to rescue my data :(

2016-06-25 Thread Chris Murphy
On Sat, Jun 25, 2016 at 10:39 AM, Steven Haigh wrote: > Well, I did end up recovering the data that I cared about. I'm not > really keen to ride the BTRFS RAID6 train again any time soon :\ > > I now have the same as I've had for years - md RAID6 with XFS on top of > it. I'm
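
For comparison, the md RAID6 + XFS stack Steven went back to looks roughly like this; a hedged sketch only, with device names and mount point as placeholders:

    # assemble a 5-device RAID6 array (device names are placeholders)
    mdadm --create /dev/md0 --level=6 --raid-devices=5 \
        /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf

    # put XFS on top and mount it
    mkfs.xfs /dev/md0
    mount /dev/md0 /mnt/data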

Re: Trying to rescue my data :(

2016-06-25 Thread Steven Haigh
On 26/06/16 02:25, Chris Murphy wrote: > On Fri, Jun 24, 2016 at 10:19 PM, Steven Haigh wrote: > >> >> Interesting though that EVERY crash references: >> kernel BUG at fs/btrfs/extent_io.c:2401! > > Yeah because you're mounted ro, and if this is 4.4.13 unmodified btrfs

Re: Trying to rescue my data :(

2016-06-25 Thread Chris Murphy
On Fri, Jun 24, 2016 at 10:19 PM, Steven Haigh wrote: > > Interesting though that EVERY crash references: > kernel BUG at fs/btrfs/extent_io.c:2401! Yeah because you're mounted ro, and if this is 4.4.13 unmodified btrfs from kernel.org then that's the 3rd line: if
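
To see what actually sits at the line reported in the oops, the matching stable source can be checked out and inspected; a sketch, assuming the v4.4.13 tag from kernel.org:

    # shallow clone of just the v4.4.13 tag of the stable tree
    git clone --branch v4.4.13 --depth 1 \
        https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git

    # print the lines around fs/btrfs/extent_io.c:2401
    sed -n '2395,2405p' linux-stable/fs/btrfs/extent_io.c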

Re: Trying to rescue my data :(

2016-06-24 Thread Steven Haigh
On 25/06/2016 3:50 AM, Austin S. Hemmelgarn wrote: > On 2016-06-24 13:43, Steven Haigh wrote: >> On 25/06/16 03:40, Austin S. Hemmelgarn wrote: >>> On 2016-06-24 13:05, Steven Haigh wrote: On 25/06/16 02:59, ronnie sahlberg wrote: What I have in mind here is that a file seems to get

Re: Trying to rescue my data :(

2016-06-24 Thread Austin S. Hemmelgarn
On 2016-06-24 13:43, Steven Haigh wrote: On 25/06/16 03:40, Austin S. Hemmelgarn wrote: On 2016-06-24 13:05, Steven Haigh wrote: On 25/06/16 02:59, ronnie sahlberg wrote: What I have in mind here is that a file seems to get CREATED when I copy the file that crashes the system in the target

Re: Trying to rescue my data :(

2016-06-24 Thread Steven Haigh
On 25/06/16 03:40, Austin S. Hemmelgarn wrote: > On 2016-06-24 13:05, Steven Haigh wrote: >> On 25/06/16 02:59, ronnie sahlberg wrote: >> What I have in mind here is that a file seems to get CREATED when I copy >> the file that crashes the system in the target directory. I'm thinking >> if I 'cp

Re: Trying to rescue my data :(

2016-06-24 Thread Austin S. Hemmelgarn
On 2016-06-24 13:05, Steven Haigh wrote: On 25/06/16 02:59, ronnie sahlberg wrote: What I have in mind here is that a file seems to get CREATED when I copy the file that crashes the system in the target directory. I'm thinking if I 'cp -an source/ target/' that it will make this somewhat easier
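
The 'cp -an' idea works because -n (--no-clobber) skips files that already exist in the target, so a copy interrupted by a crash can simply be re-run and it picks up where it left off; a minimal sketch with placeholder paths:

    # archive-mode copy that never overwrites anything already copied
    cp -an /mnt/rescue/source/ /mnt/backup/target/

    # rsync gives the same resume behaviour plus a visible per-file listing
    rsync -av --ignore-existing /mnt/rescue/source/ /mnt/backup/target/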

Re: Trying to rescue my data :(

2016-06-24 Thread Steven Haigh
On 25/06/16 02:59, ronnie sahlberg wrote: > What I would do in this situation: > > 1, Immediately stop writing to these disks/filesystem. ONLY access it > in read-only mode until you have salvaged what can be salvaged. That's ok - I can't even mount it in RW mode :) > 2, get a new 5T USB drive
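
On the "can't even mount it in RW mode" point, btrfs has read-only fallbacks worth trying before anything more invasive; a sketch with a placeholder device and mount point:

    # plain read-only mount
    mount -o ro /dev/sdb /mnt/rescue

    # if a device is missing or badly damaged, allow a degraded read-only mount
    mount -o ro,degraded /dev/sdb /mnt/rescue

    # check the kernel log for the reason if a mount attempt is refused
    dmesg | tail -n 30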

Re: Trying to rescue my data :(

2016-06-24 Thread ronnie sahlberg
What I would do in this situation: 1, Immediately stop writing to these disks/filesystem. ONLY access it in read-only mode until you have salvaged what can be salvaged. 2, get a new 5T USB drive (they are cheap) and copy file by file off the array. 3, when you hit files that cause panics, make a
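
Point 3 is easier if the copy itself keeps a progress log, so that after a panic the last path written to the log is the prime suspect; a rough sketch under that assumption, with SRC and DST as placeholder paths:

    #!/bin/bash
    # Copy file by file, logging each path before it is copied.
    # After a crash, the last line of copy.log names the likely culprit;
    # re-running skips anything already copied thanks to cp -n.
    SRC=/mnt/rescue
    DST=/mnt/backup
    find "$SRC" -type f -print0 | while IFS= read -r -d '' f; do
        printf '%s\n' "$f" >> "$DST/copy.log"
        cp -an --parents "$f" "$DST/"   # recreates the source path under $DST
    done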

Re: Trying to rescue my data :(

2016-06-24 Thread Steven Haigh
On 25/06/16 00:52, Steven Haigh wrote: > Ok, so I figured that despite what the BTRFS wiki seems to imply, the > 'multi parity' support just isn't stable enough to be used. So, I'm > trying to revert to what I had before. > > My setup consists of: > * 2 x 3TB drives + > * 3 x 2TB

Trying to rescue my data :(

2016-06-24 Thread Steven Haigh
Ok, so I figured that despite what the BTRFS wiki seems to imply, the 'multi parity' support just isn't stable enough to be used. So, I'm trying to revert to what I had before. My setup consists of: * 2 x 3TB drives + * 3 x 2TB drives. I've got (had?) about 4.9TB of data. My idea
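
When a filesystem refuses to mount at all, btrfs restore can often still copy files off the raw devices without writing to them; a hedged sketch, with device and destination paths as placeholders:

    # dry run first: list what restore believes it can recover
    btrfs restore -D -v /dev/sdb /mnt/backup/

    # then copy for real, pressing on past errors on damaged files
    btrfs restore -i -v /dev/sdb /mnt/backup/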