On 2017-08-23 00:37, Robert LeBlanc wrote:
Thanks for the explanations. Chris, I don't think 'degraded' did
anything to help the mounting, I just passed it in to see if it would
help (I'm not sure if btrfs is "smart" enough to ignore a drive if it
would increase the chance of mounting the volume even if it is
degraded, but one could hope).
On 2017-08-22 13:19, Robert LeBlanc wrote:
On Mon, Aug 21, 2017 at 11:19 PM, Robert LeBlanc wrote:
Chris and Qu, thanks for your help. I was able to restore the data off
the volume. I only could not read one file that I tried to rsync (a
MySQL bin log), but it wasn't critical as I had an off-site snapshot
from that morning and ownCloud could resync the files that were
changed anyway. This
On Mon, Aug 21, 2017 at 10:31 AM, Robert LeBlanc wrote:
Qu,
Sorry, I'm not on the list (I was for a few years about three years ago).
I looked at the backup roots like you mentioned.
# ./btrfs inspect dump-super -f /dev/bcache0
superblock: bytenr=65536, device=/dev/bcache0
-
csum_type
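The backup-root inspection mentioned above can be sketched roughly as follows, using the device name from the thread. The `usebackuproot` mount option is my suggestion, not something from this message, and it requires a kernel that supports it:

```shell
# Dump the superblock with -f so the backup_roots section is shown:
btrfs inspect-internal dump-super -f /dev/bcache0 | grep -A3 'backup '
# Ask the kernel to walk the backup roots at mount time, read-only first:
mount -o ro,usebackuproot /dev/bcache0 /mnt
```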
I lost enough Btrfs m=d=s=RAID5 filesystems in past experiments (I
didn't try using RAID5 for metadata and system chunks in the last few
years) to faulty SATA cables and hotplug-enabled SATA controllers
(where a disk could disappear and reappear "as the wind blew"). Since
then, I made a habit of
On 2017-08-21 12:33, Robert LeBlanc wrote:
I've been running btrfs in a raid5 for about a year now with bcache in
front of it. Yesterday, one of my drives was acting really slow, so I
was going to move it to a different port. I guess I get too
comfortable hot plugging drives in at work and
It seems like I accidentally managed to break my Btrfs/RAID5
filesystem, yet again, in a similar fashion.
This time around, I ran into some random libata driver issue (?)
instead of a faulty hardware part, but the end result is quite similar.
I issued the command (replacing X with valid letters
On 6 November 2015 at 10:03, Janos Toth F. wrote:
>
> Although I updated the firmware of the drives. (I found an IMPORTANT
> update when I went there to download SeaTools, although there was no
> change log to tell me why this was important.) This might have changed
> the error
I created a fresh RAID-5 mode Btrfs on the same 3 disks (including the
faulty one which is still producing numerous random read errors) and
Btrfs now seems to work exactly as I would anticipate.
I copied some data and verified the checksum. The data is readable and
correct regardless of the
On 2015-11-04 23:06, Duncan wrote:
(Tho I should mention, while not on zfs, I've actually had my own
problems with ECC RAM too. In my case, the RAM was certified to run at
speeds faster than it was actually reliable at, such that actually stored
data, what the ECC protects, was fine, the data
Duncan wrote:
Austin S Hemmelgarn posted on Wed, 04 Nov 2015 13:45:37 -0500 as
excerpted:
> On 2015-11-04 13:01, Janos Toth F. wrote:
>> But the worst part is that there are some ISO files which were
>> seemingly copied without errors but their external checksums (the one
>> which I can calculate with md5sum
Well. Now I am really confused about Btrfs RAID-5!
So, I replaced all SATA cables (which are explicitly marked as being
aimed at SATA3 speeds) and all the 3x2Tb WD Red 2.0 drives with 3x4Tb
Seagate Constellation ES 3 drives and started from scratch. I
secure-erased every drive, created an empty
On 2015-11-04 13:01, Janos Toth F. wrote:
But the worst part is that there are some ISO files which were
seemingly copied without errors but their external checksums (the one
which I can calculate with md5sum and compare to the one supplied by
the publisher of the ISO file) don't match!
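The external-checksum comparison described here is easy to script; a minimal sketch (file names are made up for illustration — in the real case the .md5 list comes from the ISO publisher):

```shell
# Verify a copied file against a publisher-style checksum list.
printf 'sample contents\n' > image.iso   # stands in for the copied ISO
md5sum image.iso > published.md5         # stands in for the published list
md5sum -c published.md5                  # prints "image.iso: OK" when intact
```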
Well...
I went through all the recovery options I could find (starting from
read-only to "extraordinarily dangerous"). Nothing seemed to work.
A Windows based proprietary recovery software (ReclaiMe) could scratch
the surface but only that (it showed me the whole original folder
structure after a few
If it is for mostly archival storage, I would suggest you take a look
at snapraid.
On Wed, Oct 21, 2015 at 9:09 AM, Janos Toth F. wrote:
> I went through all the recovery options I could find (starting from
> read-only to "extraordinarily dangerous"). Nothing seemed to
Maybe hold off erasing the drives a little in case someone wants to
collect some extra data for diagnosing how/why the filesystem got into
this unrecoverable state.
A single device having issues should not cause the whole filesystem to
become unrecoverable.
I am afraid the filesystem right now is really damaged, regardless of
its state upon the unexpected cable failure, because I tried some
dangerous options after read-only restore/recovery methods all failed
(including zero-log, followed by init-csum-tree and even
chunk-recovery -> all of them just
https://btrfs.wiki.kernel.org/index.php/Restore
This should still be possible even with a degraded/unmounted raid5. It
is a bit tedious to figure out how to use it, but if you've got some
things you want off the volume, it's not so difficult as to prevent
trying it.
Chris Murphy
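A sketch of how `btrfs restore` is typically driven (device and target paths here are hypothetical; `-D` and `--path-regex` are standard btrfs-progs options):

```shell
# Dry run first: list what restore thinks it can pull off the device.
btrfs restore -D -v /dev/sdb /tmp/unused
# Then copy everything reachable to a scratch location on another disk:
mkdir -p /mnt/recovery
btrfs restore -v /dev/sdb /mnt/recovery
# Or limit the run to specific paths with a regex:
btrfs restore -v --path-regex '^/(|home(|/.*))$' /dev/sdb /mnt/recovery
```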
I tried several things, including the degraded mount option. One example:
# mount /dev/sdb /data -o ro,degraded,nodatasum,notreelog
mount: wrong fs type, bad option, bad superblock on /dev/sdb,
missing codepage or helper program, or other error
In some cases useful info is found in
I tried that after every possible combination of RO mount failed. I used it in
the past for a USB-attached drive where a USB-SATA adapter had some issues (I
plugged it into a standard USB2 port even though it expected USB3 power
current, so a high-current or several standard USB2 ports
Janos Toth F. posted on Mon, 19 Oct 2015 10:39:06 +0200 as excerpted:
> I was in the middle of replacing the drives of my NAS one-by-one (I
> wished to move to bigger and faster storage at the end), so I used one
> more SATA drive + SATA cable than usual. Unfortunately, the extra cable
> turned
On Sun, 24 May 2015 01:02:21 AM Jan Voet wrote:
Doing a 'btrfs balance cancel' immediately after the array was mounted
seems to have done the trick. A subsequent 'btrfs check' didn't show any
errors at all and all the data seems to be there. :-)
I add rootflags=skip_balance to the kernel
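The `skip_balance`/`balance cancel` combination described above amounts to something like this (device and mount point hypothetical):

```shell
# Mount without auto-resuming the interrupted balance that wedges the array:
mount -o skip_balance /dev/sdb /data
# Then cancel the paused balance for good:
btrfs balance cancel /data
# For a btrfs root filesystem, the same option goes on the kernel
# command line: rootflags=skip_balance
```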
Jan Voet jan.voet at gmail.com writes:
Duncan 1i5t5.duncan at cox.net writes:
Chris Murphy posted on Fri, 22 May 2015 13:15:09 -0600 as excerpted:
On Thu, May 21, 2015 at 10:43 PM, Duncan 1i5t5.dun...@cox.net wrote:
For in-production use, therefore, btrfs raid56 mode, while now at least
in theory complete, is really too immature at this point to recommend.
At some point perhaps a developer will have time to state the expected
stability
Duncan 1i5t5.duncan at cox.net writes:
FWIW, btrfs raid5 (and raid6, together called raid56 mode) is still
extremely new, only normal runtime implemented as originally introduced,
with complete repair from a device failure only completely implemented in
kernel 3.19, and while in theory
Jan Voet posted on Thu, 21 May 2015 21:43:36 + as excerpted:
I recently upgraded a quite old home NAS system (Celeron M based) to
Ubuntu 14.04 with an upgraded linux kernel (3.19.8) and BTRFS tools
v3.17.
This system has 5 brand new 6TB drives (HGST) with all drives directly
handled by
On Mon, Feb 09, 2015 at 05:24:42PM -0500, Rich Freeman wrote:
How does btrfs raid5 handle mixed-size disks? The docs weren't
terribly clear on this.
Suppose I have 4x3TB and 1x1TB disks. Using conventional lvm+mdadm in
raid5 mode I'd expect to be able to fit about 10TB of space on those
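One way to estimate the answer is a small simulation. The sketch below assumes btrfs raid5 stripes each chunk across every device that still has free space, which is approximately what the allocator does; the function name and the coarse 1 TB allocation units are mine:

```python
def raid5_usable(devices_tb):
    """Approximate usable raid5 capacity for mixed-size devices.
    Each round stripes across all devices with space left; one
    strip per round holds parity. (Real btrfs chunks are 1 GiB;
    coarse units keep the sketch short.)"""
    free = list(devices_tb)
    usable = 0
    while True:
        live = [i for i, f in enumerate(free) if f >= 1]
        if len(live) < 2:          # btrfs raid5 needs >= 2 devices
            break
        for i in live:             # take one strip from each device
            free[i] -= 1
        usable += len(live) - 1    # all but the parity strip hold data
    return usable

print(raid5_usable([3, 3, 3, 3, 1]))  # prints 10 -- matching the ~10TB lvm+mdadm figure
```

The 4x3TB + 1x1TB case works out the same as the lvm+mdadm arrangement because btrfs allocates per-chunk and keeps using the larger devices after the 1TB one fills up.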
Juergen Sauer posted on Wed, 12 Nov 2014 18:26:56 +0100 as excerpted:
Current Status:
# root@pc6:~# btrfs fi show /dev/sda1
# parent transid verify failed on 209362944 wanted 293924 found 293922
# parent transid verify failed on 209362944 wanted 293924 found 293922
What does parent transid
Hello Duncan,
Of course if you'd been following the list as btrfs testers really
should still be doing at this point, you'd have seen all this covered
before. And of course, if you had done pre-deployment testing before
you stuck valuable data on that btrfs raid5, you'd have noted the
Tetja Rediske posted on Mon, 03 Feb 2014 17:12:24 +0100 as excerpted:
[...]
What happened before:
One disk was faulty, I added a new one and removed the old one, followed
by a balance.
So far so good.
Some days after this I accidentally removed a SATA Power Connector from
another
On Oct 22, 2013, Duncan 1i5t5.dun...@cox.net wrote:
the quick failure should they try raid56 in its current state simply
alerts them to the problem they already had.
What quick failure? There's no such thing in place AFAIK. It seems to
do all the work properly, the limitations in the current
On Thu, Oct 24, 2013 at 10:22:28PM +0800, lilofile wrote:
Oct 24 21:25:36 host1 kernel: [ 3000.809563] [81315c14]
blkdev_issue_discard+0x1b4/0x1c0
There's a discard/TRIM operation being done on all of the devices;
current progs do not report that and it's really confusing. Fixed in
lilofile posted on Mon, 21 Oct 2013 23:45:58 +0800 as excerpted:
hi:
Since the RAID 5/6 code was merged into Btrfs in 2013.2, have there
been no updates or bug reports on the mailing list? Is there any
development plan for btrfs raid5, such as adjusting stripe width or
reconstruction?
Compared to md raid5, what
shuo lv posted on Tue, 22 Oct 2013 10:30:06 +0800 as excerpted:
On Tue, Oct 22, 2013 at 01:27:44PM +, Duncan wrote:
On Oct 22, 2013, Duncan 1i5t5.dun...@cox.net wrote:
This is because there's a hole in the recovery process in case of a
lost device, making it dangerous to use except for the pure test-case.
It's not just that; any I/O error in raid56 chunks will trigger a BUG
and make the filesystem unusable
On 2013/10/22 07:18 PM, Alexandre Oliva wrote:
... and
it is surely an improvement over the current state of raid56 in btrfs,
so it might be a good idea to put it in.
I suspect the issue is that, while it sort of works, we don't really want
to push people to use it half-baked. This is reassuring