I tried that after every possible combination of RO mount failed. I used it in
the past for a USB-attached drive where a USB-SATA adapter had some issues (I
plugged it into a standard USB2 port even though it expected USB3-level power
current, so a high-current port or several standard USB2 ports
Janos Toth F. posted on Mon, 19 Oct 2015 10:39:06 +0200 as excerpted:
> I was in the middle of replacing the drives of my NAS one-by-one (I
> wished to move to bigger and faster storage at the end), so I used one
> more SATA drive + SATA cable than usual. Unfortunately, the extra cable
> turned
I was in the middle of replacing the drives of my NAS one-by-one (I
wished to move to bigger and faster storage at the end), so I used one
more SATA drive + SATA cable than usual. Unfortunately, the extra
cable turned out to be faulty and it looks like it caused some heavy
damage to the file
On Sun, 24 May 2015 01:02:21 AM Jan Voet wrote:
Doing a 'btrfs balance cancel' immediately after the array was mounted
seems to have done the trick. A subsequent 'btrfs check' didn't show any
errors at all and all the data seems to be there. :-)
I add rootflags=skip_balance to the kernel
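For anyone hitting the same hang, the recovery path described above goes roughly like this (device name, mount point and the GRUB file are illustrative guesses, adjust to your distro):
  mount -o skip_balance /dev/sdb1 /mnt   # mount without resuming the interrupted balance
  btrfs balance cancel /mnt              # then cancel it for good
  # for a btrfs root filesystem, pass the flag on the kernel command line instead,
  # e.g. GRUB_CMDLINE_LINUX="rootflags=skip_balance" in /etc/default/grub,
  # then regenerate the config with update-grub (or grub2-mkconfig).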
Jan Voet jan.voet at gmail.com writes:
Duncan 1i5t5.duncan at cox.net writes:
FWIW, btrfs raid5 (and raid6, together called raid56 mode) is still
extremely new, only normal runtime implemented as originally introduced,
with complete repair from a device failure only completely
Chris Murphy posted on Fri, 22 May 2015 13:15:09 -0600 as excerpted:
On Thu, May 21, 2015 at 10:43 PM, Duncan 1i5t5.dun...@cox.net wrote:
For in-production use, therefore, btrfs raid56 mode, while now at least
in theory complete, is really too immature at this point to recommend.
At some
On Thu, May 21, 2015 at 10:43 PM, Duncan 1i5t5.dun...@cox.net wrote:
For in-production use, therefore, btrfs raid56 mode, while now at least
in theory complete, is really too immature at this point to recommend.
At some point perhaps a developer will have time to state the expected
stability
Duncan 1i5t5.duncan at cox.net writes:
FWIW, btrfs raid5 (and raid6, together called raid56 mode) is still
extremely new, only normal runtime implemented as originally introduced,
with complete repair from a device failure only completely implemented in
kernel 3.19, and while in theory
Hi,
I recently upgraded a quite old home NAS system (Celeron M based) to Ubuntu
14.04 with an upgraded linux kernel (3.19.8) and BTRFS tools v3.17. This
system has 5 brand new 6TB drives (HGST) with all drives directly handled by
BTRFS, both data and metadata in RAID5.
After loading up the
by BTRFS, both data and metadata in RAID5.
FWIW, btrfs raid5 (and raid6, together called raid56 mode) is still
extremely new, only normal runtime implemented as originally introduced,
with complete repair from a device failure only completely implemented in
kernel 3.19, and while in theory complete
Hello,
I had a 3-disk raid5 system with btrfs installed. Unfortunately one of the disks
crashed. Now I cannot mount the system any more, not even with the degraded
option. I suspect the failed disk has a hardware failure. I think part of the
problem might be that I configured the system to not only
device 2 is missing
Sorry, I cannot copy/paste as the machine does not boot anymore.
Can anyone give me some help, or explain to me what other kind of
info you need? Thanks.
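Not knowing more about the setup, the usual triage order for a raid5 with one dead member looks roughly like this (device names and paths are placeholders):
  mount -o degraded,ro /dev/sdb1 /mnt            # read-only degraded mount first, to assess the damage
  btrfs rescue super-recover -v /dev/sdb1        # repair the superblock from its backup copies if needed
  btrfs restore -v /dev/sdb1 /some/backup/dir    # copy files out without mounting, as a last resort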
Full recovery support for btrfs raid5 is very *VERY* new. Kernel 3.19
was the first version that was supposed
How does btrfs raid5 handle mixed-size disks? The docs weren't
terribly clear on this.
Suppose I have 4x3TB and 1x1TB disks. Using conventional lvm+mdadm in
raid5 mode I'd expect to be able to fit about 10TB of space on those
(2TB striped across 4 disks plus 1TB striped across 5 disks after
On Mon, Feb 09, 2015 at 05:24:42PM -0500, Rich Freeman wrote:
How does btrfs raid5 handle mixed-size disks? The docs weren't
terribly clear on this.
Suppose I have 4x3TB and 1x1TB disks. Using conventional lvm+mdadm in
raid5 mode I'd expect to be able to fit about 10TB of space on those
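A back-of-the-envelope check, assuming (my reading, not spelled out here) that btrfs raid5 stripes each chunk across whichever devices currently have the most free space:
  # 4x3TB + 1x1TB: while all five disks have room, 1TB comes off each -> 4TB usable
  # once the 1TB disk is full, stripes span the four 3TB disks (2TB left each) -> 6TB usable
  echo $(( (5 - 1) * 1 + (4 - 1) * 2 ))   # prints 10, i.e. ~10TB usable, matching the lvm+mdadm estimate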
On 03.02.2015 at 01:24, Tobias Holst wrote:
Hi.
Hi,
There is a known bug when you plug a missing hdd of a btrfs raid back in
without wiping the device first. In the worst case this results in a
totally corrupted filesystem, as it sometimes did during my tests of
the raid6 implementation. With
corrupted...
Give it a try :)
Regards
Tobias
2015-01-27 10:12 GMT+01:00 Alexander Fieroch alexander.fier...@mpi-dortmund.mpg.de:
Hello,
I'm testing btrfs RAID5 on three encrypted hdds (dm-crypt) and I'm
simulating a hard disk failure by unplugging one device while writing some
files.
Now
Hello,
I'm testing btrfs RAID5 on three encrypted hdds (dm-crypt) and I'm
simulating a hard disk failure by unplugging one device while writing
some files.
Now the filesystem is damaged. Is there, by now, any chance of repairing the
filesystem?
My operating system is Ubuntu Server (Vivid
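Roughly the precaution Tobias describes above, i.e. wiping the stale signature before re-adding the unplugged disk (device names and mount point are placeholders):
  wipefs -a /dev/sdc                 # clear the old btrfs signature on the re-plugged disk
  mount -o degraded /dev/sdb /mnt    # mount the surviving members
  btrfs device add /dev/sdc /mnt     # add the wiped disk back as a fresh device
  btrfs device delete missing /mnt   # rebuild onto it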
Juergen Sauer posted on Wed, 12 Nov 2014 18:26:56 +0100 as excerpted:
Current Status:
# root@pc6:~# btrfs fi show /dev/sda1
# parent transid verify failed on 209362944 wanted 293924 found 293922
# parent transid verify failed on 209362944 wanted 293924 found 293922
What does parent transid
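For what it's worth, the first steps usually tried on transid mismatches at that time look roughly like this (device name is a placeholder; the 'recovery' mount option was later renamed usebackuproot):
  mount -o ro,recovery /dev/sda1 /mnt   # ask the kernel to fall back to an older tree root
  btrfs-find-root /dev/sda1             # if that fails, list candidate roots to feed to btrfs restore -t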
hibernated. =:^)
:)
I didn't recognize before that hibernation is somewhat faulty.
It doesn't matter to me anymore; I have disabled it.
It's like the old story: a test machine just ended up being used for production
after it worked fantastically well. BTRFS in the raid5/6 scenario has really
great potential.
Btrfs
This reproduces on an untainted kernel 3.17.0-0.rc1.git0.1.fc22.x86_64. I
still used btrfs-progs v3.14.2-167-ge514381 to create the new raid5 volume, so
it seems whatever fixed it in for-linus is not in for-linus2.
[ 45.935848] BTRFS info (device sdc): disk space caching is enabled
[
Summary:
Corrupt a file on a btrfs raid5 volume, mount it and read the file, and I get an
oops. The system is totally hung up, ssh no longer works, etc.
Versions:
kernel-3.16.0-1.cmlb729fdm810v4.fc21.x86_64
btrfs-progs-3.14.2-3.fc21
Kernel 3.16.0-1.fc21.x86_64 with the following patches to btrfs/send.c
I'm unable to reproduce this with kernel and progs built from integration
branch (I think, anyway); this is what I built:
git clone git://repo.or.cz/btrfs-progs-unstable/devel.git
cd devel
git checkout integration-20140729
git clone
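If it helps anyone reproduce, btrfs-progs of that era built with a plain make, so finishing off that checkout is roughly (the install step is optional and overwrites the distro copies):
  cd devel
  make                 # builds btrfs, mkfs.btrfs and friends from the integration branch
  sudo make install    # optional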
Hello Duncan,
Of course if you'd been following the list as btrfs testers really
should still be doing at this point, you'd have seen all this covered
before. And of course, if you had done pre-deployment testing before
you stuck valuable data on that btrfs raid5, you'd have noted
Hi,
since Freenode is doomed today, I'll ask the direct way here.
The filesystem in question:
Label: 'data' uuid: 3a6fd6d7-5943-4cad-b56f-2e6dcabff453
Total devices 6 FS bytes used 7.02TiB
devid 1 size 1.82TiB used 1.82TiB path /dev/sda3
devid 2 size 2.73TiB used 2.48TiB path
as btrfs testers really should
still be doing at this point, you'd have seen all this covered before.
And of course, if you had done pre-deployment testing before you stuck
valuable data on that btrfs raid5, you'd have noted the problems, even
without reading about it on-list or on the wiki
On Oct 22, 2013, Duncan 1i5t5.dun...@cox.net wrote:
the quick failure should they try raid56 in its current state simply
alerts them to the problem they already had.
What quick failure? There's no such thing in place AFAIK. It seems to
do all the work properly; the limitations in the current
When I create raid5 in btrfs, the command looks like this:
./mkfs.btrfs -d raid5 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
/dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl /dev/sdm -f
WARNING! - Btrfs v0.20-rc1-358-g194aa4a-dirty IS EXPERIMENTAL
WARNING! - see http://btrfs.wiki.kernel.org before
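To double-check what that command produced, something like the following works (mount point is a placeholder); note that without an explicit -m option, metadata on a multi-device filesystem defaults to RAID1 rather than RAID5:
  mount /dev/sdb /mnt
  btrfs filesystem df /mnt   # shows the Data/Metadata/System profiles actually in use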
On Thu, Oct 24, 2013 at 10:22:28PM +0800, lilofile wrote:
Oct 24 21:25:36 host1 kernel: [ 3000.809563] [81315c14]
blkdev_issue_discard+0x1b4/0x1c0
There's a discard/TRIM operation being done on all of the devices;
current progs do not report that and it's really confusing. Fixed in
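As an aside, if the slow part is the implicit TRIM itself rather than the missing progress output, mkfs.btrfs can skip it with -K/--nodiscard, e.g.:
  ./mkfs.btrfs -K -d raid5 /dev/sdb /dev/sdc /dev/sdd   # -K skips the whole-device discard at mkfs time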
lilofile posted on Mon, 21 Oct 2013 23:45:58 +0800 as excerpted:
hi:
since the RAID 5/6 code was merged into Btrfs in 2013.2, have no updates or
bugs been reported on the mailing list? Is there any development plan for btrfs
raid5, such as adjusting stripe width or reconstruction?
Compared to md raid5, what
shuo lv posted on Tue, 22 Oct 2013 10:30:06 +0800 as excerpted:
hi:
since the RAID 5/6 code was merged into Btrfs in 2013.2, have no updates or bugs
been reported on the mailing list? Is there any development plan for btrfs raid5, such as
adjusting stripe width or reconstruction?
Compared to md raid5, what is the advantage
On Tue, Oct 22, 2013 at 01:27:44PM +, Duncan wrote:
since the RAID 5/6 code was merged into Btrfs in 2013.2, have no updates or
bugs been reported on the mailing list? Is there any development plan for btrfs raid5,
such as adjusting stripe width or reconstruction?
Compared to md raid5, what
On Oct 22, 2013, Duncan 1i5t5.dun...@cox.net wrote:
This is because there's a hole in the recovery process in case of a
lost device, making it dangerous to use except for the pure test-case.
It's not just that; any I/O error in raid56 chunks will trigger a BUG
and make the filesystem unusable
On 2013/10/22 07:18 PM, Alexandre Oliva wrote:
... and
it is surely an improvement over the current state of raid56 in btrfs,
so it might be a good idea to put it in.
I suspect the issue is that, while it sort of works, we don't really want
to push people to use it half-baked. This is reassuring
hi:
since the RAID 5/6 code was merged into Btrfs in 2013.2, have no updates or bugs
been reported on the mailing list? Is there any development plan for btrfs raid5, such as
adjusting stripe width or reconstruction?
Compared to md raid5, what is the advantage of btrfs raid5?
hi:
since the RAID 5/6 code was merged into Btrfs in 2013.2, have no updates or bugs
been reported on the mailing list? Is there any development plan for btrfs raid5, such
as adjusting stripe width or reconstruction?
Compared to md raid5, what is the advantage of btrfs raid5?
Regarding btrfs raid5: the replace operation depends on scrub and makes use of the scrub
code.
And scrub does not yet support RAID5/6. Therefore 'btrfs replace start'
fails with EINVAL on RAID5/6 filesystems.
When will the replace function be added? Who can tell me the plan?
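Until replace learns raid56, the workaround generally suggested is roughly the following (device names and mount point are placeholders):
  btrfs device add /dev/sdnew /mnt
  btrfs device delete /dev/sdold /mnt   # or 'btrfs device delete missing /mnt' if the old disk is already gone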
I know the raid5 code is still new and being worked on, but I was
curious.
With md raid5, I can do this:
mdadm /dev/md7 --replace /dev/sde1
This is cool because it lets you replace a drive with bad sectors where
at least one other drive in the array has bad sectors, and the md layer
will read
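For comparison, the full md-side workflow being referred to is roughly (device names are placeholders):
  mdadm /dev/md7 --add /dev/sdf1        # supply a spare to copy onto
  mdadm /dev/md7 --replace /dev/sde1    # clone sde1 to the spare, reading the other
                                        # members only for sectors sde1 cannot supply
  mdadm /dev/md7 --remove /dev/sde1     # detach the old drive once the copy finishes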
Hi Chris,
I am playing with the raid5/6 code, to adapt my disk-usage
patches to the raid5/6 code.
During this development I found that the chunk allocation is strange.
Looking at the code, I found the following in volumes.c:
3576 static int __btrfs_alloc_chunk(struct btrfs_trans_handle *trans,
Hi all,
I had a technical question about the implementation of parity raid levels in btrfs,
like RAID5 and/or RAID6.
Considering WAFL, the NetApp filesystem, each block is 4KB.
When we read 1 block, we have one 4KB I/O on only one disk of the raid group.
So the raid group has a random read I/O