with some raid
10 or 5 equivalent.
ZFS seems to be a nice, mature solution, but I would prefer to use something
native to Linux.
Best Regards,
Florian
On 29.11.2016 at 18:20, Florian Lindner wrote:
> Hello,
>
> I have 4 harddisks with 3TB capacity each. They are all used in a btrfs RAID
>
Hello,
I have 4 harddisks with 3TB capacity each. They are all used in a btrfs RAID 5.
It has come to my attention that there
seem to be major flaws in btrfs' raid 5 implementation. Because of that, I want
to convert the raid 5 to a raid 10,
and I have several questions.
* Is that possible
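For reference, converting profiles on a mounted btrfs filesystem is done with a balance. A minimal sketch, assuming the filesystem is mounted at /mnt (a placeholder mount point):

```shell
# Convert data and metadata from raid5 to raid10 online; the filesystem
# stays usable while the balance runs. /mnt is a placeholder mount point.
btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt

# Check progress from another shell.
btrfs balance status /mnt
```

The balance needs enough unallocated space on the devices to rewrite chunks in the new profile, so check `btrfs filesystem usage` first.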
Robert White wrote:
On 11/02/2014 07:18 AM, Florian Lindner wrote:
# btrfsck /dev/sdb1
# btrfsck --init-extent-tree /dev/sdb1
# btrfsck --init-csum-tree /dev/sdb1
Notably missing from all these commands is --repair...
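The distinction being drawn above is that a bare btrfsck is read-only; an actual repair attempt must be requested explicitly. A sketch (only ever run --repair on an unmounted filesystem, ideally after taking a backup or asking on the list first):

```shell
# Read-only check: reports problems but does not write to the device.
btrfsck /dev/sdb1

# Explicit repair pass (can modify the filesystem; unmount it first).
btrfsck --repair /dev/sdb1
```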
I don't know that's your problem for sure, but it's where I would
Chris Murphy wrote:
On Nov 2, 2014, at 8:18 AM, Florian Lindner mailingli...@xgm.de wrote:
Hello,
all of a sudden I can't mount my btrfs home partition anymore. System is
Arch with kernel 3.17.2, but I use snapper, which does snapshots
regularly, and I had 3.17.1 before, which afaik
Hello,
all of a sudden I can't mount my btrfs home partition anymore. System is
Arch with kernel 3.17.2, but I use snapper, which does snapshots regularly,
and I had 3.17.1 before, which afaik had some problems with snapshots.
Trying to mount without any options writes this to the syslog:
Nov 02
Florian Lindner wrote:
Hello,
I've just completed a scrub on my home filesystem and now I wanted to
start on my Archiv btrfs RAID 0.
A scrub I started on Jan 02 seems to be stuck somehow. The command outputs
above were generated right now.
Thanks! Changing to canceled:1 in /var/lib/btrfs
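The state flip mentioned above can be scripted. This is a sketch against a mock status file, since the exact filename under /var/lib/btrfs (scrub.status.&lt;UUID&gt;) depends on the filesystem; the demo operates on a copy so it is safe to run anywhere:

```shell
# Demo on a mock file; the real one lives under /var/lib/btrfs and
# should be backed up before editing. The line format is a mock-up.
STATUS=/tmp/scrub.status.demo
printf 'scrub status:1|canceled:0|finished:0\n' > "$STATUS"

cp "$STATUS" "$STATUS.bak"                   # keep a backup
sed -i 's/canceled:0/canceled:1/' "$STATUS"  # mark the scrub as canceled
cat "$STATUS"
```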
Hello,
I've just completed a scrub on my home filesystem and now I wanted to start
on my Archiv btrfs RAID 0.
root@horus ~ # uname -a
Florian Lindner wrote:
Hello,
I've just completed a scrub on my home filesystem and now I wanted to
start on my Archiv btrfs RAID 0.
Amendment:
the only btrfs-related output dmesg shows is
[ 38.240950] BTRFS info (device sdd1): csum failed ino 314 off 4129595392
csum 1774389615 expected csum
Hello,
some questions regarding btrfs deduplication.
- What is the state of it? Is it safe to use?
https://btrfs.wiki.kernel.org/index.php/Deduplication does not yield
much information.
- https://pypi.python.org/pypi/bedup says: bedup looks for new and
changed files, making sure that multiple
Hello!
I'm using ArchLinux with kernel Linux horus 3.10.5-1-ARCH #1 SMP PREEMPT.
Mounting and unmounting takes a long time:
# time mount -v /mnt/Archiv
mount: /dev/sde1 mounted on /mnt/Archiv.
mount -v /mnt/Archiv 0,00s user 0,16s system 1% cpu 9,493 total
# sync time umount -v
Hello!
I created a btrfs volume composed of two partitions:
# mkfs.btrfs -m dup -d single /dev/sdd1 /dev/sde1
metadata is mirrored on each device; each data chunk lives on only one disk,
scattered more or less randomly.
a) If one disk fails, is there any chance of data recovery?
b) If not, is there
Hello,
I have the problems described here:
https://btrfs.wiki.kernel.org/index.php/Gotchas:
Files with a lot of random writes can become heavily fragmented
(1+ extents), causing thrashing on HDDs and excessive multi-second
spikes of CPU load on systems with an SSD or a large amount of RAM.
On
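Two commonly suggested mitigations for such files are defragmenting them and disabling copy-on-write for the directories that hold them. A sketch with placeholder paths:

```shell
# 1) Defragment a directory tree in place (note: this may unshare
#    reflinked or snapshotted extents, increasing disk usage).
btrfs filesystem defragment -r /path/to/data

# 2) Mark a directory NOCOW so files created in it afterwards skip
#    copy-on-write (commonly used for VM images and databases; it only
#    affects files created after the flag is set).
chattr +C /path/to/data
```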
Hey!
I recently started playing with btrfs and subvolumes, but it has left
me puzzled:
Distribution is Archlinux, Kernel is 3.4.6.
root@horus /mnt # mkfs.btrfs -L test /dev/sdb1
WARNING! - Btrfs Btrfs v0.19 IS EXPERIMENTAL
WARNING! - see http://btrfs.wiki.kernel.org before using
fs created
13 matches