Zygo Blaxell wrote:
On Fri, Jan 15, 2021 at 10:32:39AM +0100, waxhead wrote:
Zygo Blaxell wrote:
commit
space_cache / nospace_cache
ssd / ssd_spread / nossd / nossd_spread
How could those be anything other than filesystem-wide options?
Well, being me, I tend to live in a fantasy world
David Sterba wrote:
On Fri, Jan 15, 2021 at 01:02:12AM +0100, waxhead wrote:
I don't think the per-subvolume storage options were ever tracked on
the wiki; the closest match is per-subvolume mount options, which is still
there
https://btrfs.wiki.kernel.org/index.php/Project
Zygo Blaxell wrote:
commit
space_cache / nospace_cache
ssd / ssd_spread / nossd / nossd_spread
How could those be anything other than filesystem-wide options?
Well, being me, I tend to live in a fantasy world where BTRFS has
complete world domination and has become the VFS layer.
As I hav
David Sterba wrote:
Hi,
On Thu, Jan 14, 2021 at 03:12:26AM +0100, waxhead wrote:
I was looking through the mount options and being a madman with strong
opinions I can't help thinking that a lot of them do not really belong
as mount options at all, but should rather be properties set o
Howdy,
I was looking through the mount options and being a madman with strong
opinions I can't help thinking that a lot of them do not really belong
as mount options at all, but should rather be properties set on the
subvolume - for example the toplevel subvolume.
And any options set on a
Being a long time BTRFS user and frequent reader of the mailing list I
do have some (hopefully practical) questions / requests. Some asked
before perhaps, but I think it is about time for an update. So without
further ado... here we go:
1. THE STATUS PAGE:
The status page has not been update
Sean Greenslade wrote:
On August 28, 2019 5:51:02 PM PDT, Marc Oggier wrote:
Hi All,
I am currently building a small data server for an experiment.
I was wondering if the features of the spare volume introduced a couple
of years ago (https://patchwork.kernel.org/patch/8687721/) would be
re
Johannes Thumshirn wrote:
This patchset add support for adding new checksum types in BTRFS.
Currently BTRFS only supports CRC32C as data and metadata checksum, which is
good if you only want to detect errors due to data corruption in hardware.
But CRC32C isn't able to cover other use-cases like
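(For context, a minimal Python sketch of the difference in scope; note that zlib only exposes plain CRC32, not the CRC32C/Castagnoli variant btrfs actually uses, and the stronger hashes shown via hashlib are only stand-ins for the kind of checksums such a patchset could enable.)

import hashlib
import os
import zlib

data = os.urandom(4096)  # stand-in for a 4KiB filesystem block

# A 32-bit CRC is cheap and very good at catching random bit flips
# caused by hardware corruption (zlib's CRC32 shown here; btrfs uses
# the CRC32C polynomial instead).
print(f"crc32 : {zlib.crc32(data):08x}")

# Cryptographic hashes are much larger and collision resistant, which
# matters for use-cases beyond detecting accidental corruption.
print("sha256:", hashlib.sha256(data).hexdigest())
print("blake2:", hashlib.blake2b(data, digest_size=32).hexdigest())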
David Sterba wrote:
On Mon, May 06, 2019 at 05:37:40PM +0300, Timofey Titovets wrote:
From: Timofey Titovets
Currently btrfs raid1/10 balances requests to mirrors,
based on pid % num of mirrors.
Regarding the patches to select mirror policy that Anand sent, I think
we first should provide a
Hendrik Friedel wrote:
Hello,
I intend to move to BTRFS and of course I have some data already.
I currently have several single 4TB drives and I would like to move the
data onto new drives (2*8TB). I need no raid, as I prefer a backup.
Nevertheless, having raid is nice for availability. So why no
Steven Davies wrote:
On 2019-03-19 10:00, Anand Jain wrote:
RFC patch as of now, appreciate your comments. This patch set has
been tested.
This patch introduces a framework so that we can add more policies, and
converts the existing %pid into a configurable parameter using the
property
Austin S. Hemmelgarn wrote:
On 2019-02-08 13:10, waxhead wrote:
Austin S. Hemmelgarn wrote:
On 2019-02-07 13:53, waxhead wrote:
Austin S. Hemmelgarn wrote:
So why does BTRFS hurry to mount itself even if devices are missing? And
if BTRFS still can mount, why would it blindly accept a
Austin S. Hemmelgarn wrote:
On 2019-02-07 13:53, waxhead wrote:
Austin S. Hemmelgarn wrote:
On 2019-02-07 06:04, Stefan K wrote:
Thanks, with degraded as a kernel parameter and also in the fstab it
works as expected
That should be the normal behaviour, because a server must be up and
Austin S. Hemmelgarn wrote:
On 2019-02-07 06:04, Stefan K wrote:
Thanks, with degraded as a kernel parameter and also in the fstab it
works as expected
That should be the normal behaviour, because a server must be up and
running, and I don't care about a device loss, that's why I use a
RAI
DanglingPointer wrote:
Hi All,
For clarity for the masses, what are the "multiple serious data-loss
bugs" as mentioned in the btrfs wiki?
The bullet points on this page:
https://btrfs.wiki.kernel.org/index.php/RAID56
don't enumerate the bugs, not even at a high level. If anything, what
can
Sterling Windmill wrote:
Out of curiosity, what led you to choose RAID1 for data but RAID10
for metadata?
I've flip-flopped between these two modes myself after finding out
that BTRFS RAID10 doesn't work the way I would've expected.
Wondering what made you choose your configuration.
Thanks!
Sure
Duncan wrote:
waxhead posted on Fri, 02 Nov 2018 20:54:40 +0100 as excerpted:
Note that I tend to interpret the btrfs de st / output as if the error
was NOT fixed even though it seems clear that it was, so I think the output
is a bit misleading... just saying...
See the btrfs-device manpage
Hi,
my main computer runs on a 7x SSD BTRFS as rootfs with
data:RAID1 and metadata:RAID10.
One SSD is probably about to fail, and it seems that BTRFS fixed it
nicely (thanks everyone!)
I decided to just post the ugly details in case someone just wants to
have a look. Note that I tend to inte
In case BTRFS fails to WRITE to a disk, what happens?
Does the bad area get mapped out somehow? Does it try again until it
succeeds, until it "times out", or until it reaches a threshold counter?
Does it eventually try to write to a different disk (in case of using
the raid1/10 profile?)
Adam Hunt wrote:
Back in 2014 Ted Tso introduced the lazytime mount option for ext4 and
shortly thereafter a more generic VFS implementation which was then
merged into mainline. His early patches included support for Btrfs but
those changes were removed prior to the feature being merged. His
chan
Hugo Mills wrote:
On Wed, Jul 18, 2018 at 08:39:48AM +, Duncan wrote:
Duncan posted on Wed, 18 Jul 2018 07:20:09 + as excerpted:
Perhaps it's a case of coder's view (no code doing it that way, it's just
a coincidental oddity conditional on equal sizes), vs. sysadmin's view
(code or n
waxhead wrote:
David Sterba wrote:
An interesting question is the naming of the extended profiles. I picked
something that can be easily understood but it's not a final proposal.
Years ago, Hugo proposed a naming scheme that described the
non-standard raid varieties of the btrfs flavor:
David Sterba wrote:
An interesting question is the naming of the extended profiles. I picked
something that can be easily understood but it's not a final proposal.
Years ago, Hugo proposed a naming scheme that described the
non-standard raid varieties of the btrfs flavor:
https://marc.info/?l=li
Chris Murphy wrote:
On Thu, Jun 21, 2018 at 5:13 PM, waxhead wrote:
According to this:
https://stratis-storage.github.io/StratisSoftwareDesign.pdf
Page 4 , section 1.2
It claims that BTRFS still has significant technical issues that may never
be resolved.
Could someone shed some light on
David Sterba wrote:
On Fri, Jun 22, 2018 at 01:13:31AM +0200, waxhead wrote:
According to this:
https://stratis-storage.github.io/StratisSoftwareDesign.pdf
Page 4 , section 1.2
It claims that BTRFS still has significant technical issues that may
never be resolved.
Could someone shed some
Jukka Larja wrote:
waxhead wrote on 24.6.2018 at 1.01:
Nikolay Borisov wrote:
On 22.06.2018 02:13, waxhead wrote:
According to this:
https://stratis-storage.github.io/StratisSoftwareDesign.pdf
Page 4 , section 1.2
It claims that BTRFS still has significant technical issues that may
never be
Nikolay Borisov wrote:
On 22.06.2018 02:13, waxhead wrote:
According to this:
https://stratis-storage.github.io/StratisSoftwareDesign.pdf
Page 4 , section 1.2
It claims that BTRFS still has significant technical issues that may
never be resolved.
Could someone shed some light on exactly
According to this:
https://stratis-storage.github.io/StratisSoftwareDesign.pdf
Page 4 , section 1.2
It claims that BTRFS still has significant technical issues that may
never be resolved.
Could someone shed some light on exactly what these technical issues
might be?! What are BTRFS' biggest te
Gandalf Corvotempesta wrote:
Another kernel release was made.
Any improvements in RAID56?
I didn't see any changes in that area; is something still being
worked on, or is it stuck waiting for something?
Based on the official BTRFS status page, RAID56 is the only "unstable"
item marked in red.
No i
Adam Bahe wrote:
Hello all,
'All' includes me as well, but keep in mind I am not a BTRFS dev.
I have a drive that has been in my btrfs array for about 6 months now.
It was purchased new. It's an IBM-ESXS SAS drive rebranded from an HGST
HUH721010AL4200. Here are the stats; it passed a
Andrei Borzenkov wrote:
On 02.05.2018 21:17, waxhead wrote:
Goffredo Baroncelli wrote:
On 05/02/2018 06:55 PM, waxhead wrote:
So again, which problem would having the parity checksummed solve?
To the best of my knowledge, nothing. In any case the data is
checksummed so it is impossible to
Goffredo Baroncelli wrote:
On 05/02/2018 06:55 PM, waxhead wrote:
So again, which problem would having the parity checksummed solve? To the best
of my knowledge, nothing. In any case the data is checksummed, so it is
impossible to return corrupted data (modulo bugs :-) ).
I am not a BTRFS
Goffredo Baroncelli wrote:
Hi
On 05/02/2018 03:47 AM, Duncan wrote:
Gandalf Corvotempesta posted on Tue, 01 May 2018 21:57:59 + as
excerpted:
Hi to all, I've found some patches from Andrea Mazzoleni that add
support for up to 6-parity raid.
Why weren't these merged?
With modern disk sizes,
Howdy!
I am pondering writing a little C program that uses libmicrohttpd and
libbtrfsutil to display some very basic (overview) details about BTRFS.
I was hoping to display the same information that 'btrfs fi sh /mnt' and
'btrfs fi us -T /mnt' do, but somewhat combined. Since I recently just
f
Liu Bo wrote:
On Wed, Mar 21, 2018 at 9:50 AM, Menion wrote:
Hi all
I am trying to understand the status of RAID5/6 in BTRFS
I know that there is some discussion ongoing on the RFC patch
proposed by Liu Bo.
But it seems that everything stopped last summer. Also it mentioned
about a "separate d
Liu Bo wrote:
On Sat, Mar 17, 2018 at 5:26 PM, Liu Bo wrote:
On Fri, Mar 16, 2018 at 2:46 PM, Mike Stevens wrote:
Could you please paste the whole dmesg, it looks like it hit
btrfs_abort_transaction(),
which should give us more information about where it goes wrong.
The whole thing is here htt
profile is (unlike raid5 or
raid6) working really well.
PS! I'm not a BTRFS dev so don't run away just yet. Someone else may
magically help you recover. Best of luck!
- Waxhead
Austin S. Hemmelgarn wrote:
On 2018-03-09 11:02, Paul Richards wrote:
Hello there,
I have a 3 disk btrfs RAID 1 filesystem, with a single failed drive.
Before I attempt any recovery I’d like to ask what is the recommended
approach? (The wiki docs suggest consulting here before attempting
recov
Just out of curiosity, is there any work going on toward enabling
different "RAID" levels per subvolume?!
And out of even more curiosity, how is this planned to be handled with
btrfs balance?! When per-subvolume "RAID" levels are good to go, how
would you then run the balance filters to convert /
The latest released kernel is 4.15
Austin S. Hemmelgarn wrote:
On 2018-01-29 12:58, Andrei Borzenkov wrote:
On 29.01.2018 14:24, Adam Borowski wrote:
...
So any event (the user's request) has already happened. An rc system, of
which systemd is one, knows whether we reached the "want root
filesystem" or
"want secondary filesyste
Hans van Kranenburg wrote:
On 01/23/2018 08:51 PM, waxhead wrote:
Nikolay Borisov wrote:
On 23.01.2018 16:20, Hans van Kranenburg wrote:
[...]
We also had a discussion about the "backup roots" that are stored
beside the superblock, and that they are "better than nothing
Nikolay Borisov wrote:
On 23.01.2018 16:20, Hans van Kranenburg wrote:
On 01/23/2018 10:03 AM, Nikolay Borisov wrote:
On 23.01.2018 09:03, waxhead wrote:
Note: This has been mentioned before, but since I see some issues
related to superblocks I think it would be good to bring up the
Note: This has been mentioned before, but since I see some issues
related to superblocks I think it would be good to bring up the question
again.
According to the information found in the wiki:
https://btrfs.wiki.kernel.org/index.php/On-disk_Format#Superblock
The superblocks are updated syn
Austin S. Hemmelgarn wrote:
So, for a while now I've been recommending small filtered balances to
people as part of regular maintenance for BTRFS filesystems under the
logic that it does help in some cases and can't really hurt (and if done
right, is really inexpensive in terms of resources). Th
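(A hedged sketch of what such a small filtered balance could look like when scripted; the mount point and usage thresholds are arbitrary example values, and the -dusage/-musage filters restrict the balance to data/metadata chunks that are at most that percentage full.)

import subprocess

def small_filtered_balance(mountpoint="/mnt/btrfs", dusage=50, musage=50):
    """Rebalance only mostly-empty chunks, which is cheap compared to a
    full balance and is the kind of maintenance described above."""
    cmd = ["btrfs", "balance", "start",
           f"-dusage={dusage}", f"-musage={musage}", mountpoint]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    small_filtered_balance()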
Qu Wenruo wrote:
On 2018-01-01 08:48, Stirling Westrup wrote:
Okay, I want to start this post with a HUGE THANK YOU THANK YOU THANK
YOU to Nikolay Borisov and most especially to Qu Wenruo!
Thanks to their tireless help in answering all my dumb questions I
have managed to get my BTRFS working
Timofey Titovets wrote:
Currently the btrfs raid1/10 balancer balances requests to mirrors,
based on pid % num of mirrors.
Update the logic and make it aware of whether the underlying device is non-rotational.
If one of the mirrors is non-rotational, then all read requests will be moved to
the non-rotational device.
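(A rough Python illustration of the two read-scheduling policies described above; the function names and the mirror list are made up for the example and are not kernel identifiers.)

import os

def pick_mirror_pid(num_mirrors, pid=None):
    """Current behaviour: spread reads across mirrors by taking the
    reading process's pid modulo the number of mirrors."""
    if pid is None:
        pid = os.getpid()
    return pid % num_mirrors

def pick_mirror_prefer_nonrotational(mirrors, pid=None):
    """Proposed tweak: if any mirror is non-rotational (an SSD), send
    all reads to the first such mirror; otherwise fall back to the
    pid-based selection."""
    for index, mirror in enumerate(mirrors):
        if not mirror["rotational"]:
            return index
    return pick_mirror_pid(len(mirrors), pid)

if __name__ == "__main__":
    mirrors = [{"dev": "/dev/sda2", "rotational": True},
               {"dev": "/dev/sdb2", "rotational": False}]
    print(pick_mirror_pid(len(mirrors)))               # varies per process
    print(pick_mirror_prefer_nonrotational(mirrors))   # always 1 here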
James Courtier-Dutton wrote:
Hi,
Thank you for your suggestion.
It does not help at all.
btrfs balance's behaviour seems to be unchanged by ionice.
It still takes 100% while working and starves all other processes of
disk access.
How can I get btrfs balance to work in the background, without adve
Roman Mamedov wrote:
On Sat, 18 Nov 2017 02:08:46 +0100
Hans van Kranenburg wrote:
It's using send + balance at the same time. There's something that makes
btrfs explode when you do that.
It's not new in 4.14, I have seen it in 4.7 and 4.9 also, various
different explosions in kernel log. Sin
As a regular BTRFS user I can tell you that there is no such thing as
hot data tracking yet. Some people seem to use bcache together with
btrfs and come asking for help on the mailing list.
Raid5/6 have received a few fixes recently, and it *may* soon be worth
trying out raid5/6 for data, but
ST wrote:
Hello,
I've recently learned about btrfs and am considering utilizing it for my needs.
I have several questions in this regard:
I manage a dedicated server remotely and have some sort of script that
installs an OS from several images. There I can define partitions and
their FSs.
1. By defaul
Dave wrote:
Has this been discussed here? Has anything changed since it was written?
I have (more or less) been following the mailing list since this feature
was suggested. I have been drooling over it since, but not much have
happened.
Parity-based redundancy (RAID5/6/triple parity and bey
Hi,
On one of my machines I run a BTRFS filesystem with the following
configuration
Kernel: 4.11.0-1-amd64 #1 SMP Debian 4.11.6-1 (2017-06-19) x86_64 GNU/Linux
Disks: 8
Metadata: Raid 10
Data: Raid1
One of the disks is going bad, and while the system still runs fine I
ran some md5sum's on a
Brendan Hide wrote:
The title seems alarmist to me - and I suspect it is going to be
misconstrued. :-/
From the release notes at
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/7.4_Release_Notes/chap-Red_Hat_Enterprise_Linux-7.4_Release_Notes-Deprecated_Functional
Hugo Mills wrote:
You can see the disk usage in different scenarios with the
online tool at:
http://carfax.org.uk/btrfs-usage/
Hugo.
As a side note, have you ever considered making this online tool (which,
just for the record, should never go away) part of btrfs-progs, e.g. a
p
Chris Murphy wrote:
On Mon, Jul 24, 2017 at 5:27 AM, Cloud Admin wrote:
I am a little bit confused because the balance command has been running for
12 hours and only 3GB of data have been touched.
That's incredibly slow. Something isn't right.
Using btrfs-debug -b from btrfs-progs, I've selected a f
I am trying to piece together the actual status of the RAID5/6 bit of BTRFS.
The wiki refers to kernel 3.19, which was released in February 2015, so I
assume that the information there is a tad outdated (the last update on
the wiki page was July 2016)
https://btrfs.wiki.kernel.org/index.php/RAID56
Same here, I have been using BTRFS for a 'scratch' disk since about 2014.
The disk has had quite some abuse and no issues yet.
I don't use compression, snapshots or any fancy features.
I have recently moved all of the root filesystem to BTRFS with 5x SSD
disks set up in RAID1 and everything is (st
I am doing some tests on BTRFS with both data and metadata in raid1.
uname -a
Linux daffy 4.9.0-1-amd64 #1 SMP Debian 4.9.6-3 (2017-01-28) x86_64
GNU/Linux
btrfs --version
btrfs-progs v4.7.3
01. mkfs.btrfs /dev/sd[fgh]1
02. mount /dev/sdf1 /btrfs_test/
03. btrfs balance start -dconvert=raid1 /
Chris Murphy wrote:
On Thu, Mar 2, 2017 at 6:48 PM, Chris Murphy wrote:
Again, my data is fine. The problem I'm having is this:
https://git.kernel.org/cgit/linux/kernel/git/stable/linux-stable.git/tree/Documentation/filesystems/btrfs.txt?id=refs/tags/v4.10.1
Which says in the first line, in p
Hugo Mills wrote:
On Sun, Jan 22, 2017 at 11:35:49PM +0100, Christoph Anton Mitterer wrote:
On Sun, 2017-01-22 at 22:22 +0100, Jan Vales wrote:
Therefore my question: what is the status of raid5/6 in btrfs?
Is it somehow "production"-ready by now?
Pasi Kärkkäinen wrote:
On Mon, Sep 12, 2016 at 09:57:17PM +0200, Martin Steigerwald wrote:
Great.
I made two minor adaptations. I added a link to the Status page to my warning
before the kernel log by feature page. And I also mentioned that at the time
the page was last updated the latest kerne
Zoiled wrote:
Chris Mason wrote:
On 09/11/2016 04:55 AM, Waxhead wrote:
I have been following BTRFS for years and have recently started to
use BTRFS more and more, and as always BTRFS' stability is a hot topic.
Some say that BTRFS is a dead-end research project while others clai
Martin Steigerwald wrote:
On Sunday, 11 September 2016, 13:43:59 CEST, Martin Steigerwald wrote:
The Nouveau graphics driver has a nice feature matrix on its webpage
and I think that BTRFS perhaps should consider doing something like that
on its official wiki as well
BTRFS also has a feat
Martin Steigerwald wrote:
On Sunday, 11 September 2016, 13:21:30 CEST, Zoiled wrote:
Martin Steigerwald wrote:
On Sunday, 11 September 2016, 10:55:21 CEST, Waxhead wrote:
I have been following BTRFS for years and have recently started to
use BTRFS more and more and as always BTRFS
I have been following BTRFS for years and have recently started to
use BTRFS more and more, and as always BTRFS' stability is a hot topic.
Some say that BTRFS is a dead-end research project while others claim
the opposite.
Taking a quick glance at the wiki does not say much about what is
Waxhead wrote:
Chris Murphy wrote:
Well all the generations on all devices are now the same, and so are
the chunk trees. I haven't looked at them in detail to see if there
are any discrepancies among them.
If you don't care much for this file system, then you could try btrfs
chec
Chris Murphy wrote:
Well all the generations on all devices are now the same, and so are
the chunk trees. I haven't looked at them in detail to see if there
are any discrepancies among them.
If you don't care much for this file system, then you could try btrfs
check --repair, using btrfs-progs 4
Chris Murphy wrote:
On Mon, Dec 28, 2015 at 3:55 PM, Waxhead wrote:
I tried the following
btrfs-image -t4 -c9 /dev/sdb1 /btrfs_raid6.img
checksum verify failed on 28734324736 found C3E98F3B wanted EB2392C6
checksum verify failed on 28734324736 found C3E98F3B wanted EB2392C6
checksum
Duncan wrote:
Waxhead posted on Mon, 28 Dec 2015 03:04:33 +0100 as excerpted:
Duncan wrote:
Waxhead posted on Mon, 28 Dec 2015 00:06:46 +0100 as excerpted:
btrfs scrub status /mnt
scrub status for 2832346e-0720-499f-8239-355534e5721b
scrub started at Sun Mar 29 23:21:04 2015
Now
Chris Murphy wrote:
On Sun, Dec 27, 2015 at 7:04 PM, Waxhead wrote:
Since all drives register and since I can even mount the filesystem.
OK so you've umounted the file system, reconnected all devices,
mounted the file system normally, and there are no problems reported
in dmesg?
If so
Duncan wrote:
Waxhead posted on Mon, 28 Dec 2015 00:06:46 +0100 as excerpted:
btrfs scrub status /mnt
scrub status for 2832346e-0720-499f-8239-355534e5721b
scrub started at Sun Mar 29 23:21:04 2015 and finished after 00:01:04
total bytes scrubbed: 1.97GiB with 14549 errors
Chris Murphy wrote:
On Sun, Dec 27, 2015 at 6:59 AM, Waxhead wrote:
Hi,
I have a "toy-array" of 6x USB drives hooked up to a hub where I made a
btrfs raid 6 data+metadata filesystem.
I copied some files to the filesystem, ripped out one USB drive and ruined
it with dd if=/dev/random
Hi,
I have a "toy-array" of 6x USB drives hooked up to a hub where I made a
btrfs raid 6 data+metadata filesystem.
I copied some files to the filesystem, ripped out one USB drive and
ruined it with dd if=/dev/random to various locations on the drive. Put the
USB drive back and the filesystem moun
As far as I understand btrfs stores all data in huge chunks that are
striped, mirrored or "raid5/6'ed" throughout all the disks added to the
filesystem/volume.
How does btrfs deal with different-sized disks? Let's say that you for
example have 10 different disks that are 100GB, 200GB, 300GB...10
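(As a hedged illustration of the question, a small Python estimate of raid1 usable space on mixed-size devices, assuming the commonly described allocation heuristic: each new chunk puts its two copies on the two devices with the most free space. The 1GB chunk granularity and the disk list are example assumptions, not measured behaviour.)

def raid1_usable(sizes_gb, chunk_gb=1):
    """Greedy estimate: repeatedly place two chunk-sized copies on the
    two devices with the most free space remaining."""
    free = list(sizes_gb)
    usable = 0
    while True:
        free.sort(reverse=True)
        if len(free) < 2 or free[1] < chunk_gb:
            break  # fewer than two devices still have room for a copy
        free[0] -= chunk_gb
        free[1] -= chunk_gb
        usable += chunk_gb
    return usable

# Ten disks of 100GB, 200GB, ..., 1000GB: roughly half the 5500GB total
print(raid1_usable([100 * i for i in range(1, 11)]))  # -> 2750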
David Sterba wrote:
On Sat, Feb 11, 2012 at 05:49:41AM +0100, Timo Witte wrote:
What happened to the hot data tracking feature in btrfs? There are a lot
of old patches from Aug 2010, but it looks like the feature has been
completely removed from the current version of btrfs. Is this feature
still
After playing around with btrfs for a while, reading about it and also
watching Avi Miller's presentation on YouTube I am starting to wonder
why one would need btrfsck at all. I am no expert in filesystems, so I
apologize if any of these questions sound a bit stupid.
1. How "self-healing" i
Hi,
From what I have read BTRFS does replace a bad copy of data with a
known good copy (if it has one). Will BTRFS try to repair the corrupt
data or will it simply silently restore the data without the user
knowing that a file has been "fixed"?
Hi,
Can someone shed some light on how BTRFS will manage a bunch of disks of
varying sizes for the planned raid5/6, e.g. 3x 2TB disks and 1x 250GB
disk? If using a raid5 setup, will 750GB of usable data automatically
be used as a 4-disk raid5 while the rest is used as a 3-disk raid5?! If
so; h
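(The question above already sketches the usual answer; here is a hedged Python estimate under the assumption that each raid5 chunk stripes across every device that still has free space, with one stripe member's worth of capacity going to parity. This is a simplification of the real allocator, and the device sizes are just the example from the question.)

def raid5_usable(sizes_gb, chunk_gb=1):
    """Greedy estimate: each chunk takes a slice from every device that
    still has room, and one slice per chunk is spent on parity."""
    free = list(sizes_gb)
    usable = 0
    while True:
        width = sum(1 for f in free if f >= chunk_gb)
        if width < 2:  # btrfs allows raid5 on as few as two devices
            break
        free = [f - chunk_gb if f >= chunk_gb else f for f in free]
        usable += (width - 1) * chunk_gb  # one stripe member holds parity
    return usable

# 3x 2TB (2000GB) plus 1x 250GB, as in the question:
# 250GB worth of 4-wide stripes (750GB usable) plus 1750GB of 3-wide
# stripes (3500GB usable) -> 4250GB
print(raid5_usable([2000, 2000, 2000, 250]))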