On 2017-03-09 04:49, Peter Grandi wrote:
Consider the common case of a 3-member volume with a 'raid1'
target profile: if the sysadm thinks that a drive should be
replaced, the goal is to take it out *without* converting every
chunk to 'single', because with 2-out-of-3 devices half of the
chunks
On 2017-03-13 07:52, Juan Orti Alcaine wrote:
2017-03-13 12:29 GMT+01:00 Hérikz Nawarro :
Hello everyone,
Today, is it safe to use btrfs for home storage? No raid, just secure
storage for some files and creating snapshots from it.
In my humble opinion, yes. I'm running a
es.c | 156 -
fs/btrfs/volumes.h | 37 +
6 files changed, 188 insertions(+), 101 deletions(-)
Everything appears to work as advertised here, so for the patch set as a
whole, you can add:
Tested-by: Austin S. Hemmelgarn <ahferro...@gma
On 2017-03-05 14:13, Peter Grandi wrote:
What makes me think that "unmirrored" 'raid1' profile chunks
are "not a thing" is that it is impossible to remove
explicitly a member device from a 'raid1' profile volume:
first one has to 'convert' to 'single', and then the 'remove'
copies back to the
On 2017-03-03 15:10, Kai Krakow wrote:
On Fri, 3 Mar 2017 07:19:06 -0500,
"Austin S. Hemmelgarn" <ahferro...@gmail.com> wrote:
On 2017-03-03 00:56, Kai Krakow wrote:
On Thu, 2 Mar 2017 11:37:53 +0100,
Adam Borowski <kilob...@angband.pl> wrote:
On Wed, Mar 01, 20
On 2017-03-02 12:26, Andrei Borzenkov wrote:
On 02.03.2017 16:41, Duncan wrote:
Chris Murphy posted on Wed, 01 Mar 2017 17:30:37 -0700 as excerpted:
[1717713.408675] BTRFS warning (device dm-8): missing devices (1)
exceeds the limit (0), writeable mount is not allowed
[1717713.446453] BTRFS
On 2017-03-02 19:47, Peter Grandi wrote:
[ ... ] Meanwhile, the problem as I understand it is that at
the first raid1 degraded writable mount, no single-mode chunks
exist, but without the second device, they are created. [
... ]
That does not make any sense, unless there is a fundamental
On 2017-03-03 00:56, Kai Krakow wrote:
On Thu, 2 Mar 2017 11:37:53 +0100,
Adam Borowski wrote:
On Wed, Mar 01, 2017 at 05:30:37PM -0700, Chris Murphy wrote:
[1717713.408675] BTRFS warning (device dm-8): missing devices (1)
exceeds the limit (0), writeable mount is not
On 2017-04-07 12:28, Chris Murphy wrote:
On Fri, Apr 7, 2017 at 7:50 AM, Austin S. Hemmelgarn
<ahferro...@gmail.com> wrote:
If you care about both performance and data safety, I would suggest using
BTRFS raid1 mode on top of LVM or MD RAID0 together with having good backups
and good moni
On 2017-04-07 12:04, Chris Murphy wrote:
On Fri, Apr 7, 2017 at 5:41 AM, Austin S. Hemmelgarn
<ahferro...@gmail.com> wrote:
I'm rather fond of running BTRFS raid1 on top of LVM RAID0 volumes,
which while it provides no better data safety than BTRFS raid10 mode, gets
noticeably
On 2017-04-07 13:05, John Petrini wrote:
The use case actually is not Ceph, I was just drawing a comparison
between Ceph's object replication strategy vs BTRFS's chunk mirroring.
That's actually a really good comparison that I hadn't thought of
before. From what I can tell from my limited
On 2017-04-07 12:58, John Petrini wrote:
When you say "running BTRFS raid1 on top of LVM RAID0 volumes" do you
mean creating two LVM RAID-0 volumes and then putting BTRFS RAID1 on
the two resulting logical volumes?
Yes, although it doesn't have to be LVM, it could just as easily be MD
or even
On 2017-04-07 09:28, John Petrini wrote:
Hi Austin,
Thanks for taking the time to provide all of this great information!
Glad I could help.
You've got me curious about RAID1. If I were to convert the array to
RAID1 could it then sustain a multi drive failure? Or in other words
do I actually
On 2017-04-17 15:22, Imran Geriskovan wrote:
On 4/17/17, Roman Mamedov <r...@romanrm.net> wrote:
"Austin S. Hemmelgarn" <ahferro...@gmail.com> wrote:
* Compression should help performance and device lifetime most of the
time, unless your CPU is fully utilized on a r
On 2017-04-17 15:39, Chris Murphy wrote:
On Mon, Apr 17, 2017 at 1:26 PM, Austin S. Hemmelgarn
<ahferro...@gmail.com> wrote:
On 2017-04-17 14:34, Chris Murphy wrote:
Nope. The first paragraph applies to NVMe machine with ssd mount
option. Few fragments.
The second paragraph applies
On 2017-07-29 19:04, Cloud Admin wrote:
On Monday, 24.07.2017, at 18:40 +0200, Cloud Admin wrote:
On Monday, 24.07.2017, at 10:25 -0400, Austin S. Hemmelgarn wrote:
On 2017-07-24 10:12, Cloud Admin wrote:
On Monday, 24.07.2017, at 09:46 -0400, Austin S. Hemmelgarn wrote:
On 2017-07-24
On 2017-07-31 06:51, Sebastian Ochmann wrote:
Hello,
I have a quite simple and possibly stupid question. Since I'm
occasionally seeing warnings about failed loading of free space cache, I
wanted to clear and rebuild space cache. So I mounted the filesystem(s)
with -o clear_cache and
On 2017-07-31 08:30, Sebastian Ochmann wrote:
On 31.07.2017 14:08, Austin S. Hemmelgarn wrote:
On 2017-07-31 06:51, Sebastian Ochmann wrote:
Hello,
I have a quite simple and possibly stupid question. Since I'm
occasionally seeing warnings about failed loading of free space
cache, I wanted
On 2017-08-02 13:52, Goffredo Baroncelli wrote:
Hi,
On 2017-08-01 17:00, Austin S. Hemmelgarn wrote:
OK, I just did a dead simple test by hand, and it looks like I was right. The
method I used to check this is as follows:
1. Create and mount a reasonably small filesystem (I used an 8G
On 2017-08-02 00:14, Duncan wrote:
Austin S. Hemmelgarn posted on Tue, 01 Aug 2017 10:47:30 -0400 as
excerpted:
I think I _might_ understand what's going on here. Is that test program
calling fallocate using the desired total size of the file, or just
trying to allocate the range beyond
On 2017-08-02 04:38, Brendan Hide wrote:
The title seems alarmist to me - and I suspect it is going to be
misconstrued. :-/
From the release notes at
On 2017-08-03 12:37, Goffredo Baroncelli wrote:
On 2017-08-03 13:39, Austin S. Hemmelgarn wrote:
On 2017-08-02 17:05, Goffredo Baroncelli wrote:
On 2017-08-02 21:10, Austin S. Hemmelgarn wrote:
On 2017-08-02 13:52, Goffredo Baroncelli wrote:
Hi,
[...]
consider the following scenario
On 2017-08-03 13:15, Marat Khalili wrote:
On August 3, 2017 7:01:06 PM GMT+03:00, Goffredo Baroncelli
The file is physically extended
ghigo@venice:/tmp$ fallocate -l 1000 foo.txt
For clarity let's replace the fallocate above with:
$ head -c 1000 foo.txt
ghigo@venice:/tmp$ ls -l foo.txt
On 2017-08-03 14:08, waxhead wrote:
Brendan Hide wrote:
The title seems alarmist to me - and I suspect it is going to be
misconstrued. :-/
From the release notes at
On 2017-08-03 14:29, Christoph Anton Mitterer wrote:
On Thu, 2017-08-03 at 20:08 +0200, waxhead wrote:
Brendan Hide wrote:
The title seems alarmist to me - and I suspect it is going to be
misconstrued. :-/
From the release notes at
On 2017-08-03 07:44, Marat Khalili wrote:
On 02/08/17 20:52, Goffredo Baroncelli wrote:
consider the following scenario:
a) create a 2GB file
b) fallocate -o 1GB -l 2GB
c) write from 1GB to 3GB
after b), the expectation is that c) always succeed [1]: i.e. there is
enough space on the
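The a)-c) sequence above can be sketched as a small C program. This is a scaled-down sketch, not the test actually run in the thread: sizes are MiB rather than GiB so it finishes quickly, and the call pattern (reserve a range straddling EOF, then write into it) is the point, not the exact numbers.

```c
/* Sketch of the scenario above, scaled down to MiB:
 * a) create a 2 MiB file
 * b) fallocate offset 1 MiB, length 2 MiB (like fallocate -o 1G -l 2G)
 * c) write from 1 MiB to 3 MiB
 * After b), the expectation is that the writes in c) cannot fail
 * for lack of space. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

#define MIB (1024L * 1024L)

/* Runs the a/b/c sequence on `path` and returns the final file size. */
long run_scenario(const char *path)
{
    char buf[4096];
    memset(buf, 0xab, sizeof(buf));

    int fd = open(path, O_CREAT | O_RDWR | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); exit(1); }

    /* a) create a 2 MiB file by writing it out */
    for (long off = 0; off < 2 * MIB; off += sizeof(buf))
        if (pwrite(fd, buf, sizeof(buf), off) != (ssize_t)sizeof(buf))
            { perror("pwrite"); exit(1); }

    /* b) reserve [1 MiB, 3 MiB); the second half extends past EOF */
    if (fallocate(fd, 0, 1 * MIB, 2 * MIB) != 0)
        { perror("fallocate"); exit(1); }

    /* c) overwrite [1 MiB, 3 MiB); per the expectation above, this
     * should now be guaranteed not to hit ENOSPC */
    for (long off = 1 * MIB; off < 3 * MIB; off += sizeof(buf))
        if (pwrite(fd, buf, sizeof(buf), off) != (ssize_t)sizeof(buf))
            { perror("pwrite"); exit(1); }

    struct stat st;
    fstat(fd, &st);
    close(fd);
    return (long)st.st_size;
}
```

Whether step c) can in fact fail on btrfs, despite the reservation in b), is exactly what the thread is arguing about.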
On 2017-08-15 10:41, Christoph Anton Mitterer wrote:
On Tue, 2017-08-15 at 07:37 -0400, Austin S. Hemmelgarn wrote:
Go look at Chrome, or Firefox, or Opera, or any other major web
browser.
At minimum, they will safely bail out if they detect corruption in
the
user profile and can trivially
On 2017-08-16 09:31, Christoph Anton Mitterer wrote:
Just out of curiosity:
On Wed, 2017-08-16 at 09:12 -0400, Chris Mason wrote:
Btrfs couples the crcs with COW because
this (which sounds like you want it to stay coupled that way)...
plus
It's possible to protect against all three
On 2017-08-16 09:12, Chris Mason wrote:
My real goal is to make COW fast enough that we can leave it on for the
database applications too. Obviously I haven't quite finished that one
yet ;) But I'd rather keep the building block of all the other btrfs
features in place than try to do crcs
On 2017-08-16 10:11, Christoph Anton Mitterer wrote:
On Wed, 2017-08-16 at 09:53 -0400, Austin S. Hemmelgarn wrote:
Go try BTRFS on top of dm-integrity, or on a
system with T10-DIF or T13-EPP support
When dm-integrity is used... would that be enough for btrfs to do a
proper repair in the RAID
On 2017-08-14 15:54, Christoph Anton Mitterer wrote:
On Mon, 2017-08-14 at 11:53 -0400, Austin S. Hemmelgarn wrote:
Quite a few applications actually _do_ have some degree of secondary
verification or protection from a crash. Go look at almost any
database
software.
Then please give proper
On 2017-08-10 07:32, Austin S. Hemmelgarn wrote:
On 2017-08-10 04:30, Eric Biggers wrote:
On Wed, Aug 09, 2017 at 07:35:53PM -0700, Nick Terrell wrote:
It can compress at speeds approaching lz4, and quality approaching lzma.
Well, for a very loose definition of "approaching", and
On 2017-08-10 13:24, Eric Biggers wrote:
On Thu, Aug 10, 2017 at 07:32:18AM -0400, Austin S. Hemmelgarn wrote:
On 2017-08-10 04:30, Eric Biggers wrote:
On Wed, Aug 09, 2017 at 07:35:53PM -0700, Nick Terrell wrote:
It can compress at speeds approaching lz4, and quality approaching lzma
On 2017-08-11 05:57, Piotr Pawłow wrote:
Hello,
So 4.10 isn't /too/ far out of range yet, but I'd strongly consider
upgrading (or downgrading to 4.9 LTS) as soon as it's reasonably
convenient, before 4.13 in any case. Unless you prefer to go the
distro support route, of course.
I used to
On 2017-08-09 22:39, Nick Terrell wrote:
Add zstd compression and decompression support to BtrFS. zstd at its
fastest level compresses almost as well as zlib, while offering much
faster compression and decompression, approaching lzo speeds.
I benchmarked btrfs with zstd compression against no
On 2017-08-13 07:50, Adam Hunt wrote:
Back in 2014 Ted Tso introduced the lazytime mount option for ext4 and
shortly thereafter a more generic VFS implementation which was then
merged into mainline. His early patches included support for Btrfs but
those changes were removed prior to the feature
On 2017-08-14 08:24, Christoph Anton Mitterer wrote:
On Mon, 2017-08-14 at 14:36 +0800, Qu Wenruo wrote:
And how are you going to write your data and checksum atomically
when
doing in-place updates?
Exactly, that's the main reason I can figure out why btrfs disables
checksum for nodatacow.
On 2017-08-13 21:01, Cerem Cem ASLAN wrote:
Would it be useful to build a BTRFS test machine, which will perform
both software tests (btrfs send | btrfs receive, read/write random
data etc.) and hardware tests, such as abrupt power off test, abruptly
removing a raid-X disk physically, etc.
In
On 2017-08-14 11:13, Graham Cobb wrote:
On 14/08/17 15:23, Austin S. Hemmelgarn wrote:
Assume you have higher level verification.
But almost no applications do. In real life, the decision
making/correction process will be manual and labour-intensive (for
example, running fsck on a virtual
On 2017-08-17 02:25, GWB wrote:
<<
Or else it could be an argument that they
expect Btrfs to do their job while they watch cat videos from the
intertubes. :-)
My favourite quote from the list this week, and, well, obviously, that
is the main selling point of file systems like btrfs, zfs, and
On 2017-08-10 04:30, Eric Biggers wrote:
On Wed, Aug 09, 2017 at 07:35:53PM -0700, Nick Terrell wrote:
It can compress at speeds approaching lz4, and quality approaching lzma.
Well, for a very loose definition of "approaching", and certainly not at the
same time. I doubt there's a use case
On 2017-08-10 15:25, Hugo Mills wrote:
On Thu, Aug 10, 2017 at 01:41:21PM -0400, Chris Mason wrote:
On 08/10/2017 04:30 AM, Eric Biggers wrote:
These benchmarks are misleading because they compress the whole file as a
single stream without resetting the dictionary, which isn't how data will
On 2017-07-12 21:09, Adam Borowski wrote:
On Thu, Jul 13, 2017 at 02:50:10AM +0200, David Sterba wrote:
On Mon, Jul 10, 2017 at 09:11:50PM +0300, Dmitrii Tcvetkov wrote:
Tested on top of current mainline master (commit
af3c8d98508d37541d4bf57f13a984a7f73a328c). Didn't find any
regressions.
On 2017-07-07 13:40, Chris Murphy wrote:
On Fri, Jul 7, 2017 at 10:59 AM, Andrei Borzenkov wrote:
On 07.07.2017 19:42, Chris Murphy wrote:
I'm digging through piles of list emails and not really finding an
answer to this. Maybe it's Friday and I'm just confused...
On 2017-07-10 00:21, Daniel Brady wrote:
On 7/7/2017 1:06 AM, Daniel Brady wrote:
On 7/6/2017 11:48 PM, Roman Mamedov wrote:
On Wed, 5 Jul 2017 22:10:35 -0600
Daniel Brady wrote:
parent transid verify failed
Typically in Btrfs terms this means "you're screwed", fsck
On 2017-07-07 23:07, Adam Borowski wrote:
On Sat, Jul 08, 2017 at 01:40:18AM +0200, Adam Borowski wrote:
On Fri, Jul 07, 2017 at 11:17:49PM +0000, Nick Terrell wrote:
On 7/6/17, 9:32 AM, "Adam Borowski" wrote:
Got a reproducible crash on amd64:
Thanks for the bug
On 2017-07-09 22:13, Adam Bahe wrote:
I have finished all of the above suggestions, ran a scrub, remounted,
rebooted, made sure the system didn't hang, and then kicked off
another balance on the entire pool. It completed rather quickly but
something still does not seem right.
Label:
On 2017-07-21 07:16, Austin S. Hemmelgarn wrote:
On 2017-07-20 17:27, Nick Terrell wrote:
Well this is embarrassing, forgot to type anything before hitting send...
Hi all,
This patch set adds xxhash, zstd compression, and zstd decompression
modules. It also adds zstd support to BtrFS
On 2017-07-25 08:55, Hérikz Nawarro wrote:
Hello everyone,
I'm migrating to btrfs and I would like to know: in a btrfs filesystem
with 4 disks (multiple sizes) with -d raid0 & -m raid1, how many
drives can I lose without losing the entire array?
Exactly one, but you will lose data if you lose
On 2017-07-24 07:27, Cloud Admin wrote:
Hi,
I have a multi-device pool (three discs) as RAID1. Now I want to add a
new disc to increase the pool. I followed the description on https://bt
rfs.wiki.kernel.org/index.php/Using_Btrfs_with_Multiple_Devices and
used 'btrfs add '. After that I called a
On 2017-07-24 10:12, Cloud Admin wrote:
On Monday, 24.07.2017, at 09:46 -0400, Austin S. Hemmelgarn wrote:
On 2017-07-24 07:27, Cloud Admin wrote:
Hi,
I have a multi-device pool (three discs) as RAID1. Now I want to
add a
new disc to increase the pool. I followed the description on https
On 2017-07-22 07:35, Adam Borowski wrote:
On Fri, Jul 21, 2017 at 11:56:21AM -0400, Austin S. Hemmelgarn wrote:
On 2017-07-20 17:27, Nick Terrell wrote:
This patch set adds xxhash, zstd compression, and zstd decompression
modules. It also adds zstd support to BtrFS and SquashFS.
Each patch
On 2017-07-24 14:53, Chris Mason wrote:
On 07/24/2017 02:41 PM, David Sterba wrote:
would it be ok for you to keep ssd_working as before?
I'd really like to get this patch merged soon because "do not use ssd
mode for ssd" has started to be the recommended workaround. Once this
sticks, we
On 2017-07-25 17:45, Hugo Mills wrote:
On Tue, Jul 25, 2017 at 11:29:13PM +0200, waxhead wrote:
Hugo Mills wrote:
You can see about the disk usage in different scenarios with the
online tool at:
http://carfax.org.uk/btrfs-usage/
Hugo.
As a side note, have you ever considered
On 2017-07-26 08:27, Hugo Mills wrote:
On Wed, Jul 26, 2017 at 08:12:19AM -0400, Austin S. Hemmelgarn wrote:
On 2017-07-25 17:45, Hugo Mills wrote:
On Tue, Jul 25, 2017 at 11:29:13PM +0200, waxhead wrote:
Hugo Mills wrote:
You can see about the disk usage in different scenarios
On 2017-07-21 19:21, Hans van Kranenburg wrote:
> On 07/21/2017 05:50 PM, Austin S. Hemmelgarn wrote:
>> On 2017-07-21 07:47, Hans van Kranenburg wrote:
>>> [...]
>>>
>>> Signed-off-by: Hans van Kranenburg <hans.van.kranenb...@mendix.com>
>> Beha
ws things down, I've
been forcing '-o nossd' on my systems for a while now for the
performance improvement), so you can add:
Tested-by: Austin S. Hemmelgarn <ahferro...@gmail.com>
---
fs/btrfs/ctree.h| 4 ++--
fs/btrfs/disk-io.c | 6 ++
fs/btrfs/exten
and had runtime testing running for
about 18 hours now with no issues, so you can add:
Tested-by: Austin S. Hemmelgarn <ahferro...@gmail.com>
For patch 1, I've only compile tested it, but had no issues and got no
warnings about it when booting to test 2-4.
For patch 4, I've compile
On 2017-07-20 17:27, Nick Terrell wrote:
Hi all,
This patch set adds xxhash, zstd compression, and zstd decompression
modules. It also adds zstd support to BtrFS and SquashFS.
Each patch has relevant summaries, benchmarks, and tests.
Best,
Nick Terrell
Changelog:
v1 -> v2:
- Make pointer in
On 2017-06-30 10:21, David Sterba wrote:
On Fri, Jun 30, 2017 at 08:16:20AM -0400, E V wrote:
On Thu, Jun 29, 2017 at 3:41 PM, Nick Terrell wrote:
Add zstd compression and decompression support to BtrFS. zstd at its
fastest level compresses almost as well as zlib, while
On 2017-06-30 19:01, Nick Terrell wrote:
If we're going to make that configurable, there are some things to
consider:
* the underlying compressed format -- does not change for different
levels
This is true for zlib and zstd. lzo in the kernel only supports one
compression level.
I had
On 2017-07-06 08:09, Lionel Bouton wrote:
On 06/07/2017 at 13:59, Austin S. Hemmelgarn wrote:
On 2017-07-05 20:25, Nick Terrell wrote:
On 7/5/17, 12:57 PM, "Austin S. Hemmelgarn" <ahferro...@gmail.com>
wrote:
It's the slower compression speed that has me arguing for
On 2017-07-05 23:19, Paul Jones wrote:
While reading the thread about adding zstd compression, it occurred
to me that there is potentially another thing affecting performance -
Compressed extent size. (correct my terminology if it's incorrect). I
have two near identical RAID1 filesystems (used
On 2017-07-05 20:25, Nick Terrell wrote:
On 7/5/17, 12:57 PM, "Austin S. Hemmelgarn" <ahferro...@gmail.com> wrote:
It's the slower compression speed that has me arguing for the
possibility of configurable levels on zlib. 11MB/s is painfully slow
considering that most decent
On 2017-07-05 14:18, Adam Borowski wrote:
On Wed, Jul 05, 2017 at 07:43:27AM -0400, Austin S. Hemmelgarn
wrote:
On 2017-06-30 19:01, Nick Terrell wrote:
There is also the fact of deciding what to use for the default
when specified without a level. This is easy for lzo and zlib,
where we can
On 2017-07-05 15:35, Nick Terrell wrote:
On 7/5/17, 11:45 AM, "Austin S. Hemmelgarn" <ahferro...@gmail.com> wrote:
On 2017-07-05 14:18, Adam Borowski wrote:
On Wed, Jul 05, 2017 at 07:43:27AM -0400, Austin S. Hemmelgarn
wrote:
On 2017-06-30 19:01, Nick Terrell wrote:
There
On 2017-08-01 10:47, Austin S. Hemmelgarn wrote:
On 2017-08-01 10:39, pwm wrote:
Thanks for the links and suggestions.
I did try your suggestions but it didn't solve the underlying problem.
pwm@europium:~$ sudo btrfs balance start -v -dusage=20 /mnt/snap_04
Dumping filters: flags 0x1, state
On 2017-08-01 10:39, pwm wrote:
Thanks for the links and suggestions.
I did try your suggestions but it didn't solve the underlying problem.
pwm@europium:~$ sudo btrfs balance start -v -dusage=20 /mnt/snap_04
Dumping filters: flags 0x1, state 0x0, force is off
DATA (flags 0x2): balancing,
for the help.
Glad I could be helpful!
/Per W
On Tue, 1 Aug 2017, Austin S. Hemmelgarn wrote:
On 2017-08-01 11:24, pwm wrote:
Yes, the test code is as below - trying to match what snapraid tries
to do:
#include
#include
#include
#include
#include
#include
#include
int main() {
int fd
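The preview cuts pwm's program off at `int fd`, and the archive's HTML rendering has eaten the header names after the `#include`s. A minimal sketch of the call pattern under discussion (does the snapraid-style code call fallocate() with the desired *total* size, rather than just the range beyond the current EOF?) might look like the following. Everything here, header names and sizes included, is an assumption for illustration, not the original source:

```c
/* Hypothetical reconstruction of the truncated test's call pattern:
 * grow a parity-style file by repeatedly calling fallocate() with
 * offset 0 and the desired total size, so each call re-covers the
 * region that is already allocated. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <unistd.h>

/* Reserve `total_size` bytes from offset 0 and return the resulting
 * file size.  Calling this repeatedly with growing sizes mimics the
 * behavior being debated in the thread. */
long grow_to(const char *path, long total_size)
{
    int fd = open(path, O_CREAT | O_RDWR, 0644);
    if (fd < 0) { perror("open"); exit(1); }

    /* offset 0, length = desired total size, NOT just the delta
     * past the current EOF */
    if (fallocate(fd, 0, 0, total_size) != 0)
        { perror("fallocate"); exit(1); }

    struct stat st;
    fstat(fd, &st);
    close(fd);
    return (long)st.st_size;
}
```

The question raised above is whether btrfs allocates new space for the already-written region on each such call, instead of only for the portion beyond EOF.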
ion, I'd argue that the behavior of BTRFS in this situation
is incorrect.
/Per W
On Tue, 1 Aug 2017, Austin S. Hemmelgarn wrote:
On 2017-08-01 10:47, Austin S. Hemmelgarn wrote:
On 2017-08-01 10:39, pwm wrote:
Thanks for the links and suggestions.
I did try your suggestions but it didn't
A recent thread on the BTRFS mailing list [1] brought up some odd
behavior in BTRFS that I've long suspected but not had prior reason to
test. I've put the fsdevel mailing list on CC since I'm curious to hear
what people there think about this.
Apparently, if you call fallocate() on a file
On 2017-08-01 13:25, Roman Mamedov wrote:
On Tue, 1 Aug 2017 10:14:23 -0600
Liu Bo wrote:
This aims to fix write hole issue on btrfs raid5/6 setup by adding a
separate disk as a journal (aka raid5/6 log), so that after unclean
shutdown we can make sure data and parity
On 2017-08-02 08:55, Lutz Vieweg wrote:
On 08/02/2017 01:25 PM, Austin S. Hemmelgarn wrote:
And this is a worst-case result of the fact that most
distros added BTRFS support long before it was ready.
RedHat still advertises "Ceph", and given Ceph initially recommen
On 2017-08-01 15:07, Holger Hoffstätte wrote:
On 08/01/17 20:15, Holger Hoffstätte wrote:
On 08/01/17 19:34, Austin S. Hemmelgarn wrote:
[..]
Apparently, if you call fallocate() on a file with an offset of 0 and
a length longer than the length of the file itself, BTRFS will
allocate that exact
On 2017-08-02 17:05, Goffredo Baroncelli wrote:
On 2017-08-02 21:10, Austin S. Hemmelgarn wrote:
On 2017-08-02 13:52, Goffredo Baroncelli wrote:
Hi,
[...]
consider the following scenario:
a) create a 2GB file
b) fallocate -o 1GB -l 2GB
c) write from 1GB to 3GB
after b), the expectation
On 2017-08-03 16:45, Brendan Hide wrote:
On 08/03/2017 09:22 PM, Austin S. Hemmelgarn wrote:
On 2017-08-03 14:29, Christoph Anton Mitterer wrote:
On Thu, 2017-08-03 at 20:08 +0200, waxhead wrote:
There are no higher-level management tools (e.g. RAID
management/monitoring, etc.)...
[snip
On 2017-08-04 10:45, Goffredo Baroncelli wrote:
On 2017-08-03 19:23, Austin S. Hemmelgarn wrote:
On 2017-08-03 12:37, Goffredo Baroncelli wrote:
On 2017-08-03 13:39, Austin S. Hemmelgarn wrote:
[...]
Also, as I said below, _THIS WORKS ON ZFS_. That immediately means that a CoW
filesystem
On 2017-08-22 09:53, Ulli Horlacher wrote:
On Tue 2017-08-22 (09:37), Austin S. Hemmelgarn wrote:
root@fex:~# df -T /local/.backup/home
Filesystem Type 1K-blocks Used Available Use% Mounted on
-          -    1073740800 104252160 967766336  10% /local/.backup/home
Hmm, now I'm
On 2017-08-22 10:43, Peter Grandi wrote:
How do I find the root filesystem of a subvolume?
Example:
root@fex:~# df -T
Filesystem Type 1K-blocks Used Available Use% Mounted on
-          -    1073740800 104244552 967773976  10% /local/.backup/home
[ ... ]
I know, the root
On 2017-08-22 10:23, Hugo Mills wrote:
On Tue, Aug 22, 2017 at 10:12:25AM -0400, Austin S. Hemmelgarn wrote:
On 2017-08-22 09:53, Ulli Horlacher wrote:
On Tue 2017-08-22 (09:37), Austin S. Hemmelgarn wrote:
root@fex:~# df -T /local/.backup/home
Filesystem Type 1K-blocks Used
On 2017-08-22 13:41, Peter Grandi wrote:
[ ... ]
There is no fixed relationship between the root directory
inode of a subvolume and the root directory inode of any
other subvolume or the main volume.
Actually, there is, because it's inherently rooted in the
hierarchy of the volume itself.
lumes.c | 135 +
fs/btrfs/volumes.h | 18 +++
4 files changed, 255 insertions(+), 1 deletion(-)
All my tests passed, and manual testing shows that it does as
advertised, so for the series as a whole you can add:
Tested-by: Austin S. He
On 2017-05-03 10:17, Christophe de Dinechin wrote:
On 29 Apr 2017, at 21:13, Chris Murphy wrote:
On Sat, Apr 29, 2017 at 2:46 AM, Christophe de Dinechin
wrote:
On 28 Apr 2017, at 22:09, Chris Murphy wrote:
On Fri,
On 2017-05-03 14:12, Andrei Borzenkov wrote:
On 03.05.2017 14:26, Austin S. Hemmelgarn wrote:
On 2017-05-02 15:50, Goffredo Baroncelli wrote:
On 2017-05-02 20:49, Adam Borowski wrote:
It could be some daemon that waits for btrfs to become complete. Do we
have something?
Such a daemon would
On 2017-05-11 12:17, Robert Mader wrote:
Hello everyone,
I just wanted to ask a short question as I couldn't find a clear answer
anywhere on the net, yet:
Is it currently possible to reserve space for a BTRFS subvolume?
There is no way to do this directly right now. However, you
On 2017-05-11 19:24, Ochi wrote:
Hello,
here is the journal.log (I hope). It's quite interesting. I rebooted the
machine, performed a mkfs.btrfs on dm-{2,3,4} and dm-3 was missing
afterwards (around timestamp 66.*). However, I then logged into the
machine from another terminal (around timestamp
On 2017-05-15 04:14, Hugo Mills wrote:
On Sun, May 14, 2017 at 04:16:52PM -0700, Marc MERLIN wrote:
On Sun, May 14, 2017 at 09:21:11PM +, Hugo Mills wrote:
2) balance -musage=0
3) balance -musage=20
In most cases, this is going to make ENOSPC problems worse, not
better. The reason for
On 2017-05-12 14:27, Kai Krakow wrote:
On Tue, 18 Apr 2017 15:02:42 +0200,
Imran Geriskovan <imran.gerisko...@gmail.com> wrote:
On 4/17/17, Austin S. Hemmelgarn <ahferro...@gmail.com> wrote:
Regarding BTRFS specifically:
* Given my recently newfound understanding of what the
On 2017-05-12 14:36, Kai Krakow wrote:
On Fri, 12 May 2017 15:02:20 +0200,
Imran Geriskovan wrote:
On 5/12/17, Duncan <1i5t5.dun...@cox.net> wrote:
FWIW, I'm in the market for SSDs ATM, and remembered this from a
couple weeks ago so went back to find it. Thanks.
On 2017-05-12 09:54, Ochi wrote:
On 12.05.2017 13:25, Austin S. Hemmelgarn wrote:
On 2017-05-11 19:24, Ochi wrote:
Hello,
here is the journal.log (I hope). It's quite interesting. I rebooted the
machine, performed a mkfs.btrfs on dm-{2,3,4} and dm-3 was missing
afterwards (around timestamp 66
On 2017-05-16 05:53, Tom Hale wrote:
Hi Chris,
On 09/05/17 02:26, Chris Murphy wrote:
Read errors are fixed by overwrites. If the underlying device doesn't
report an error for the write command, it's assumed to succeed. Even
md and LVM raid's do this.
I understand assuming writes succeed in
On 2017-05-15 15:49, Kai Krakow wrote:
On Mon, 15 May 2017 08:03:48 -0400,
"Austin S. Hemmelgarn" <ahferro...@gmail.com> wrote:
That's why I don't trust any of my data to them. But I still want
the benefit of their speed. So I use SSDs mostly as frontend caches
to HDDs. T
On 2017-05-16 08:21, Tomasz Torcz wrote:
On Tue, May 16, 2017 at 03:58:41AM +0200, Kai Krakow wrote:
On Mon, 15 May 2017 22:05:05 +0200,
Tomasz Torcz wrote:
My drive has:
# smartctl -a /dev/sda | grep LBA
241 Total_LBAs_Written 0x0032 099 099 000 Old_age
On 2017-06-21 08:43, Christoph Anton Mitterer wrote:
On Wed, 2017-06-21 at 16:45 +0800, Qu Wenruo wrote:
Btrfs is always using device ID to build up its device mapping.
And for any multi-device implementation (LVM,mdadam) it's never a
good
idea to use device path.
Isn't it rather the other
On 2017-06-21 13:20, Andrei Borzenkov wrote:
On 21.06.2017 16:41, Austin S. Hemmelgarn wrote:
On 2017-06-21 08:43, Christoph Anton Mitterer wrote:
On Wed, 2017-06-21 at 16:45 +0800, Qu Wenruo wrote:
Btrfs is always using device ID to build up its device mapping.
And for any multi-device
On 2017-06-22 05:37, Shyam Prasad N wrote:
Hi,
I'm planning to use the btrfs-convert tool to convert production data
in ext4 filesystem into btrfs.
What is the stability status of this feature?
As per the below link, this tool is not in frequent use in latest linux kernels.
On 2017-06-23 13:25, Michał Sokołowski wrote:
Hello group.
I am confused: Can somebody please confirm/deny, which RAID subsystem is
affected? BTRFS' RAID5/6 or mdadm (Linux kernel raid) RAID 5/6 ?
All of the issues mentioned here are specific to BTRFS raid5/raid6
profiles, with the exception
On 2017-05-22 22:07, Chris Murphy wrote:
On Mon, May 22, 2017 at 5:57 PM, Marc MERLIN wrote:
On Mon, May 22, 2017 at 05:26:25PM -0600, Chris Murphy wrote:
On Mon, May 22, 2017 at 10:31 AM, Marc MERLIN wrote:
I already have 24GB of RAM in that machine,
On 2017-05-23 14:32, Kai Krakow wrote:
On Tue, 23 May 2017 07:21:33 -0400,
"Austin S. Hemmelgarn" <ahferro...@gmail.com> wrote:
On 2017-05-22 22:07, Chris Murphy wrote:
On Mon, May 22, 2017 at 5:57 PM, Marc MERLIN <m...@merlins.org>
wrote:
On Mon, May 22, 2017 at 0
On 2017-06-01 10:54, Alexander Peganz wrote:
Hello,
I am trying to understand what differences there are in using btrfs
raid1 vs raid10 in terms of recoverability and also performance.
This has proven itself to be more difficult than expected since all
search results I could come up with