On 2017-09-06 14:31, Goffredo Baroncelli wrote:
On 09/06/2017 08:02 PM, Austin S. Hemmelgarn wrote:
On 2017-09-06 13:48, Goffredo Baroncelli wrote:
On 09/06/2017 07:16 PM, Austin S. Hemmelgarn wrote:
[...]
Sorry but I don't understand. If you reach the step a3; you have:
- the final disk
On 2017-09-06 13:48, Goffredo Baroncelli wrote:
On 09/06/2017 07:16 PM, Austin S. Hemmelgarn wrote:
On 2017-09-06 12:43, Goffredo Baroncelli wrote:
On 09/06/2017 01:31 PM, Austin S. Hemmelgarn wrote:
On 2017-09-05 15:05, Goffredo Baroncelli wrote:
On 09/05/2017 10:19 AM, Qu Wenruo wrote
On 2017-09-06 12:43, Goffredo Baroncelli wrote:
On 09/06/2017 01:31 PM, Austin S. Hemmelgarn wrote:
On 2017-09-05 15:05, Goffredo Baroncelli wrote:
On 09/05/2017 10:19 AM, Qu Wenruo wrote:
On 2017-09-05 02:08, David Sterba wrote:
On Mon, Sep 04, 2017 at 03:41:05PM +0900, Qu Wenruo wrote
On 2017-09-05 15:05, Goffredo Baroncelli wrote:
On 09/05/2017 10:19 AM, Qu Wenruo wrote:
On 2017-09-05 02:08, David Sterba wrote:
On Mon, Sep 04, 2017 at 03:41:05PM +0900, Qu Wenruo wrote:
mkfs.btrfs --rootdir provides user a method to generate btrfs with
pre-written content while without
On 2017-09-05 08:49, Henk Slager wrote:
On Tue, Sep 5, 2017 at 1:45 PM, Austin S. Hemmelgarn
<ahferro...@gmail.com> wrote:
- You end up duplicating more data than is strictly necessary. This
is, IIRC, something like 128 KiB for a write.
FWIW, I'm pretty sure you can
On 2017-09-04 06:54, Hugo Mills wrote:
On Mon, Sep 04, 2017 at 12:31:54PM +0300, Marat Khalili wrote:
Hello list,
good time of the day,
More than once I see mentioned in this list that autodefrag option
solves problems with no apparent drawbacks, but it's not the
default. Can you recommend to
On 2017-09-03 19:55, Qu Wenruo wrote:
On 2017-09-04 02:06, Adam Borowski wrote:
On Sun, Sep 03, 2017 at 07:32:01PM +0200, Cloud Admin wrote:
Hi,
I used the mount option 'compression' on some mounted sub volumes. How
can I revoke the compression? Means to delete the option and get all
data
On 2017-09-01 11:00, Juan Orti Alcaine wrote:
On 1 Sep 2017 15:59, "Austin S. Hemmelgarn" <ahferro...@gmail.com> wrote:
If you are going to use bcache, you don't need separate caches for
each device (and in fact, you're proba
On 2017-09-01 09:54, Qu Wenruo wrote:
On 2017-09-01 20:47, Austin S. Hemmelgarn wrote:
On 2017-09-01 08:19, Qu Wenruo wrote:
On 2017-09-01 20:05, Austin S. Hemmelgarn wrote:
On 2017-09-01 07:49, Qu Wenruo wrote:
On 2017-09-01 19:28, Austin S. Hemmelgarn wrote:
On 2017-08-31 20:13
On 2017-09-01 09:52, Juan Orti Alcaine wrote:
2017-08-31 13:36 GMT+02:00 Roman Mamedov:
If you could implement SSD caching in front of your FS (such as lvmcache or
bcache), that would work wonders for performance in general, and especially
for mount times. I have seen amazing
On 2017-09-01 08:19, Qu Wenruo wrote:
On 2017-09-01 20:05, Austin S. Hemmelgarn wrote:
On 2017-09-01 07:49, Qu Wenruo wrote:
On 2017-09-01 19:28, Austin S. Hemmelgarn wrote:
On 2017-08-31 20:13, Qu Wenruo wrote:
On 2017-09-01 01:27, Goffredo Baroncelli wrote:
Hi All,
I found a bug
On 2017-09-01 07:49, Qu Wenruo wrote:
On 2017-09-01 19:28, Austin S. Hemmelgarn wrote:
On 2017-08-31 20:13, Qu Wenruo wrote:
On 2017-09-01 01:27, Goffredo Baroncelli wrote:
Hi All,
I found a bug in mkfs.btrfs, when it is used the option '-r'. It
seems that it is not visible the full
On 2017-08-31 16:29, Goffredo Baroncelli wrote:
On 2017-08-31 20:49, Austin S. Hemmelgarn wrote:
On 2017-08-31 13:27, Goffredo Baroncelli wrote:
Hi All,
I found a bug in mkfs.btrfs, when it is used the option '-r'. It
seems that it is not visible the full disk.
$ uname -a
Linux venice.bhome
On 2017-09-01 06:21, ein wrote:
Very comprehensive, thank you. I was asking because I'd like to learn
how really random writes by VM affects BTRFS (vs XFS,Ext4) performance
and try to develop some workaround to reduce/prevent it while having
csums, cow (snapshots) and compression.
I've
On 2017-08-31 20:13, Qu Wenruo wrote:
On 2017-09-01 01:27, Goffredo Baroncelli wrote:
Hi All,
I found a bug in mkfs.btrfs, when it is used the option '-r'. It seems
that it is not visible the full disk.
Despite the new bug you found, -r has several existing bugs.
Is this actually a bug
On 2017-08-31 13:27, Goffredo Baroncelli wrote:
Hi All,
I found a bug in mkfs.btrfs, when it is used the option '-r'. It seems that it
is not visible the full disk.
$ uname -a
Linux venice.bhome 4.12.8 #268 SMP Thu Aug 17 09:03:26 CEST 2017 x86_64
GNU/Linux
$ btrfs --version
btrfs-progs
On 2017-08-31 07:36, Roman Mamedov wrote:
On Thu, 31 Aug 2017 12:43:19 +0200
Marco Lorenzo Crociani wrote:
Hi,
this 37T filesystem took some times to mount. It has 47
subvolumes/snapshots and is mounted with
noatime,compress=zlib,space_cache. Is it normal, due
On 2017-08-31 07:00, Hans van Kranenburg wrote:
On 08/31/2017 12:43 PM, Marco Lorenzo Crociani wrote:
Hi,
this 37T filesystem took some times to mount. It has 47
subvolumes/snapshots and is mounted with
noatime,compress=zlib,space_cache. Is it normal, due to its size?
Yes, unfortunately it
On 2017-08-31 02:49, Ulli Horlacher wrote:
On Thu 2017-08-24 (18:45), Peter Grandi wrote:
As usual with Btrfs, there are corner cases to avoid: 'defrag'
should be done before 'balance'
Good hint. So far I did it the other way: balance before defrag.
I will switch.
For reference, the reason
On 2017-08-29 12:43, Marek Behún wrote:
Hello,
so I've been studying the linux btrfs code and have come across this:
in inode.c function uncompress_inline the max_size size variable is set
to min(max_size, PAGE_SIZE) and only max_size of output data are
decompressed.
The code for compression
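The clamp described above (decompressing at most min(max_size, PAGE_SIZE) bytes of output) has a close userspace analogue in zlib's bounded decompress. This is only an illustration of the bounded-output idea using stdlib zlib, not the btrfs kernel code:

```python
import zlib

PAGE_SIZE = 4096                     # typical page size, for illustration
payload = b"A" * 10000               # more than one page of uncompressed data
compressed = zlib.compress(payload)

d = zlib.decompressobj()
# Decompress at most PAGE_SIZE bytes of output, analogous to the clamped
# max_size in uncompress_inline; the remainder stays pending in the stream.
chunk = d.decompress(compressed, PAGE_SIZE)
print(len(chunk))  # 4096
```

The rest of the stream remains reachable through the decompress object's unconsumed input, which is what makes an output-bounded decompress safe: nothing past the requested size is ever written.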
On 2017-08-28 06:32, Adam Borowski wrote:
On Mon, Aug 28, 2017 at 12:49:10PM +0530, shally verma wrote:
Am bit confused over here, is your description based on offline-dedupe
here Or its with inline deduplication?
It doesn't matter _how_ you get to excessive reflinking, the resulting
slowdown
On 2017-08-25 08:55, Ferry Toth wrote:
On Fri, 25 Aug 2017 07:45:44 -0400, Austin S. Hemmelgarn wrote:
On 2017-08-24 17:56, Ferry Toth wrote:
On Thu, 24 Aug 2017 22:40:54 +0300, Marat Khalili wrote:
We find that typically apt is very slow on a machine with 50 or so
snapshots and raid10
On 2017-08-24 17:56, Ferry Toth wrote:
On Thu, 24 Aug 2017 22:40:54 +0300, Marat Khalili wrote:
We find that typically apt is very slow on a machine with 50 or so
snapshots and raid10. Slow as in probably 10x slower as doing the same
update on a machine with 'single' and no snapshots.
Other
On 2017-08-23 17:13, Ulli Horlacher wrote:
On Wed 2017-08-23 (12:42), Peter Grandi wrote:
So, still: What is the problem with user_subvol_rm_allowed?
As usual, it is complicated: mostly that while subvol creation
is very cheap, subvol deletion can be very expensive. But then
so can be
On 2017-08-23 11:28, Chris Murphy wrote:
On Wed, Aug 2, 2017 at 2:27 PM, Liu Bo wrote:
On Wed, Aug 02, 2017 at 10:41:30PM +0200, Goffredo Baroncelli wrote:
What I want to understand, is if it is possible to log only the "partial
stripe" RMW cycle.
I think your
On 2017-08-22 13:41, Peter Grandi wrote:
[ ... ]
There is no fixed relationship between the root directory
inode of a subvolume and the root directory inode of any
other subvolume or the main volume.
Actually, there is, because it's inherently rooted in the
hierarchy of the volume itself.
On 2017-08-22 10:43, Peter Grandi wrote:
How do I find the root filesystem of a subvolume?
Example:
root@fex:~# df -T
Filesystem Type 1K-blocks Used Available Use% Mounted on
- -1073740800 104244552 967773976 10% /local/.backup/home
[ ... ]
I know, the root
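The question in this thread (mapping a path inside a subvolume back to its containing root) can be probed from userspace by watching where st_dev changes, since each btrfs subvolume reports its own device ID. A minimal sketch of that idea; the function name is mine, not from the thread, and on non-btrfs filesystems it simply stops at the mount boundary:

```python
import os

def subvolume_root(path):
    # Walk upward until the device ID changes; btrfs subvolumes expose
    # distinct st_dev values, so this stops at the subvolume (or mount)
    # boundary containing the given path.
    path = os.path.realpath(path)
    dev = os.stat(path).st_dev
    while True:
        parent = os.path.dirname(path)
        if parent == path or os.stat(parent).st_dev != dev:
            return path
        path = parent
```

This is also why df prints "-" for the Type column in the output quoted above: the subvolume's anonymous device ID does not match any entry in the mount table.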
On 2017-08-22 10:23, Hugo Mills wrote:
On Tue, Aug 22, 2017 at 10:12:25AM -0400, Austin S. Hemmelgarn wrote:
On 2017-08-22 09:53, Ulli Horlacher wrote:
On Tue 2017-08-22 (09:37), Austin S. Hemmelgarn wrote:
root@fex:~# df -T /local/.backup/home
Filesystem Type 1K-blocks Used
On 2017-08-22 09:53, Ulli Horlacher wrote:
On Tue 2017-08-22 (09:37), Austin S. Hemmelgarn wrote:
root@fex:~# df -T /local/.backup/home
Filesystem Type 1K-blocks Used Available Use% Mounted on
- -1073740800 104252160 967766336 10% /local/.backup/home
Hmm, now I'm
On 2017-08-22 09:30, Ulli Horlacher wrote:
On Tue 2017-08-22 (09:27), Austin S. Hemmelgarn wrote:
root@fex:~# df -T
Filesystem Type 1K-blocks Used Available Use% Mounted on
- -1073740800 104244552 967773976 10% /local/.backup/home
I've never seen
On 2017-08-22 08:50, Ulli Horlacher wrote:
On Tue 2017-08-22 (12:40), Hugo Mills wrote:
On Tue, Aug 22, 2017 at 02:23:50PM +0200, Ulli Horlacher wrote:
How do I find the root filesystem of a subvolume?
Example:
root@fex:~# df -T
Filesystem Type 1K-blocks Used Available Use% Mounted
On 2017-08-17 02:25, GWB wrote:
<<
Or else it could be an argument that they
expect Btrfs to do their job while they watch cat videos from the
intertubes. :-)
My favourite quote from the list this week, and, well, obviously, that
is the main selling point of file systems like btrfs, zfs, and
On 2017-08-16 10:11, Christoph Anton Mitterer wrote:
On Wed, 2017-08-16 at 09:53 -0400, Austin S. Hemmelgarn wrote:
Go try BTRFS on top of dm-integrity, or on a
system with T10-DIF or T13-EPP support
When dm-integrity is used... would that be enough for btrfs to do a
proper repair in the RAID
On 2017-08-16 09:12, Chris Mason wrote:
My real goal is to make COW fast enough that we can leave it on for the
database applications too. Obviously I haven't quite finished that one
yet ;) But I'd rather keep the building block of all the other btrfs
features in place than try to do crcs
On 2017-08-16 09:31, Christoph Anton Mitterer wrote:
Just out of curiosity:
On Wed, 2017-08-16 at 09:12 -0400, Chris Mason wrote:
Btrfs couples the crcs with COW because
this (which sounds like you want it to stay coupled that way)...
plus
It's possible to protect against all three
On 2017-08-15 10:41, Christoph Anton Mitterer wrote:
On Tue, 2017-08-15 at 07:37 -0400, Austin S. Hemmelgarn wrote:
Go look at Chrome, or Firefox, or Opera, or any other major web
browser.
At minimum, they will safely bail out if they detect corruption in
the
user profile and can trivially
On 2017-08-14 15:54, Christoph Anton Mitterer wrote:
On Mon, 2017-08-14 at 11:53 -0400, Austin S. Hemmelgarn wrote:
Quite a few applications actually _do_ have some degree of secondary
verification or protection from a crash. Go look at almost any
database
software.
Then please give proper
On 2017-08-14 11:13, Graham Cobb wrote:
On 14/08/17 15:23, Austin S. Hemmelgarn wrote:
Assume you have higher level verification.
But almost no applications do. In real life, the decision
making/correction process will be manual and labour-intensive (for
example, running fsck on a virtual
On 2017-08-14 08:24, Christoph Anton Mitterer wrote:
On Mon, 2017-08-14 at 14:36 +0800, Qu Wenruo wrote:
And how are you going to write your data and checksum atomically
when
doing in-place updates?
Exactly, that's the main reason I can figure out why btrfs disables
checksum for nodatacow.
On 2017-08-13 21:01, Cerem Cem ASLAN wrote:
Would that be useful to build a BTRFS test machine, which will perform
both software tests (btrfs send | btrfs receive, read/write random
data etc.) and hardware tests, such as abrupt power off test, abruptly
removing a raid-X disk physically, etc.
In
On 2017-08-13 07:50, Adam Hunt wrote:
Back in 2014 Ted Tso introduced the lazytime mount option for ext4 and
shortly thereafter a more generic VFS implementation which was then
merged into mainline. His early patches included support for Btrfs but
those changes were removed prior to the feature
On 2017-08-11 05:57, Piotr Pawłow wrote:
Hello,
So 4.10 isn't /too/ far out of range yet, but I'd strongly consider
upgrading (or downgrading to 4.9 LTS) as soon as it's reasonably
convenient, before 4.13 in any case. Unless you prefer to go the
distro support route, of course.
I used to
On 2017-08-09 22:39, Nick Terrell wrote:
Add zstd compression and decompression support to BtrFS. zstd at its
fastest level compresses almost as well as zlib, while offering much
faster compression and decompression, approaching lzo speeds.
I benchmarked btrfs with zstd compression against no
On 2017-08-10 15:25, Hugo Mills wrote:
On Thu, Aug 10, 2017 at 01:41:21PM -0400, Chris Mason wrote:
On 08/10/2017 04:30 AM, Eric Biggers wrote:
Theses benchmarks are misleading because they compress the whole file as a
single stream without resetting the dictionary, which isn't how data will
On 2017-08-10 13:24, Eric Biggers wrote:
On Thu, Aug 10, 2017 at 07:32:18AM -0400, Austin S. Hemmelgarn wrote:
On 2017-08-10 04:30, Eric Biggers wrote:
On Wed, Aug 09, 2017 at 07:35:53PM -0700, Nick Terrell wrote:
It can compress at speeds approaching lz4, and quality approaching lzma
On 2017-08-10 07:32, Austin S. Hemmelgarn wrote:
On 2017-08-10 04:30, Eric Biggers wrote:
On Wed, Aug 09, 2017 at 07:35:53PM -0700, Nick Terrell wrote:
It can compress at speeds approaching lz4, and quality approaching lzma.
Well, for a very loose definition of "approaching", and
On 2017-08-10 04:30, Eric Biggers wrote:
On Wed, Aug 09, 2017 at 07:35:53PM -0700, Nick Terrell wrote:
It can compress at speeds approaching lz4, and quality approaching lzma.
Well, for a very loose definition of "approaching", and certainly not at the
same time. I doubt there's a use case
On 2017-08-04 10:45, Goffredo Baroncelli wrote:
On 2017-08-03 19:23, Austin S. Hemmelgarn wrote:
On 2017-08-03 12:37, Goffredo Baroncelli wrote:
On 2017-08-03 13:39, Austin S. Hemmelgarn wrote:
[...]
Also, as I said below, _THIS WORKS ON ZFS_. That immediately means that a CoW
filesystem
On 2017-08-03 16:45, Brendan Hide wrote:
On 08/03/2017 09:22 PM, Austin S. Hemmelgarn wrote:
On 2017-08-03 14:29, Christoph Anton Mitterer wrote:
On Thu, 2017-08-03 at 20:08 +0200, waxhead wrote:
There are no higher-level management tools (e.g. RAID
management/monitoring, etc.)...
[snip
On 2017-08-03 14:29, Christoph Anton Mitterer wrote:
On Thu, 2017-08-03 at 20:08 +0200, waxhead wrote:
Brendan Hide wrote:
The title seems alarmist to me - and I suspect it is going to be
misconstrued. :-/
From the release notes at
On 2017-08-03 14:08, waxhead wrote:
Brendan Hide wrote:
The title seems alarmist to me - and I suspect it is going to be
misconstrued. :-/
From the release notes at
On 2017-08-03 13:15, Marat Khalili wrote:
On August 3, 2017 7:01:06 PM GMT+03:00, Goffredo Baroncelli
The file is physically extended
ghigo@venice:/tmp$ fallocate -l 1000 foo.txt
For clarity let's replace the fallocate above with:
$ head -c 1000 foo.txt
ghigo@venice:/tmp$ ls -l foo.txt
On 2017-08-03 12:37, Goffredo Baroncelli wrote:
On 2017-08-03 13:39, Austin S. Hemmelgarn wrote:
On 2017-08-02 17:05, Goffredo Baroncelli wrote:
On 2017-08-02 21:10, Austin S. Hemmelgarn wrote:
On 2017-08-02 13:52, Goffredo Baroncelli wrote:
Hi,
[...]
consider the following scenario
On 2017-08-03 07:44, Marat Khalili wrote:
On 02/08/17 20:52, Goffredo Baroncelli wrote:
consider the following scenario:
a) create a 2GB file
b) fallocate -o 1GB -l 2GB
c) write from 1GB to 3GB
after b), the expectation is that c) always succeed [1]: i.e. there is
enough space on the
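Scaled down from gigabytes to megabytes, the a/b/c scenario above can be reproduced with posix_fallocate(). This is only a sketch of the expectation being discussed (sizes shrunk so it runs anywhere), not a btrfs-specific test; whether step (b) actually guarantees step (c) on btrfs is exactly what the thread is debating:

```python
import os
import tempfile

MB = 1024 * 1024
path = os.path.join(tempfile.mkdtemp(), "foo")
fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o600)

# a) create a 2 MB file (stand-in for the 2 GB file in the thread)
os.pwrite(fd, b"\0" * (2 * MB), 0)

# b) fallocate -o 1M -l 2M: reserve the range [1 MB, 3 MB)
os.posix_fallocate(fd, 1 * MB, 2 * MB)

# c) write from 1 MB to 3 MB; after (b) this is expected to succeed
os.pwrite(fd, b"x" * (2 * MB), 1 * MB)

os.close(fd)
print(os.path.getsize(path))  # 3 MB, i.e. 3145728
```

On a CoW filesystem the overwrite in (c) cannot reuse the blocks reserved over the already-written [1 MB, 2 MB) range in place, which is where the space-accounting expectation breaks down.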
On 2017-08-02 17:05, Goffredo Baroncelli wrote:
On 2017-08-02 21:10, Austin S. Hemmelgarn wrote:
On 2017-08-02 13:52, Goffredo Baroncelli wrote:
Hi,
[...]
consider the following scenario:
a) create a 2GB file
b) fallocate -o 1GB -l 2GB
c) write from 1GB to 3GB
after b), the expectation
On 2017-08-02 13:52, Goffredo Baroncelli wrote:
Hi,
On 2017-08-01 17:00, Austin S. Hemmelgarn wrote:
OK, I just did a dead simple test by hand, and it looks like I was right. The
method I used to check this is as follows:
1. Create and mount a reasonably small filesystem (I used an 8G
On 2017-08-02 08:55, Lutz Vieweg wrote:
On 08/02/2017 01:25 PM, Austin S. Hemmelgarn wrote:
And this is a worst-case result of the fact that most
distros added BTRFS support long before it was ready.
RedHat still advertises "Ceph", and given Ceph initially recommen
On 2017-08-02 04:38, Brendan Hide wrote:
The title seems alarmist to me - and I suspect it is going to be
misconstrued. :-/
From the release notes at
On 2017-08-02 00:14, Duncan wrote:
Austin S. Hemmelgarn posted on Tue, 01 Aug 2017 10:47:30 -0400 as
excerpted:
I think I _might_ understand what's going on here. Is that test program
calling fallocate using the desired total size of the file, or just
trying to allocate the range beyond
On 2017-08-01 15:07, Holger Hoffstätte wrote:
On 08/01/17 20:15, Holger Hoffstätte wrote:
On 08/01/17 19:34, Austin S. Hemmelgarn wrote:
[..]
Apparently, if you call fallocate() on a file with an offset of 0 and
a length longer than the length of the file itself, BTRFS will
allocate that exact
On 2017-08-01 13:25, Roman Mamedov wrote:
On Tue, 1 Aug 2017 10:14:23 -0600
Liu Bo wrote:
This aims to fix write hole issue on btrfs raid5/6 setup by adding a
separate disk as a journal (aka raid5/6 log), so that after unclean
shutdown we can make sure data and parity
A recent thread on the BTRFS mailing list [1] brought up some odd
behavior in BTRFS that I've long suspected but not had prior reason to
test. I've put the fsdevel mailing list on CC since I'm curious to hear
what people there think about this.
Apparently, if you call fallocate() on a file
for the help.
Glad I could be helpful!
/Per W
On Tue, 1 Aug 2017, Austin S. Hemmelgarn wrote:
On 2017-08-01 11:24, pwm wrote:
Yes, the test code is as below - trying to match what snapraid tries
to do:
#include
#include
#include
#include
#include
#include
#include
int main() {
int fd
ion, I'd argue that the behavior of BTRFS in this situation
is incorrect.
/Per W
On Tue, 1 Aug 2017, Austin S. Hemmelgarn wrote:
On 2017-08-01 10:47, Austin S. Hemmelgarn wrote:
On 2017-08-01 10:39, pwm wrote:
Thanks for the links and suggestions.
I did try your suggestions but it didn't
On 2017-08-01 10:47, Austin S. Hemmelgarn wrote:
On 2017-08-01 10:39, pwm wrote:
Thanks for the links and suggestions.
I did try your suggestions but it didn't solve the underlying problem.
pwm@europium:~$ sudo btrfs balance start -v -dusage=20 /mnt/snap_04
Dumping filters: flags 0x1, state
On 2017-08-01 10:39, pwm wrote:
Thanks for the links and suggestions.
I did try your suggestions but it didn't solve the underlying problem.
pwm@europium:~$ sudo btrfs balance start -v -dusage=20 /mnt/snap_04
Dumping filters: flags 0x1, state 0x0, force is off
DATA (flags 0x2): balancing,
On 2017-07-31 08:30, Sebastian Ochmann wrote:
On 31.07.2017 14:08, Austin S. Hemmelgarn wrote:
On 2017-07-31 06:51, Sebastian Ochmann wrote:
Hello,
I have a quite simple and possibly stupid question. Since I'm
occasionally seeing warnings about failed loading of free space
cache, I wanted
On 2017-07-31 06:51, Sebastian Ochmann wrote:
Hello,
I have a quite simple and possibly stupid question. Since I'm
occasionally seeing warnings about failed loading of free space cache, I
wanted to clear and rebuild space cache. So I mounted the filesystem(s)
with -o clear_cache and
On 2017-07-29 19:04, Cloud Admin wrote:
Am Montag, den 24.07.2017, 18:40 +0200 schrieb Cloud Admin:
Am Montag, den 24.07.2017, 10:25 -0400 schrieb Austin S. Hemmelgarn:
On 2017-07-24 10:12, Cloud Admin wrote:
Am Montag, den 24.07.2017, 09:46 -0400 schrieb Austin S.
Hemmelgarn:
On 2017-07-24
On 2017-07-26 08:27, Hugo Mills wrote:
On Wed, Jul 26, 2017 at 08:12:19AM -0400, Austin S. Hemmelgarn wrote:
On 2017-07-25 17:45, Hugo Mills wrote:
On Tue, Jul 25, 2017 at 11:29:13PM +0200, waxhead wrote:
Hugo Mills wrote:
You can see about the disk usage in different scenarios
On 2017-07-25 17:45, Hugo Mills wrote:
On Tue, Jul 25, 2017 at 11:29:13PM +0200, waxhead wrote:
Hugo Mills wrote:
You can see about the disk usage in different scenarios with the
online tool at:
http://carfax.org.uk/btrfs-usage/
Hugo.
As a side note, have you ever considered
On 2017-07-25 08:55, Hérikz Nawarro wrote:
Hello everyone,
I'm migrating to btrfs and i would like to know, in a btrfs filesystem
with 4 disks (multiple sizes) with -d raid0 & -m raid1, how many
drives can i lost without losing the entire array?
Exactly one, but you will lose data if you lose
On 2017-07-24 14:53, Chris Mason wrote:
On 07/24/2017 02:41 PM, David Sterba wrote:
would it be ok for you to keep ssd_working as before?
I'd really like to get this patch merged soon because "do not use ssd
mode for ssd" has started to be the recommended workaround. Once this
sticks, we
On 2017-07-24 10:12, Cloud Admin wrote:
Am Montag, den 24.07.2017, 09:46 -0400 schrieb Austin S. Hemmelgarn:
On 2017-07-24 07:27, Cloud Admin wrote:
Hi,
I have a multi-device pool (three discs) as RAID1. Now I want to
add a
new disc to increase the pool. I followed the description on https
On 2017-07-24 07:27, Cloud Admin wrote:
Hi,
I have a multi-device pool (three discs) as RAID1. Now I want to add a
new disc to increase the pool. I followed the description on https://bt
rfs.wiki.kernel.org/index.php/Using_Btrfs_with_Multiple_Devices and
used 'btrfs add '. After that I called a
On 2017-07-22 07:35, Adam Borowski wrote:
On Fri, Jul 21, 2017 at 11:56:21AM -0400, Austin S. Hemmelgarn wrote:
On 2017-07-20 17:27, Nick Terrell wrote:
This patch set adds xxhash, zstd compression, and zstd decompression
modules. It also adds zstd support to BtrFS and SquashFS.
Each patch
On 2017-07-21 19:21, Hans van Kranenburg wrote:
> On 07/21/2017 05:50 PM, Austin S. Hemmelgarn wrote:
>> On 2017-07-21 07:47, Hans van Kranenburg wrote:
>>> [...]
>>>
>>> Signed-off-by: Hans van Kranenburg <hans.van.kranenb...@mendix.com>
>> Beha
and had runtime testing running for
about 18 hours now with no issues, so you can add:
Tested-by: Austin S. Hemmelgarn <ahferro...@gmail.com>
For patch 1, I've only compile tested it, but had no issues and got no
warnings about it when booting to test 2-4.
For patch 4, I've compile
ws things down, I've
been forcing '-o nossd' on my systems for a while now for the
performance improvement), so you can add:
Tested-by: Austin S. Hemmelgarn <ahferro...@gmail.com>
---
fs/btrfs/ctree.h| 4 ++--
fs/btrfs/disk-io.c | 6 ++
fs/btrfs/exten
On 2017-07-21 07:16, Austin S. Hemmelgarn wrote:
On 2017-07-20 17:27, Nick Terrell wrote:
Well this is embarrassing, forgot to type anything before hitting send...
Hi all,
This patch set adds xxhash, zstd compression, and zstd decompression
modules. It also adds zstd support to BtrFS
On 2017-07-20 17:27, Nick Terrell wrote:
Hi all,
This patch set adds xxhash, zstd compression, and zstd decompression
modules. It also adds zstd support to BtrFS and SquashFS.
Each patch has relevant summaries, benchmarks, and tests.
Best,
Nick Terrell
Changelog:
v1 -> v2:
- Make pointer in
On 2017-07-12 21:09, Adam Borowski wrote:
On Thu, Jul 13, 2017 at 02:50:10AM +0200, David Sterba wrote:
On Mon, Jul 10, 2017 at 09:11:50PM +0300, Dmitrii Tcvetkov wrote:
Tested on top of current mainline master (commit
af3c8d98508d37541d4bf57f13a984a7f73a328c). Didn't find any
regressions.
On 2017-07-10 00:21, Daniel Brady wrote:
On 7/7/2017 1:06 AM, Daniel Brady wrote:
On 7/6/2017 11:48 PM, Roman Mamedov wrote:
On Wed, 5 Jul 2017 22:10:35 -0600
Daniel Brady wrote:
parent transid verify failed
Typically in Btrfs terms this means "you're screwed", fsck
On 2017-07-09 22:13, Adam Bahe wrote:
I have finished all of the above suggestions, ran a scrub, remounted,
rebooted, made sure the system didn't hang, and then kicked off
another balance on the entire pool. It completed rather quickly but
something still does not seem right.
Label:
On 2017-07-07 23:07, Adam Borowski wrote:
On Sat, Jul 08, 2017 at 01:40:18AM +0200, Adam Borowski wrote:
On Fri, Jul 07, 2017 at 11:17:49PM +, Nick Terrell wrote:
On 7/6/17, 9:32 AM, "Adam Borowski" wrote:
Got a reproducible crash on amd64:
Thanks for the bug
On 2017-07-07 13:40, Chris Murphy wrote:
On Fri, Jul 7, 2017 at 10:59 AM, Andrei Borzenkov wrote:
On 07.07.2017 19:42, Chris Murphy wrote:
I'm digging through piles of list emails and not really finding an
answer to this. Maybe it's Friday and I'm just confused...
On 2017-07-06 08:09, Lionel Bouton wrote:
On 06/07/2017 13:59, Austin S. Hemmelgarn wrote:
On 2017-07-05 20:25, Nick Terrell wrote:
On 7/5/17, 12:57 PM, "Austin S. Hemmelgarn" <ahferro...@gmail.com>
wrote:
It's the slower compression speed that has me arguing for
On 2017-07-05 20:25, Nick Terrell wrote:
On 7/5/17, 12:57 PM, "Austin S. Hemmelgarn" <ahferro...@gmail.com> wrote:
It's the slower compression speed that has me arguing for the
possibility of configurable levels on zlib. 11MB/s is painfully slow
considering that most decent
On 2017-07-05 23:19, Paul Jones wrote:
While reading the thread about adding zstd compression, it occurred
to me that there is potentially another thing affecting performance -
Compressed extent size. (correct my terminology if it's incorrect). I
have two near identical RAID1 filesystems (used
On 2017-07-05 15:35, Nick Terrell wrote:
On 7/5/17, 11:45 AM, "Austin S. Hemmelgarn" <ahferro...@gmail.com> wrote:
On 2017-07-05 14:18, Adam Borowski wrote:
On Wed, Jul 05, 2017 at 07:43:27AM -0400, Austin S. Hemmelgarn
wrote:
On 2017-06-30 19:01, Nick Terrell wrote:
There
On 2017-07-05 14:18, Adam Borowski wrote:
On Wed, Jul 05, 2017 at 07:43:27AM -0400, Austin S. Hemmelgarn
wrote:
On 2017-06-30 19:01, Nick Terrell wrote:
There is also the fact of deciding what to use for the default
when specified without a level. This is easy for lzo and zlib,
where we can
On 2017-06-30 19:01, Nick Terrell wrote:
If we're going to make that configurable, there are some things to
consider:
* the underlying compressed format -- does not change for different
levels
This is true for zlib and zstd. lzo in the kernel only supports one
compression level.
I had
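The point above, that the compressed format does not change across levels, can be checked with stdlib zlib: streams produced at different levels all feed the same decompressor. A small illustration (not btrfs code; the sample data is synthetic):

```python
import zlib

# Regular, compressible sample data (a repeating 256-byte pattern, 1 MiB)
data = bytes(range(256)) * 4096

sizes = {}
for level in (1, 6, 9):
    compressed = zlib.compress(data, level)
    # One decompressor handles every level: the on-disk format is identical
    assert zlib.decompress(compressed) == data
    sizes[level] = len(compressed)

print(sizes)  # higher levels generally trade CPU time for smaller output
```

This is why a per-level mount option only affects writers: existing extents compressed at any level stay readable regardless of what level is configured later.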
On 2017-06-30 10:21, David Sterba wrote:
On Fri, Jun 30, 2017 at 08:16:20AM -0400, E V wrote:
On Thu, Jun 29, 2017 at 3:41 PM, Nick Terrell wrote:
Add zstd compression and decompression support to BtrFS. zstd at its
fastest level compresses almost as well as zlib, while
On 2017-06-26 22:49, Qu Wenruo wrote:
At 06/27/2017 09:59 AM, Anand Jain wrote:
On 06/27/2017 09:05 AM, Qu Wenruo wrote:
At 06/27/2017 02:59 AM, David Sterba wrote:
On Thu, Mar 09, 2017 at 09:34:35AM +0800, Qu Wenruo wrote:
Btrfs currently uses num_tolerated_disk_barrier_failures to do
On 2017-06-23 13:25, Michał Sokołowski wrote:
Hello group.
I am confused: Can somebody please confirm/deny, which RAID subsystem is
affected? BTRFS' RAID5/6 or mdadm (Linux kernel raid) RAID 5/6 ?
All of the issues mentioned here are specific to BTRFS raid5/raid6
profiles, with the exception
On 2017-06-22 05:37, Shyam Prasad N wrote:
Hi,
I'm planning to use the btrfs-convert tool to convert production data
in ext4 filesystem into btrfs.
What is the stability status of this feature?
As per the below link, this tool is not in frequent use in latest linux kernels.
On 2017-06-21 13:20, Andrei Borzenkov wrote:
On 21.06.2017 16:41, Austin S. Hemmelgarn wrote:
On 2017-06-21 08:43, Christoph Anton Mitterer wrote:
On Wed, 2017-06-21 at 16:45 +0800, Qu Wenruo wrote:
Btrfs is always using device ID to build up its device mapping.
And for any multi-device
On 2017-06-21 08:43, Christoph Anton Mitterer wrote:
On Wed, 2017-06-21 at 16:45 +0800, Qu Wenruo wrote:
Btrfs is always using device ID to build up its device mapping.
And for any multi-device implementation (LVM,mdadam) it's never a
good
idea to use device path.
Isn't it rather the other
On 2017-06-01 10:54, Alexander Peganz wrote:
Hello,
I am trying to understand what differences there are in using btrfs
raid1 vs raid10 in terms of recoverability and also performance.
This has proven itself to be more difficult than expected since all
search results I could come up with
On 2017-05-23 14:32, Kai Krakow wrote:
Am Tue, 23 May 2017 07:21:33 -0400
schrieb "Austin S. Hemmelgarn" <ahferro...@gmail.com>:
On 2017-05-22 22:07, Chris Murphy wrote:
On Mon, May 22, 2017 at 5:57 PM, Marc MERLIN <m...@merlins.org>
wrote:
On Mon, May 22, 2017 at 0