On 2017-11-16 08:43, Duncan wrote:
Austin S. Hemmelgarn posted on Thu, 16 Nov 2017 07:30:47 -0500 as
excerpted:
On 2017-11-15 16:31, Duncan wrote:
Austin S. Hemmelgarn posted on Wed, 15 Nov 2017 07:57:06 -0500 as
excerpted:
The 'compress' and 'compress-force' mount options only impact newly
On 2017-11-16 07:33, Zdenek Kabelac wrote:
On 2017-11-16 11:04, Qu Wenruo wrote:
On 2017-11-16 17:43, Zdenek Kabelac wrote:
On 2017-11-16 09:08, Qu Wenruo wrote:
[What we have]
The nearest infrastructure I found in kernel is
bio_integrity_payload.
Hi
We already have
On 2017-11-15 16:31, Duncan wrote:
Austin S. Hemmelgarn posted on Wed, 15 Nov 2017 07:57:06 -0500 as
excerpted:
The 'compress' and 'compress-force' mount options only impact newly
written data. The compression used is stored with the metadata for the
extents themselves, so any existing data
On 2017-11-15 05:35, Imran Geriskovan wrote:
On 11/15/17, Lukas Pirl wrote:
you might be interested in the thread "Read before you deploy
btrfs + zstd"¹.
Thanks. I've read it. Bootloader is not an issue since /boot is on
another uncompressed fs.
Let me make my question
On 2017-11-15 02:11, waxhead wrote:
As a regular BTRFS user I can tell you that there is no such thing as
hot data tracking yet. Some people seem to use bcache together with
btrfs and come asking for help on the mailing list.
Bcache works fine recently. It was only with older versions that
On 2017-11-15 04:26, Marat Khalili wrote:
On 15/11/17 10:11, waxhead wrote:
hint: you need more than two for raid1 if you want to stay safe
Huh? Two is not enough? Having three or more makes a difference? (Or,
you mean hot spare?)
They're probably referring to an issue where a two device
On 2017-11-14 03:36, Klaus Agnoletti wrote:
Hi list
I used to have 3x2TB in a btrfs in raid0. A few weeks ago, one of the
2TB disks started giving me I/O errors in dmesg like this:
[388659.173819] ata5.00: exception Emask 0x0 SAct 0x7fff SErr 0x0 action 0x0
[388659.175589] ata5.00:
On 2017-11-14 07:48, Roman Mamedov wrote:
On Tue, 14 Nov 2017 10:36:22 +0200
Klaus Agnoletti wrote:
Obviously, I want /dev/sdd emptied and deleted from the raid.
* Unmount the RAID0 FS
* copy the bad drive using `dd_rescue`[1] into a file on the 6TB drive
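The imaging step suggested above might look like the sketch below. It substitutes GNU ddrescue for the `dd_rescue` tool named in the mail, and the device and file paths are invented for illustration:

```shell
# Sketch of the rescue flow: image the failing drive onto the healthy
# disk, then expose the image as a block device in place of the bad one.
# Assumes GNU ddrescue; all paths are examples, not from the thread.
rescue_member() {
  bad=$1            # e.g. /dev/sdd, the failing RAID0 member
  img=$2            # e.g. /mnt/6tb/sdd.img on the 6TB drive
  # -d uses direct disc access; the map file makes the copy resumable
  ddrescue -d "$bad" "$img" "$img.map"
  # print a loop device that can stand in for the bad drive
  losetup --find --show "$img"
}
```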
On 2017-11-14 02:34, Martin Steigerwald wrote:
Hello David.
David Sterba - 13.11.17, 23:50:
while 4.14 is still fresh, let me address some concerns I've seen on linux
forums already.
The newly added ZSTD support is a feature that has broader impact than
just the runtime compression. The
On 2017-11-11 19:28, Qu Wenruo wrote:
On 2017-11-12 04:12, Hans van Kranenburg wrote:
Hi,
On 11/11/2017 04:48 AM, Qu Wenruo wrote:
On 2017-11-11 11:13, Hans van Kranenburg wrote:
On 11/11/2017 03:30 AM, Qu Wenruo wrote:
One more chance to recover is never a bad idea.
It is a bad
On 2017-11-08 13:31, Chris Murphy wrote:
On Wed, Nov 8, 2017 at 11:10 AM, Austin S. Hemmelgarn
<ahferro...@gmail.com> wrote:
On 2017-11-08 12:54, Chris Murphy wrote:
On Wed, Nov 8, 2017 at 10:22 AM, Hugo Mills <h...@carfax.org.uk> wrote:
On Wed, Nov 08, 2017 at 10:17:28AM
On 2017-11-08 12:54, Chris Murphy wrote:
On Wed, Nov 8, 2017 at 10:22 AM, Hugo Mills <h...@carfax.org.uk> wrote:
On Wed, Nov 08, 2017 at 10:17:28AM -0700, Chris Murphy wrote:
On Wed, Nov 8, 2017 at 5:13 AM, Austin S. Hemmelgarn
<ahferro...@gmail.com> wrote:
It definitely does fi
On 2017-11-07 23:50, Chris Murphy wrote:
On Tue, Nov 7, 2017 at 6:02 AM, Austin S. Hemmelgarn
<ahferro...@gmail.com> wrote:
* Optional automatic correction of errors detected during normal usage.
Right now, you have to run a scrub to correct errors. Such a design makes
sense with MD a
On 2017-11-07 02:01, Dave wrote:
On Sat, Nov 4, 2017 at 1:25 PM, Chris Murphy wrote:
On Sat, Nov 4, 2017 at 1:26 AM, Dave wrote:
On Mon, Oct 30, 2017 at 5:37 PM, Chris Murphy wrote:
That is not a general purpose
On 2017-11-06 17:37, Chris Murphy wrote:
I'm doing copies from one subvolume to another, through a mounted top
level (id5) at /mnt/int.
This copies the whole file conventionally (no shared extents)
$ sudo cp /mnt/int/home/chris/Downloads/Fedora-Server-dvd-x86_64-27-1.6.iso
On 2017-11-06 13:35, Chris Murphy wrote:
On Mon, Nov 6, 2017 at 6:51 AM, Austin S. Hemmelgarn
<ahferro...@gmail.com> wrote:
This brings to mind another 'feature' of BTRFS that I came across recently,
namely that subvolumes that aren't explicitly mounted still show up as mount
points acc
On 2017-11-06 13:45, Chris Murphy wrote:
On Mon, Nov 6, 2017 at 6:29 AM, Austin S. Hemmelgarn
<ahferro...@gmail.com> wrote:
With ATA devices (including SATA), except on newer SSD's, TRIM commands
can't be queued,
SATA spec 3.1 includes queued trim. There are SATA spec 3.1 pr
On 2017-11-05 03:01, Andrei Borzenkov wrote:
On 2017-11-04 21:55, Chris Murphy wrote:
On Sat, Nov 4, 2017 at 12:27 PM, Andrei Borzenkov wrote:
On 2017-11-04 10:05, Adam Borowski wrote:
On Sat, Nov 04, 2017 at 09:26:36AM +0300, Andrei Borzenkov wrote:
On 2017-11-04 07:49, Adam
On 2017-11-04 13:14, Chris Murphy wrote:
On Fri, Nov 3, 2017 at 10:46 PM, Adam Borowski <kilob...@angband.pl> wrote:
On Fri, Nov 03, 2017 at 04:03:44PM -0600, Chris Murphy wrote:
On Tue, Oct 31, 2017 at 5:28 AM, Austin S. Hemmelgarn
<ahferro...@gmail.com> wrote:
If you're runni
On 2017-11-03 03:42, Kai Krakow wrote:
On Tue, 31 Oct 2017 07:28:58 -0400,
"Austin S. Hemmelgarn" <ahferro...@gmail.com> wrote:
On 2017-10-31 01:57, Marat Khalili wrote:
On 31/10/17 00:37, Chris Murphy wrote:
But off hand it sounds like hardware was sabotaging the expecte
On 2017-11-03 03:26, Kai Krakow wrote:
On Thu, 2 Nov 2017 22:47:31 -0400,
Dave wrote:
On Thu, Nov 2, 2017 at 5:16 PM, Kai Krakow
wrote:
You may want to try btrfs autodefrag mount option and see if it
improves things (tho, the effect may take
On 2017-11-02 14:09, Dave wrote:
On Thu, Nov 2, 2017 at 7:17 AM, Austin S. Hemmelgarn
<ahferro...@gmail.com> wrote:
And the worst performing machine was the one with the most RAM and a
fast NVMe drive and top of the line hardware.
Somewhat nonsensically, I'll bet that NVMe is a contri
On 2017-11-02 12:28, ST wrote:
On Thu, 2017-11-02 at 19:16 +0300, Marat Khalili wrote:
Could somebody among developers please elaborate on this issue - is
checking quota going always to be done by root? If so - btrfs might be
a no-go for our use case...
Not a developer, but sysadmin here:
On 2017-11-02 11:02, Martin Raiber wrote:
Hi,
snapshot cleanup is a little slow in my case (50TB volume). Would it
help to have multiple btrfs-cleaner threads? The block layer underneath
would have higher throughput with more simultaneous read/write requests.
I think a bigger impact would be
On 2017-11-02 03:29, ronnie sahlberg wrote:
I think it is just a matter of lack of resources.
The very few paid resources to work on btrfs probably does not have
priority to work on parity raid.
(And honestly, parity raid is probably much better implemented below
the filesystem in any case, i.e.
each other.)
What originally caught my attention was earlier information in this thread:
On Wed, 20 Sep 2017 07:46:52 -0400,
"Austin S. Hemmelgarn" <ahferro...@gmail.com> wrote:
Fragmentation: Files with a lot of random writes can become
heavily fragmented (1+ ext
On 2017-11-01 21:39, Dave wrote:
On Wed, Nov 1, 2017 at 8:21 AM, Austin S. Hemmelgarn
<ahferro...@gmail.com> wrote:
The cache is in a separate location from the profiles, as I'm sure you
know. The reason I suggested a separate BTRFS subvolume for
$HOME/.cache is that this will p
On 2017-11-02 05:09, ST wrote:
Ok. I'll use more standard approaches. Which of following commands will
work with BTRFS:
https://debian-handbook.info/browse/stable/sect.quotas.html
None, qgroups are the only option right now with BTRFS, and it's pretty
likely to stay that way since the
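Since qgroups are the only quota mechanism available, the basic setup can be sketched as below; the mount point, subvolume name, and the limit value are illustrative assumptions, not from the thread:

```shell
# Minimal qgroup-based quota setup for one subvolume.
enable_subvol_quota() {
  mnt=$1     # e.g. /srv (the filesystem mount point)
  subvol=$2  # e.g. home/alice (a subvolume under $mnt)
  limit=$3   # e.g. 10G
  btrfs quota enable "$mnt"
  btrfs qgroup limit "$limit" "$mnt/$subvol"
  # usage is then reported per qgroup with: btrfs qgroup show "$mnt"
}
```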
On 2017-11-01 13:52, Andrei Borzenkov wrote:
On 2017-11-01 15:01, Austin S. Hemmelgarn wrote:
...
The default subvolume is what gets mounted if you don't specify a
subvolume to mount. On a newly created filesystem, it's subvolume ID 5,
which is the top-level of the filesystem itself. Debian does
On 2017-11-01 10:05, ST wrote:
3. in my current ext4-based setup I have two servers while one syncs
files of certain dir to the other using lsyncd (which launches rsync on
inotify events). As far as I have understood it is more efficient to use
btrfs send/receive (over ssh) than rsync (over
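A send/receive pipeline of the kind contrasted with rsync above might be sketched like this; every path, the host name, and the "last" parent-snapshot convention are assumptions for illustration, not from the thread:

```shell
# Hedged sketch of incremental btrfs send/receive over ssh.
sync_incremental() {
  src=$1   # e.g. /srv/data (a subvolume)
  host=$2  # e.g. backup.example.com
  dst=$3   # e.g. /backup on the remote btrfs filesystem
  now=$(date +%Y-%m-%d_%H%M)
  btrfs subvolume snapshot -r "$src" "$src/.snap/$now"
  sync
  # -p sends only the delta relative to the parent snapshot, which must
  # already exist on the receiving side (no per-file scan as with rsync)
  btrfs send -p "$src/.snap/last" "$src/.snap/$now" \
    | ssh "$host" btrfs receive "$dst"
  ln -sfn "$now" "$src/.snap/last"
}
```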
On 2017-10-31 20:37, Dave wrote:
On Tue, Oct 31, 2017 at 7:06 PM, Peter Grandi
wrote:
Also nothing forces you to defragment a whole filesystem, you
can just defragment individual files or directories by using
'find' with it.
Thanks for that info. When
On 2017-10-31 16:06, ST wrote:
Thank you very much for such an informative response!
On Tue, 2017-10-31 at 13:45 -0400, Austin S. Hemmelgarn wrote:
On 2017-10-31 12:23, ST wrote:
Hello,
I've recently learned about btrfs and am considering utilizing it for my needs.
I have several questions
On 2017-10-31 15:54, Lentes, Bernd wrote:
- On Oct 31, 2017, at 6:00 PM, Austin S. Hemmelgarn ahferro...@gmail.com
wrote:
Assuming you're careful about how you install it (that is, put it in a
custom prefix that isn't in $PATH), you could always build a local
version of Python. Once
On 2017-10-31 14:51, Andrei Borzenkov wrote:
On 2017-10-31 20:45, Austin S. Hemmelgarn wrote:
On 2017-10-31 12:23, ST wrote:
Hello,
I've recently learned about btrfs and am considering utilizing it for my needs.
I have several questions in this regard:
I manage a dedicated server remotely and have some
On 2017-10-31 12:23, ST wrote:
Hello,
I've recently learned about btrfs and am considering utilizing it for my needs.
I have several questions in this regard:
I manage a dedicated server remotely and have some sort of script that
installs an OS from several images. There I can define partitions and
On 2017-10-31 12:54, Lentes, Bernd wrote:
- On Oct 31, 2017, at 2:59 PM, Austin S. Hemmelgarn ahferro...@gmail.com
wrote:
Hi Austin,
thanks for your effort. What are the minimum prerequisites for kernel and
btrfsprogs for that script?
Do you think it will run on my old SLES 11 SP4
A new version of btrfs-subv-backup has just been uploaded to github.
Yes, I know I skipped v0.2b, I actually did upload a version v0.2b, I
just forgot to post anything about it here...
Changes since the last time I posted here:
v0.2b:
* Updated the LICENSE file so that GitHub properly
On 2017-10-31 08:40, Lentes, Bernd wrote:
- On Oct 26, 2017, at 15:32, Austin S. Hemmelgarn
ahferro...@gmail.com wrote:
As previously mentioned on the list, I've written up a script to back up
BTRFS subvolume structures in regular file-based backups. As of right
now, it's still a bit
On 2017-10-31 01:57, Marat Khalili wrote:
On 31/10/17 00:37, Chris Murphy wrote:
But off hand it sounds like hardware was sabotaging the expected write
ordering. How to test a given hardware setup for that, I think, is
really overdue. It affects literally every file system, and Linux
storage
On 2017-10-26 11:25, Marat Khalili wrote:
Hello Austin,
Looks very useful. Two questions:
1. Can you release it under some standard license recognized by github,
in case someone wants to include it in other projects? AGPL-3.0 would be
nice.
The intent is for it to be under what Github calls
As previously mentioned on the list, I've written up a script to back up
BTRFS subvolume structures in regular file-based backups. As of right
now, it's still a bit rough around the edges, but it's cleaned up enough
that I consider it of at least beta quality, and therefore fit for more
On 2017-10-24 10:12, Andrei Borzenkov wrote:
On Tue, Oct 24, 2017 at 2:53 PM, Austin S. Hemmelgarn
<ahferro...@gmail.com> wrote:
SLES (and OpenSUSE in general) does do something special though, they use
subvolumes and qgroups to replicate multiple independent partitions (which
is a s
On 2017-10-24 09:28, Lentes, Bernd wrote:
-Original Message-
From: Austin S. Hemmelgarn [mailto:ahferro...@gmail.com]
Sent: Tuesday, October 24, 2017 1:53 PM
To: Adam Borowski <kilob...@angband.pl>; Lentes, Bernd
<bernd.len...@helmholtz-muenchen.de>
Cc: Btrfs ML
On 2017-10-21 14:07, Adam Borowski wrote:
On Sat, Oct 21, 2017 at 01:46:06PM +0200, Lentes, Bernd wrote:
- On Oct 21, 2017, at 4:31, Duncan 1i5t5.dun...@cox.net wrote:
Lentes, Bernd posted on Fri, 20 Oct 2017 20:40:15 +0200 as excerpted:
Is it generally possible to restore a btrfs
On 2017-10-19 14:39, Peter Grandi wrote:
[ ... ]
Oh please, please a bit less silliness would be welcome here.
In a previous comment on this tedious thread I had written:
If the block device abstraction layer and lower layers work
correctly, Btrfs does not have problems of that sort when
On 2017-10-19 10:42, Zoltan wrote:
On Thu, Oct 19, 2017 at 4:27 PM, Austin S. Hemmelgarn
<ahferro...@gmail.com> wrote:
and thus when the same device reappears (as it will when the disconnect was
due to a transient bus error, which happens a lot), it shows up as a
different device node,
On 2017-10-19 09:48, Zoltan wrote:
Hi,
On Thu, Oct 19, 2017 at 1:01 PM, Peter Grandi
wrote:
What the OP was doing was using "unreliable" both for the case
where the device "lies" and the case where the device does not
"lie" but reports a failure. Both of these
On 2017-10-19 07:01, Peter Grandi wrote:
[ ... ]
Oh please, please a bit less silliness would be welcome here.
In a previous comment on this tedious thread I had written:
If the block device abstraction layer and lower layers work
correctly, Btrfs does not have problems of that sort when
On 2017-10-18 07:59, Adam Borowski wrote:
On Wed, Oct 18, 2017 at 07:30:55AM -0400, Austin S. Hemmelgarn wrote:
On 2017-10-17 16:21, Adam Borowski wrote:
It's a single-device filesystem, thus disconnects are obviously fatal. But,
they never caused even a single bit of damage (as scrub goes
On 2017-10-18 09:53, Peter Grandi wrote:
I forget sometimes that people insist on storing large
volumes of data on unreliable storage...
Here obviously "unreliable" is used on the sense of storage that
can work incorrectly, not in the sense of storage that can fail.
Um, in what world is a
On 2017-10-17 13:58, Cloud Admin wrote:
Hi,
I want to remove two devices from a BTRFS RAID 1 pool. There should be
enough free space to do it, but what is the best strategy? Remove both
devices in one call 'btrfs dev rem /dev/sda1 /dev/sdb1' (for example), or
would it be better in two separate
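On the question above: `btrfs device remove` does accept several devices in one invocation, but the cautious one-at-a-time variant can be sketched as below; the mount point and device names are assumptions for illustration:

```shell
# Remove devices one at a time so only one chunk relocation is in
# flight; each call returns once that device is empty and detached.
remove_devices() {
  mnt=$1; shift   # e.g. remove_devices /mnt/pool /dev/sda1 /dev/sdb1
  for dev in "$@"; do
    btrfs device remove "$dev" "$mnt" || return 1
  done
}
```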
On 2017-10-17 16:21, Adam Borowski wrote:
On Tue, Oct 17, 2017 at 03:19:09PM -0400, Austin S. Hemmelgarn wrote:
On 2017-10-17 13:06, Adam Borowski wrote:
The thing is, reliability guarantees required vary WILDLY depending on your
particular use cases. On one hand, there's "even an one-m
On 2017-10-17 13:06, Adam Borowski wrote:
On Tue, Oct 17, 2017 at 08:40:20AM -0400, Austin S. Hemmelgarn wrote:
On 2017-10-17 07:42, Zoltan wrote:
On Tue, Oct 17, 2017 at 1:26 PM, Austin S. Hemmelgarn
<ahferro...@gmail.com> wrote:
I forget sometimes that people insist on storing
On 2017-10-17 07:42, Zoltan wrote:
Hi,
On Tue, Oct 17, 2017 at 1:26 PM, Austin S. Hemmelgarn
<ahferro...@gmail.com> wrote:
I forget sometimes that people insist on storing large volumes of data on
unreliable storage...
In my opinion the unreliability of the storage is the exact
On 2017-10-16 21:14, Adam Borowski wrote:
On Mon, Oct 16, 2017 at 01:27:40PM -0400, Austin S. Hemmelgarn wrote:
On 2017-10-16 12:57, Zoltan wrote:
On Mon, Oct 16, 2017 at 1:53 PM, Austin S. Hemmelgarn wrote:
In an ideal situation, scrubbing should not be an 'only if needed' thing,
even
On 2017-10-16 12:57, Zoltan wrote:
Hi,
On Mon, Oct 16, 2017 at 1:53 PM, Austin S. Hemmelgarn wrote:
you will need to scrub regularly to avoid data corruption
Is there any indication that a scrub is needed? Before actually doing
a scrub, is btrfs already aware that one of the devices did
On 2017-10-15 04:30, Zoltán Ivánfi wrote:
Hi,
Thanks for the replies.
As you both pointed out, I shouldn't have described the issue having
to do with hotplugging. I got confused by this use-case being somewhat
emphasized in the description of the bug I linked to. As for the
question of why I
On 2017-10-12 21:42, Kai Hendry wrote:
Thank you Austin & Chris for your replies!
On Fri, 13 Oct 2017, at 01:19 AM, Austin S. Hemmelgarn wrote:
Same here on a pair of 3 year old NUC's. Based on the traces and the
other information, I'd be willing to bet this is probably the root c
On 2017-10-12 12:57, Chris Murphy wrote:
On Sun, Oct 8, 2017 at 10:58 AM, Kai Hendry wrote:
Hi there,
My /mnt/raid1 suddenly became full somewhat expectedly, so I bought 2
new USB 4TB hard drives (one WD, one Seagate) to upgrade to.
After adding sde and sdd I started to see
On 2017-10-12 11:30, Chris Murphy wrote:
On Thu, Oct 12, 2017 at 3:44 PM, Joseph Dunn wrote:
On Thu, 12 Oct 2017 15:32:24 +0100
Chris Murphy wrote:
On Thu, Oct 12, 2017 at 2:20 PM, Joseph Dunn wrote:
On Thu, 12 Oct 2017
On 2017-10-06 19:33, Liu Bo wrote:
On Thu, Oct 05, 2017 at 07:07:44AM -0400, Austin S. Hemmelgarn wrote:
On 2017-10-04 16:11, Liu Bo wrote:
On Tue, Oct 03, 2017 at 11:59:20PM +0800, Anand Jain wrote:
From: Anand Jain <anand.j...@oracle.com>
Write and flush errors are critical errors
On 2017-10-04 16:11, Liu Bo wrote:
On Tue, Oct 03, 2017 at 11:59:20PM +0800, Anand Jain wrote:
From: Anand Jain
Write and flush errors are critical errors, upon which the device fd
must be closed and marked as failed.
Can we defer the job of closing device to umount?
On 2017-10-04 07:13, Tomasz Chmielewski wrote:
Kernel: 4.13.4, btrfs RAID-1.
Disk usage more or less like below (yes, I know about btrfs fi df / show
/ usage):
Filesystem Size Used Avail Use% Mounted on
/dev/sda3 424G 262G 161G 62% /var/lib/lxd
Balance would exit immediately
On 2017-09-27 20:00, Qu Wenruo wrote:
On 2017-09-28 00:20, David Sterba wrote:
On Mon, Sep 25, 2017 at 07:15:30AM -0400, Austin S. Hemmelgarn wrote:
On 2017-09-24 10:08, Goffredo Baroncelli wrote:
On 09/24/2017 12:10 PM, Anand Jain wrote:
A lot of points in this thread, let me address
On 2017-09-26 18:46, Ferry Toth wrote:
On Tue, 26 Sep 2017 15:52:44 -0400, Austin S. Hemmelgarn wrote:
On 2017-09-26 12:50, Ferry Toth wrote:
Looking at the Phoronix benchmark here:
https://www.phoronix.com/scan.php?page=article&item=linux414-bcache-raid&num=2
I think it might be idle hopes
On 2017-09-26 12:50, Ferry Toth wrote:
Looking at the Phoronix benchmark here:
https://www.phoronix.com/scan.php?page=article&item=linux414-bcache-raid&num=2
I think it might be idle hopes to think bcache can be used as a ssd cache
for btrfs to significantly improve performance.. True, the benchmark
On 2017-09-22 11:07, Qu Wenruo wrote:
On 2017-09-22 21:33, Austin S. Hemmelgarn wrote:
On 2017-09-22 08:32, Qu Wenruo wrote:
On 2017-09-22 19:38, Austin S. Hemmelgarn wrote:
On 2017-09-22 06:39, Qu Wenruo wrote:
As I already stated in an other thread, if you want to shrink, do
On 2017-09-24 10:08, Goffredo Baroncelli wrote:
On 09/24/2017 12:10 PM, Anand Jain wrote:
All my points are clear for this patchset:
I know I removed one function, and my reason is:
1) No or little usage
And it's anti intuition.
2) Dead code (not tested nor well documented)
3) Possible
On 2017-09-22 08:32, Qu Wenruo wrote:
On 2017-09-22 19:38, Austin S. Hemmelgarn wrote:
On 2017-09-22 06:39, Qu Wenruo wrote:
As I already stated in an other thread, if you want to shrink, do it
in another command line tool.
Do one thing and do it simple. (Although Btrfs itself is already out
On 2017-09-22 06:39, Qu Wenruo wrote:
As I already stated in another thread, if you want to shrink, do it in another
command line tool.
Do one thing and do it simple. (Although Btrfs itself is already out of the
UNIX way)
Unless I'm reading the code wrong, the shrinking isn't happening in a
On 2017-09-21 16:10, Kai Krakow wrote:
On Wed, 20 Sep 2017 07:46:52 -0400,
"Austin S. Hemmelgarn" <ahferro...@gmail.com> wrote:
Fragmentation: Files with a lot of random writes can become
heavily fragmented (1+ extents) causing excessive multi-second
spikes of CPU
On 2017-09-20 02:38, Dave wrote:
On Thu 2017-08-31 (09:05), Ulli Horlacher wrote:
When I do a
btrfs filesystem defragment -r /directory
does it defragment really all files in this directory tree, even if it
contains subvolumes?
The man page does not mention subvolumes on this topic.
No answer
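The question above (does `defragment -r` cross subvolume boundaries?) can be worked around by defragmenting each subvolume explicitly. This is a hedged sketch: it assumes `-r` stops at subvolume boundaries, as discussed, and that the last whitespace-separated field of `btrfs subvolume list` output is the subvolume path:

```shell
# Defragment a tree including its subvolumes, one subvolume at a time.
defrag_with_subvols() {
  mnt=$1   # assumed: the top-level subvolume (id 5) mounted here
  btrfs filesystem defragment -r "$mnt"
  btrfs subvolume list "$mnt" | awk '{print $NF}' | while read -r sub; do
    btrfs filesystem defragment -r "$mnt/$sub"
  done
}
```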
On 2017-09-19 14:33, Andrei Borzenkov wrote:
On 2017-09-19 14:49, Senén Vidal Blanco wrote:
Perfect!! Just what I was looking for.
Sorry for the delay, because before doing so, I preferred to test to see if it
actually worked.
I have a doubt. The system works perfectly, but at the time of
On 2017-09-19 11:30, Pat Sailor wrote:
Hello,
I have a half-filled raid1 on top of six spinning devices. Now I have
come into a spare SSD I'd like to use for caching, if possible without
having to rebuild or, failing that, without having to renounce to btrfs
and flexible reshaping.
I've
On 2017-09-15 15:41, Ulli Horlacher wrote:
On Fri 2017-09-15 (13:16), Austin S. Hemmelgarn wrote:
And then mount enryptfs:
mount.ecryptfs / /
This only possible by root.
For a user it is not possible to have access for his own snapshots.
Bad.
Which is why you use EncFS (which is a FUSE
On 2017-09-16 10:28, Ulli Horlacher wrote:
On Sat 2017-09-16 (13:47), Kai Krakow wrote:
Or you do "btrfs device stats .", it shows the associated device(s).
tux@xerus:/test/tux/zz: btrfs device stats .
ERROR: getting dev info for devstats failed: Operation not permitted
Not possible for a
On 2017-09-15 15:32, Ulli Horlacher wrote:
On Fri 2017-09-15 (13:08), Austin S. Hemmelgarn wrote:
On 2017-09-15 12:37, Ulli Horlacher wrote:
I have my btrfs filesystem mounted with option user_subvol_rm_allowed
tux@xerus: btrfs subvolume delete /test/tux/zz/.snapshot/2017-09-15_1824.test
On 2017-09-15 12:28, Ulli Horlacher wrote:
On Fri 2017-09-15 (12:15), Peter Becker wrote:
On 2017-09-15 12:01 GMT+02:00, Ulli Horlacher wrote:
On Fri 2017-09-15 (06:45), Andrei Borzenkov wrote:
The actual question is - do you need to mount each individual btrfs
On 2017-09-15 11:34, Adam Borowski wrote:
Hi!
Here's a patch set that allows changing the compression level for zstd,
currently at mount time only. I've played with it for a month, so despite
being a quick hack, it's reasonably well tested. Tested on 4.13 +
btrfs-for-4.14 only, though -- I've
On 2017-09-15 12:37, Ulli Horlacher wrote:
I have my btrfs filesystem mounted with option user_subvol_rm_allowed
tux@xerus: btrfs --version
btrfs-progs v4.4
tux@xerus: uname -a
Linux xerus 4.4.0-93-generic #116-Ubuntu SMP Fri Aug 11 21:17:51 UTC 2017
x86_64 x86_64 x86_64 GNU/Linux
tux@xerus:
On 2017-09-14 23:45, Andrei Borzenkov wrote:
On 2017-09-14 18:32, Hugo Mills wrote:
On Thu, Sep 14, 2017 at 04:57:39PM +0200, Ulli Horlacher wrote:
I use encfs on top of btrfs.
I can create btrfs snapshots, but I have no suggestive access to the files
in these snaspshots, because they look like:
On 2017-09-14 22:26, Tomasz Kłoczko wrote:
On 14 September 2017 at 19:53, Austin S. Hemmelgarn
<ahferro...@gmail.com> wrote:
[..]
While it's not for BTRFS< a tool called e4rat might be of interest to you
regarding this. It reorganizes files on an ext4 filesystem so that stuff
used by
On 2017-09-14 13:48, Tomasz Kłoczko wrote:
On 14 September 2017 at 16:24, Kai Krakow wrote:
[..]
Getting e.g. boot files into read order or at least nearby improves
boot time a lot. Similar for loading applications.
By how much it is possible to improve boot time?
Just
On 2017-09-14 03:54, Duncan wrote:
Austin S. Hemmelgarn posted on Tue, 12 Sep 2017 13:27:00 -0400 as
excerpted:
The tricky part though is that differing workloads are impacted
differently by fragmentation. Using just four generic examples:
* Mostly sequential write focused workloads (like
On 2017-09-13 10:47, Martin Raiber wrote:
Hi,
On 12.09.2017 23:13 Adam Borowski wrote:
On Tue, Sep 12, 2017 at 04:12:32PM -0400, Austin S. Hemmelgarn wrote:
On 2017-09-12 16:00, Adam Borowski wrote:
Noted. Both Marat's and my use cases, though, involve VMs that are off most
of the time
On 2017-09-12 20:52, Timofey Titovets wrote:
No, no, no, no...
No new ioctl, no change in fallocate.
First: a VM can do punch hole; if you use qemu, qemu knows how to do it.
Windows guests also know how to do it.
Different Hypervisor? -> google -> Make issue to support, all
Linux/Windows/Mac OS
On 2017-09-13 07:51, Pete wrote:
On 09/12/2017 01:16 PM, Austin S. Hemmelgarn wrote:
Diverting away from the original topic, what issues with overlayfs and
btrfs?
As mentioned, I thought whiteout support was missing, but if you're
using it without issue, I might be wrong.
Whiteout works
On 2017-09-12 17:13, Adam Borowski wrote:
On Tue, Sep 12, 2017 at 04:12:32PM -0400, Austin S. Hemmelgarn wrote:
On 2017-09-12 16:00, Adam Borowski wrote:
Noted. Both Marat's and my use cases, though, involve VMs that are off most
of the time, and at least for me, turned on only to test
On 2017-09-12 16:00, Adam Borowski wrote:
On Tue, Sep 12, 2017 at 03:11:52PM -0400, Austin S. Hemmelgarn wrote:
On 2017-09-12 14:43, Adam Borowski wrote:
On Tue, Sep 12, 2017 at 01:36:48PM -0400, Austin S. Hemmelgarn wrote:
On 2017-09-12 13:21, Adam Borowski wrote:
There's fallocate -d
On 2017-09-12 14:47, Christoph Hellwig wrote:
On Tue, Sep 12, 2017 at 08:43:59PM +0200, Adam Borowski wrote:
For now, though, I wonder -- should we send fine folks at util-linux a patch
to make fallocate -d restore mtime, either always or on an option?
Don't do that. Please just add a new
On 2017-09-12 14:43, Adam Borowski wrote:
On Tue, Sep 12, 2017 at 01:36:48PM -0400, Austin S. Hemmelgarn wrote:
On 2017-09-12 13:21, Adam Borowski wrote:
There's fallocate -d, but that for some reason touches mtime which makes
rsync go again. This can be handled manually but is still not nice
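One way to handle `fallocate -d` touching mtime, as discussed above, is to save and restore the timestamp around the call; the helper below is a sketch of that idea (the temp-file trick is an illustration, not a tool from the thread):

```shell
# Punch holes in zero-filled blocks without changing mtime, so rsync's
# quick check does not re-copy the file afterwards.
punch_holes_keep_mtime() {
  f=$1
  t=$(mktemp) || return 1
  touch -r "$f" "$t"    # remember the file's current mtime
  fallocate -d "$f"     # -d: deallocate ranges that are all zeroes
  touch -r "$t" "$f"    # put the old mtime back
  rm -f "$t"
}
```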
On 2017-09-12 13:21, Adam Borowski wrote:
On Tue, Sep 12, 2017 at 02:26:39PM +0300, Marat Khalili wrote:
On 12/09/17 14:12, Adam Borowski wrote:
Why would you need support in the hypervisor if cp --reflink=always is
enough?
+1 :)
But I've already found one problem: I use rsync snapshots for
On 2017-09-12 12:28, Ulli Horlacher wrote:
On Thu 2017-08-31 (09:05), Ulli Horlacher wrote:
When I do a
btrfs filesystem defragment -r /directory
does it defragment really all files in this directory tree, even if it
contains subvolumes?
The man page does not mention subvolumes on this topic.
On 2017-09-11 17:33, Duncan wrote:
Austin S. Hemmelgarn posted on Mon, 11 Sep 2017 11:11:01 -0400 as
excerpted:
On 2017-09-11 09:16, Marat Khalili wrote:
Patrik, Duncan, thank you for the help. The `btrfs replace start
/dev/sdb7 /dev/sdd7 /mnt/data` worked without a hitch (though I didn't
try
On 2017-09-11 17:36, Pete wrote:
On 09/11/2017 07:49 PM, Austin S. Hemmelgarn wrote:
Unfortunately, I don't know of any overlay mount implementation that
works correctly and reliably with BTRFS. I know for a fact that
OverlayFS (the upstream in-kernel implementation) does not work, and I
On 2017-09-11 14:17, Senén Vidal Blanco wrote:
I am trying to implement a system that stores the data in a unit (A) with
BTRFS format that is untouchable and that future files and folders created or
modified are stored in another physical unit (B) with BTRFS format.
Each year the new files will
On 2017-09-11 09:16, Marat Khalili wrote:
Patrik, Duncan, thank you for the help. The `btrfs replace start
/dev/sdb7 /dev/sdd7 /mnt/data` worked without a hitch (though I didn't
try to reboot yet, still have grub/efi/several mdadm partitions to copy).
It also worked much faster than mdadm
On 2017-09-10 02:33, Marat Khalili wrote:
It doesn't need replaced disk to be readable, right? Then what prevents same
procedure to work without a spare bay?
In theory, nothing.
In practice, there are reliability issues with mounting a filesystem
degraded (and you should be avoiding running
On 2017-09-08 16:54, Tomasz Kłoczko wrote:
On 8 September 2017 at 20:06, Austin S. Hemmelgarn <ahferro...@gmail.com> wrote:
[..]
If you don't like awk you can use jq, sed, perl, python, ruby or
whatever you have/like/want.
And which command is more readable? Something like the
On 2017-09-08 14:09, Tomasz Kłoczko wrote:
On 8 September 2017 at 17:39, David Sterba wrote:
[..]
My plan is to introduce a global options to set various this, also the
output format, eg.
$ btrfs --format=json subvolume list
that would dump the list in json