On 2019-10-22 18:56, Christian Pernegger wrote:
[Please CC me, I'm not on the list.]
On Mon, Oct 21, 2019 at 15:34, Qu Wenruo wrote:
[...] just fstrim wiped some old tree blocks. But maybe it's some unfortunate
race, that fstrim trimmed some tree blocks still in use.
Forgive me for as
On 2019-10-22 06:01, Qu Wenruo wrote:
On 2019/10/22 5:47 PM, Tobias Reinhard wrote:
Hi,
I noticed that if you punch a hole in the middle of a file the available
filesystem space seems not to increase.
Kernel is 5.2.11
To reproduce:
->mkfs.btrfs /dev/loop1 -f
btrfs-progs v4.15.1
See http:/
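The punched-hole behaviour in the report above can be checked without a loop device. A minimal Linux-only sketch (scratch file and sizes are illustrative, not from the report) that punches a hole and verifies the file's logical size is unchanged while the punched range reads back as zeros:

```python
import ctypes
import os
import tempfile

# fallocate(2) mode flags, from <linux/falloc.h>.
FALLOC_FL_KEEP_SIZE = 0x01
FALLOC_FL_PUNCH_HOLE = 0x02

libc = ctypes.CDLL("libc.so.6", use_errno=True)
libc.fallocate.argtypes = [ctypes.c_int, ctypes.c_int,
                           ctypes.c_long, ctypes.c_long]

fd, path = tempfile.mkstemp()
os.write(fd, os.urandom(65536))  # 64 KiB of non-zero data

# Punch a 16 KiB hole in the middle. PUNCH_HOLE requires KEEP_SIZE,
# so st_size must not change even though blocks are deallocated.
ret = libc.fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                     16384, 16384)
assert ret == 0, os.strerror(ctypes.get_errno())

print(os.stat(path).st_size)                 # 65536, unchanged
os.lseek(fd, 16384, os.SEEK_SET)
print(os.read(fd, 16384) == b"\0" * 16384)   # True: hole reads as zeros

os.close(fd)
os.unlink(path)
```

Whether the deallocated blocks actually show up as free space is then a question of the filesystem's accounting, which is what the thread is about.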
On 2019-10-21 09:02, Christian Pernegger wrote:
[Please CC me, I'm not on the list.]
On Mon, Oct 21, 2019 at 13:47, Austin S. Hemmelgarn wrote:
I've [worked with fs clones] like this dozens of times on single-device volumes
with exactly zero issues.
Thank you, I
On 2019-10-21 06:47, Christian Pernegger wrote:
[Please CC me, I'm not on the list.]
On Sun, Oct 20, 2019 at 12:28, Qu Wenruo wrote:
Question: Can I work with the mounted backup image on the machine that
also contains the original disc? I vaguely recall something about
btrfs really not l
On 2019-10-10 17:21, Ulli Horlacher wrote:
On Thu 2019-10-10 (20:47), Kai Krakow wrote:
I run into the problem that "rsync -ax" sees btrfs subvolumes as "other
filesystems" and ignores them.
I worked around it by mounting the btrfs-pool at a special directory:
mount -o subvolid=0 /dev/disk/b
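The workaround quoted above can also be made persistent in /etc/fstab. An illustrative entry (device path and mount point are placeholders, not from the post):

```
# /etc/fstab -- mount the top-level subvolume at a dedicated directory
# so rsync -x can be pointed at one tree containing all subvolumes.
/dev/disk/by-label/pool  /mnt/btrfs-pool  btrfs  subvolid=0,noatime  0  0
```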
On 2019-10-03 13:51, Graham Cobb wrote:
Hi,
I seem to have another case where scrub gets confused when it is
cancelled and restarted many times (or, maybe, it is my error or
something). I will look into it further but, instead of just hacking
away at my script to work out what is going on, I tho
On 2019-09-25 00:25, Nick Bowler wrote:
On Tue, Sep 24, 2019, 18:34 Chris Murphy wrote:
On Tue, Sep 24, 2019 at 4:04 PM Nick Bowler wrote:
- Running Linux 5.2.14, I pushed this system to OOM; the oom killer
ran and killed some userspace tasks. At this point many of the
remaining tasks were
On 2019-09-13 12:54, General Zed wrote:
Quoting "Austin S. Hemmelgarn" :
On 2019-09-12 18:21, General Zed wrote:
Quoting "Austin S. Hemmelgarn" :
On 2019-09-12 15:18, webmas...@zedlx.com wrote:
Quoting "Austin S. Hemmelgarn" :
On 2019-09-11 17:37, webmas
On 2019-09-12 18:21, General Zed wrote:
Quoting "Austin S. Hemmelgarn" :
On 2019-09-12 15:18, webmas...@zedlx.com wrote:
Quoting "Austin S. Hemmelgarn" :
On 2019-09-11 17:37, webmas...@zedlx.com wrote:
Quoting "Austin S. Hemmelgarn" :
On 2019-09-11 13
On 2019-09-12 18:57, General Zed wrote:
Quoting Chris Murphy :
On Thu, Sep 12, 2019 at 3:34 PM General Zed
wrote:
Quoting Chris Murphy :
> On Thu, Sep 12, 2019 at 1:18 PM wrote:
>>
>> It is normal and common for defrag operation to use some disk space
>> while it is running. I estimate t
On 2019-09-12 19:54, Zygo Blaxell wrote:
On Thu, Sep 12, 2019 at 06:57:26PM -0400, General Zed wrote:
Quoting Chris Murphy :
On Thu, Sep 12, 2019 at 3:34 PM General Zed wrote:
Quoting Chris Murphy :
On Thu, Sep 12, 2019 at 1:18 PM wrote:
It is normal and common for defrag operation t
On 2019-09-12 15:18, webmas...@zedlx.com wrote:
Quoting "Austin S. Hemmelgarn" :
On 2019-09-11 17:37, webmas...@zedlx.com wrote:
Quoting "Austin S. Hemmelgarn" :
On 2019-09-11 13:20, webmas...@zedlx.com wrote:
Quoting "Austin S. Hemmelgarn" :
On 2019-09
On 2019-09-11 17:37, webmas...@zedlx.com wrote:
Quoting "Austin S. Hemmelgarn" :
On 2019-09-11 13:20, webmas...@zedlx.com wrote:
Quoting "Austin S. Hemmelgarn" :
On 2019-09-10 19:32, webmas...@zedlx.com wrote:
Quoting "Austin S. Hemmelgarn" :
Give
On 2019-09-11 13:20, webmas...@zedlx.com wrote:
Quoting "Austin S. Hemmelgarn" :
On 2019-09-10 19:32, webmas...@zedlx.com wrote:
Quoting "Austin S. Hemmelgarn" :
=== I CHALLENGE you and anyone else on this mailing list: ===
- Show me an example where splittin
On 2019-09-10 19:32, webmas...@zedlx.com wrote:
Quoting "Austin S. Hemmelgarn" :
Defrag may break up extents. Defrag may fuse extents. But it shouldn't
ever unshare extents.
Actually, splitting or merging extents will unshare them in a large
majority of cases.
Ok, this po
On 2019-09-09 15:26, webmas...@zedlx.com wrote:
This post is a reply to Remi Gauvin's post, but the email got lost so I
can't reply to him directly.
Remi Gauvin wrote on 2019-09-09 17:24 :
On 2019-09-09 11:29 a.m., Graham Cobb wrote:
and does anyone really care about
defrag any more?).
On 2019-09-09 07:25, zedlr...@server53.web-hosting.com wrote:
Quoting Qu Wenruo :
1) Full online backup (or copy, whatever you want to call it)
btrfs backup [-f]
- backs up a given btrfs filesystem to a partition
(with all subvolumes).
Why not just btrfs send?
Or you want to keep the w
On 2019-09-04 08:46, Jorge Fernandez Monteagudo wrote:
Hi Austin!
What you want here is mkfs.btrfs with the `-r` and `--shrink` options.
So, for your specific example, replace the genisoimage command from your
first example with this and update the file names appropriately:
# mkfs.btrfs -r
On 2019-09-04 02:23, Jorge Fernandez Monteagudo wrote:
Hi all!
Is it possible to get a crypted btrfs in a file? Currently I'm doing this to
get a crypted ISO filesystem in a file:
# genisoimage -R -J -iso-level 4 -o iso.img
# fallocate iso-crypted.img -l $(stat --printf="%s" iso.img)
# crypts
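The fallocate step in the recipe above just pre-creates a container file the same size as the source image; the cryptsetup step is cut off and is not reconstructed here. The sizing trick alone, sketched with stand-in scratch files:

```python
import os
import tempfile

# Stand-in for iso.img: any file whose size we want to match.
src_fd, src = tempfile.mkstemp()
os.write(src_fd, os.urandom(123456))

# Equivalent of: fallocate iso-crypted.img -l $(stat --printf="%s" iso.img)
dst_fd, dst = tempfile.mkstemp()
os.posix_fallocate(dst_fd, 0, os.stat(src).st_size)

print(os.stat(dst).st_size)  # 123456, same as the source

for fd, p in ((src_fd, src), (dst_fd, dst)):
    os.close(fd)
    os.unlink(p)
```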
On 2019-09-01 21:09, Chris Murphy wrote:
I'm still mostly convinced the policy questions and management should
be dealt with a btrfsd userspace daemon.
Btrfs kernel code itself tolerates quite a lot of read and write
errors, where a userspace service could say, yeah forget that we're
moving over
On 2019-08-23 13:08, Adam Borowski wrote:
the improved collision resistance of xxhash64 is not a reason: if you
intend to dedupe, you want a crypto hash so you don't need to verify.
The improved collision resistance is a roughly 10 orders of magnitude
reduction in the chance of a collision.
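The "roughly 10 orders of magnitude" figure can be sanity-checked with birthday-bound arithmetic: for n random blocks and a b-bit hash, P(collision) is approximately n² / 2^(b+1), so moving from a 32-bit to a 64-bit digest divides the probability by 2³². A quick check (the block count is illustrative, not from the post):

```python
import math

def collision_probability(n_blocks: int, hash_bits: int) -> float:
    """Birthday-bound approximation: P ~ n^2 / 2^(bits + 1)."""
    return n_blocks ** 2 / 2 ** (hash_bits + 1)

n = 10 ** 6  # one million hashed blocks, an illustrative workload
p32 = collision_probability(n, 32)  # crc32c-sized digest
p64 = collision_probability(n, 64)  # xxhash64-sized digest

# The ratio is exactly 2^32, i.e. ~9.6 decimal orders of magnitude.
print(math.log10(p32 / p64))  # ~9.63
```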
On 2019-07-25 14:37, David Sterba wrote:
On Thu, Jul 18, 2019 at 02:27:49PM +0800, Qu Wenruo wrote:
RAID10 can accept as much as half of its disks to be missing, as long as
each sub stripe still has a good mirror.
Can you please make a test case for that?
I think the number of devices that ca
On 2019-06-25 06:41, Roman Mamedov wrote:
Hello,
I have a number of VM images in sparse NOCOW files, with:
# du -B M -sc *
...
46030M total
and:
# du -B M -sc --apparent-size *
...
96257M total
But despite there being nothing else on the filesystem and no snapsh
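The gap between the two du totals above is the hallmark of sparse files: st_size (what --apparent-size reports) counts the holes, st_blocks (what plain du reports) does not. A minimal demonstration with a scratch file, not the poster's VM images:

```python
import os
import tempfile

# A 1 GiB sparse file: large apparent size, (almost) no allocated blocks.
fd, path = tempfile.mkstemp()
os.truncate(path, 1 << 30)

st = os.stat(path)
print(st.st_size)          # 1073741824 -> what `du --apparent-size` counts
print(st.st_blocks * 512)  # 0 (or near 0) -> what plain `du` counts

os.close(fd)
os.unlink(path)
```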
On 2019-06-18 15:37, Stéphane Lesimple wrote:
June 18, 2019 9:06 PM, "Austin S. Hemmelgarn" wrote:
On 2019-06-18 14:26, Stéphane Lesimple wrote:
[...]
I don't need to have a perfectly balanced FS, I just want all the space
to be allocatable.
I tried using the -ddevid
On 2019-06-18 14:26, Stéphane Lesimple wrote:
Hello,
I've been a btrfs user for quite a number of years now, but it seems I
need the wisdom of the btrfs gurus on this one!
I have a 5-hdd btrfs raid1 setup with 4x3T+1x10T drives.
A few days ago, I replaced one of the 3T by a new 10T, running
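For btrfs raid1, the allocator always places the two chunk copies on the two devices with the most free space, so the usable capacity of a mixed-size set is min(total/2, total − largest). A small calculator sketching that rule of thumb (this is the capacity formula, not the kernel's allocator):

```python
def raid1_usable(devices_tb):
    """Usable raid1 capacity: every chunk needs two distinct devices,
    so capacity is capped both by total/2 and by how much the other
    devices can mirror against the largest one."""
    total = sum(devices_tb)
    largest = max(devices_tb)
    return min(total / 2, total - largest)

# The 4x3T + 1x10T layout from the post:
print(raid1_usable([3, 3, 3, 3, 10]))   # 11.0
# After replacing one 3T with a second 10T (3x3T + 2x10T):
print(raid1_usable([3, 3, 3, 10, 10]))  # 14.5
```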
On 2019-06-18 14:57, Hugo Mills wrote:
On Tue, Jun 18, 2019 at 02:50:34PM -0400, Austin S. Hemmelgarn wrote:
On 2019-06-18 14:45, Hugo Mills wrote:
On Tue, Jun 18, 2019 at 08:26:32PM +0200, Stéphane Lesimple wrote:
I've been a btrfs user for quite a number of years now, but it seems
I
On 2019-06-18 14:45, Hugo Mills wrote:
On Tue, Jun 18, 2019 at 08:26:32PM +0200, Stéphane Lesimple wrote:
I've been a btrfs user for quite a number of years now, but it seems
I need the wisdom of the btrfs gurus on this one!
I have a 5-hdd btrfs raid1 setup with 4x3T+1x10T drives.
A few days
On 2019-06-12 16:02, Chris Murphy wrote:
On Wed, Jun 12, 2019 at 2:07 AM Neal Gompa wrote:
I mean, yes... FHS is definitely unhelpful, but Apple conforms to FHS
pretty well, even though it's not obvious that it does. Apple just has
the benefit of being able to shuffle things around without peop
On 2019-05-29 21:13, Newbugreport wrote:
I'm experimenting with the rsync algorithm for btrfs deduplication. Every other
deduplication tool I've seen works against whole files. I'm concerned about
deduping chunks under 4k and about files with scattered extents.
AFAIK, regions smaller than the F
On 2019-05-23 13:31, Martin Raiber wrote:
On 23.05.2019 19:13 Austin S. Hemmelgarn wrote:
On 2019-05-23 12:24, Chris Murphy wrote:
On Thu, May 23, 2019 at 5:19 AM Austin S. Hemmelgarn
wrote:
On 2019-05-22 14:46, Cerem Cem ASLAN wrote:
Could you confirm or disclaim the following explanation
On 2019-05-23 12:24, Chris Murphy wrote:
On Thu, May 23, 2019 at 5:19 AM Austin S. Hemmelgarn
wrote:
On 2019-05-22 14:46, Cerem Cem ASLAN wrote:
Could you confirm or disclaim the following explanation:
https://unix.stackexchange.com/a/520063/65781
Aside from what Hugo mentioned (which is
On 2019-05-23 12:46, Chris Murphy wrote:
On Thu, May 23, 2019 at 10:34 AM Adam Borowski wrote:
On Thu, May 23, 2019 at 10:24:28AM -0600, Chris Murphy wrote:
On Thu, May 23, 2019 at 5:19 AM Austin S. Hemmelgarn
BTRFS explicitly requests write barriers to prevent that type of
reordering of
On 2019-05-22 14:46, Cerem Cem ASLAN wrote:
Could you confirm or disclaim the following explanation:
https://unix.stackexchange.com/a/520063/65781
Aside from what Hugo mentioned (which is correct), it's worth mentioning
that the example listed in the answer of how hardware issues could screw
t
On 2019-05-20 07:15, Newbugreport wrote:
Patrik, thank you. I've enabled the SAMBA module, which may help in the future.
Does the GUI file manager (i.e. Nautilus) need special support?
It shouldn't (Windows' default file manager doesn't, and most stuff on
Linux uses Samba so it shouldn't either
On 2019-05-17 14:36, Diego Calleja wrote:
On Wednesday, May 15, 2019 at 19:27:21 (CEST), David Sterba wrote:
Once the code is ready for more checksum algos, we'll pick candidates
and my idea is to select 1 fast (not necessarily strong, but better
than crc32c) and 1 strong (but slow, and sha
On 2019-05-20 03:47, Johannes Thumshirn wrote:
On Sat, May 18, 2019 at 02:38:08AM +0200, Adam Borowski wrote:
On Fri, May 17, 2019 at 09:07:03PM +0200, Johannes Thumshirn wrote:
On Fri, May 17, 2019 at 08:36:23PM +0200, Diego Calleja wrote:
If btrfs needs an algorithm with good performance/sec
On 2019-04-29 13:31, Andrei Borzenkov wrote:
29.04.2019 20:20, Austin S. Hemmelgarn wrote:
As of today there is no provision for automatic mounting of incomplete
multi-device btrfs in degraded mode. Actually, with systemd it is flat
impossible to mount incomplete btrfs because standard
On 2019-04-29 12:16, Hendrik Friedel wrote:
Hello,
With "single" data profile you won't lose filesystem, but you will
irretrievably lose any data on the missing drive. Also "single" profile
does not support auto-healing (repairing of bad copy from good copy). If
this is acceptable to you, then y
On 2019-04-28 16:14, Andrei Borzenkov wrote:
28.04.2019 22:35, Hendrik Friedel wrote:
Hello,
I intend to move to BTRFS and of course I have some data already.
I currently have several single 4TB drives and I would like to move the
Data onto new drives (2*8TB). I need no raid, as I prefer a back
On 2019-04-28 12:18, Alberto Bursi wrote:
I am looking for a way to mimic mdadm's behaviour and have btrfs mount
a degraded array on boot as long as it's not broken (specific use case:
RAID1 with a single disk missing/dead)
So far the only thing I could think of (and I've seen suggested
elsewher
On 2019-04-08 09:30, Leonid Bloch wrote:
On 4/8/19 3:44 PM, Austin S. Hemmelgarn wrote:
On 2019-04-08 07:27, Leonid Bloch wrote:
Hi List,
Can you suggest a way of cryptographically verifying the content of a
btrfs subvolume, besides the naïve approach, of running a cryptographic
hash function
On 2019-04-08 07:27, Leonid Bloch wrote:
Hi List,
Can you suggest a way of cryptographically verifying the content of a
btrfs subvolume, besides the naïve approach, of running a cryptographic
hash function on the output of btrfs send?
Running BTRFS on top of dm-integrity and dm-crypt with them s
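The "naïve approach" mentioned above amounts to hashing the send stream. A sketch of the hashing side with hashlib (the input here is a stand-in byte stream, not actual btrfs send output):

```python
import hashlib
import io

def stream_digest(stream, algo="sha256", chunk_size=1 << 16) -> str:
    """Hash a byte stream incrementally, e.g. the stdout of
    `btrfs send <subvol>` piped into this process."""
    h = hashlib.new(algo)
    for chunk in iter(lambda: stream.read(chunk_size), b""):
        h.update(chunk)
    return h.hexdigest()

# Stand-in stream; with a real subvolume you would wrap
# subprocess.Popen(["btrfs", "send", path], stdout=PIPE).stdout.
print(stream_digest(io.BytesIO(b"abc")))
# ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad
```

Note that, as the thread goes on to discuss, a send stream is not byte-for-byte deterministic across runs, which is why this is called the naïve approach.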
On 2019-04-03 14:17, Hendrik Friedel wrote:
Hello,
thanks for your reply.
3) Even more, it would be good, if btrfs would disable the write cache
in that case, so that one does not need to rely on the user
Personally speaking, if user really believes it's write cache causing
the problem or wan
On 2019-04-01 15:22, Hendrik Friedel wrote:
Dear btrfs-team,
I am aware, that barriers are essential for btrfs [1].
I have some questions on that topic:
1) I am not aware how to determine, whether barriers are supported,
except for searching dmesg for a message that barriers are disabled. Is
t
On 2019-03-07 15:07, Zygo Blaxell wrote:
On Mon, Mar 04, 2019 at 04:34:39PM +0100, Christoph Anton Mitterer wrote:
Hey.
Thanks for your elaborate explanations :-)
On Fri, 2019-02-15 at 00:40 -0500, Zygo Blaxell wrote:
The problem occurs only on reads. Data that is written to disk will
be O
On 2019-02-24 12:32, Nemo wrote:
Hi,
I had a RAID1 disk failure recently, and a limitation in number of SATA
connectors meant I could not do a live replace. I'm still in the
progress of resolving the issue, but posting some feedback here on the
issues I faced and what could have helped.
**What
On 2019-02-15 14:50, Zygo Blaxell wrote:
On Fri, Feb 15, 2019 at 11:54:57AM -0500, Austin S. Hemmelgarn wrote:
On 2019-02-15 10:40, Brian B wrote:
It looks like the btrfs code currently uses the total space available on
a disk to determine where it should place the two copies of a file in
On 2019-02-15 10:40, Brian B wrote:
It looks like the btrfs code currently uses the total space available on
a disk to determine where it should place the two copies of a file in
RAID1 mode. Wouldn't it make more sense to use the _percentage_ of free
space instead of the number of free bytes?
F
On 2019-02-11 22:16, Sébastien Luttringer wrote:
Hello,
The context is a BTRFS filesystem on top of an md device (raid5 on 6 disks).
System is an Arch Linux and the kernel was a vanilla 4.20.2.
# btrfs fi us /home
Overall:
Device size: 27.29TiB
Device allocated:
On 2019-02-10 13:34, Chris Murphy wrote:
On Sat, Feb 9, 2019 at 5:13 AM waxhead wrote:
Understood, but that is not quite what I meant - let me rephrase...
If BTRFS still can't mount, why would it blindly accept a previously
non-existing disk to take part of the pool?!
It doesn't do it blindl
On 2019-02-08 13:10, waxhead wrote:
Austin S. Hemmelgarn wrote:
On 2019-02-07 13:53, waxhead wrote:
Austin S. Hemmelgarn wrote:
On 2019-02-07 06:04, Stefan K wrote:
Thanks, with degraded as kernel parameter and also in the fstab
it works like expected
That should be the normal
that?
Because we currently don't have any code that does it. Part of the
problem is that we're a lot more tolerant of intermittent I/O errors
than LVM and MD are, so we can't reliably tell if a device is truly gone
or not.
On Thursday, February 7, 2019 2:39:34 PM CET A
On 2019-02-07 23:51, Andrei Borzenkov wrote:
07.02.2019 22:39, Austin S. Hemmelgarn wrote:
The issue with systemd is that if you pass 'degraded' on most systemd
systems, and devices are missing when the system tries to mount the
volume, systemd won't mount it because it does
On 2019-02-07 13:53, waxhead wrote:
Austin S. Hemmelgarn wrote:
On 2019-02-07 06:04, Stefan K wrote:
Thanks, with degraded as kernel parameter and also in the fstab it
works like expected
That should be the normal behaviour, cause a server must be up and
running, and I don't care
On 2019-02-07 06:04, Stefan K wrote:
Thanks, with degraded as kernel parameter and also in the fstab it works like
expected
That should be the normal behaviour, cause a server must be up and running, and
I don't care about a device loss, that's why I use a RAID1. The device-loss
problem can
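The setup described above (degraded on the kernel command line for the root filesystem, and as a mount option in fstab for other volumes) looks roughly like this; paths and the GRUB mechanism are illustrative, and later messages in this thread discuss why mounting degraded unconditionally is risky:

```
# /etc/default/grub -- let the initramfs mount the btrfs root degraded:
GRUB_CMDLINE_LINUX="rootflags=degraded"

# /etc/fstab -- same for a non-root raid1 filesystem (device is an example):
/dev/disk/by-label/data  /data  btrfs  degraded,noatime  0  2
```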
On 2019-02-04 12:47, Patrik Lundquist wrote:
On Sun, 3 Feb 2019 at 01:24, Chris Murphy wrote:
1. At least with raid1/10, a particular device can only be mounted
rw,degraded one time and from then on it fails, and can only be ro
mounted. There are patches for this but I don't think they've been
On 2019-01-31 07:38, Ronald Schaten wrote:
Hello everybody...
This is my first mail to this list, and -- as much as I'd like to be --
I'm not a kernel developer. So please forgive me if this isn't the right
place for questions like this. I'm thankful for any pointer into the
right direction.
T
On 2019-01-30 10:26, Christoph Anton Mitterer wrote:
On Wed, 2019-01-30 at 07:58 -0500, Austin S. Hemmelgarn wrote:
Running dm-integrity without a journal is roughly equivalent to
using
the nobarrier mount option (the journal is used to provide the same
guarantees that barriers do). IOW, don
On 2019-01-29 18:15, Hans van Kranenburg wrote:
Hi,
Thought experiment time...
I have an HP z820 workstation here (with ECC memory, yay!) and 4x250G
10k SAS disks (and some spare disks). It's donated hardware, and I'm
going to use it to replace the current server in the office of a
non-profit o
On 2019-01-16 13:15, Chris Murphy wrote:
On Wed, Jan 16, 2019 at 7:58 AM Stefan K wrote:
:(
that means when one jbod fail its there is no guarantee that it works fine?
like in zfs? well that sucks
Didn't anyone think to program it that way?
The mirroring is a function of the block group,
On 12/23/2018 1:16 AM, Adam Borowski wrote:
On Sun, Dec 23, 2018 at 12:24:02AM +, Paul Jones wrote:
IMHO the more pertinent question is :
If a file has portions which are not easily compressible does that imply all
future writes are also incompressible. IMO no, so I think what will be prude
On 12/19/2018 7:57 PM, Qu Wenruo wrote:
On 2018/12/19 11:41 PM, devz...@web.de wrote:
does compress-force really force compression?
It should.
The only exception is block size.
If the file is smaller than the sector size (4K for x86_64), then no
compression no matter whatever the mount opti
On 2018-12-13 05:39, Remi Gauvin wrote:
On 2018-12-13 02:29 AM, Adam Borowski wrote:
For btrfs, a block device is a block device, it's not "racist".
You can freely mix and/or replace. If you want to, say, extend a SD
card with NBD to remote spinning rust, it works well -- tested :p
The pos
On 2018-12-07 01:43, Doni Crosby wrote:
This is qemu-kvm? What's the cache mode being used? It's possible the
usual write guarantees are thwarted by VM caching.
Yes it is a proxmox host running the system so it is a qemu vm, I'm
unsure on the caching situation.
On the note of QEMU and the cache
On 2018-12-06 23:09, Andrei Borzenkov wrote:
06.12.2018 16:04, Austin S. Hemmelgarn wrote:
* On SCSI devices, a discard operation translates to a SCSI UNMAP
command. As pointed out by Ronnie Sahlberg in his reply, this command
is purely advisory, may not result in any actual state change on
On 2018-12-06 01:11, Robert White wrote:
(1) Automatic and selective wiping of unused and previously used disk
blocks is a good security measure, particularly when there is an
encryption layer beneath the file system.
(2) USB attached devices _never_ support TRIM and they are the most
likely
On 2018-12-05 14:50, Roman Mamedov wrote:
Hello,
To migrate my FS to a different physical disk, I have added a new empty device
to the FS, then ran the remove operation on the original one.
Now my FS has only devid 2:
Label: 'p1' uuid: d886c190-b383-45ba-9272-9f00c6a10c50
Total device
On 2018-12-04 08:37, Graham Cobb wrote:
On 04/12/2018 12:38, Austin S. Hemmelgarn wrote:
In short, USB is _crap_ for fixed storage, don't use it like that, even
if you are using filesystems which don't appear to complain.
That's useful advice, thanks.
Do you (or anyone
On 2018-12-04 00:37, Tomasz Chmielewski wrote:
I'm trying to use btrfs on an external USB drive, without much success.
When the drive is connected for 2-3+ days, the filesystem gets remounted
readonly, with BTRFS saying "IO failure":
[77760.444607] BTRFS error (device sdb1): bad tree block st
On 2018-11-15 13:39, Juan Alberto Cirez wrote:
Is BTRFS mature enough to be deployed on a production system to underpin
the storage layer of a 16+ ipcameras-based NVR (or VMS if you prefer)?
For NVR, I'd say no. BTRFS does pretty horribly with append-only
workloads, even if they are WORM style.
On 11/13/2018 10:31 AM, David Sterba wrote:
On Mon, Oct 01, 2018 at 09:31:04PM +0800, Anand Jain wrote:
+ /*
+ * we are going to replace the device path, make sure its the
+ * same device if the device mounted
+ */
+ if (device->bdev) {
+ struct b
On 11/4/2018 11:44 AM, waxhead wrote:
Sterling Windmill wrote:
Out of curiosity, what led to you choosing RAID1 for data but RAID10
for metadata?
I've flip flipped between these two modes myself after finding out
that BTRFS RAID10 doesn't work how I would've expected.
Wondering what made you c
On 10/30/2018 12:10 PM, Ulli Horlacher wrote:
On Mon 2018-10-29 (17:57), Remi Gauvin wrote:
On 2018-10-29 02:11 PM, Ulli Horlacher wrote:
I want to know how many free space is left and have problems in
interpreting the output of:
btrfs filesystem usage
btrfs filesystem df
btrfs filesystem sh
On 18/10/2018 08.02, Anton Shepelev wrote:
I wrote:
What may be the reason of a CRC mismatch on a BTRFS file in
a virtual machine:
csum failed ino 175524 off 1876295680 csum 451760558
expected csum 1446289185
Shall I seek the culprit in the host machine or in the
guest one? Supposing the hos
On 2018-10-16 16:27, Chris Murphy wrote:
On Tue, Oct 16, 2018 at 9:42 AM, Austin S. Hemmelgarn
wrote:
On 2018-10-16 11:30, Anton Shepelev wrote:
Hello, all
What may be the reason of a CRC mismatch on a BTRFS file in
a virtual machine:
csum failed ino 175524 off 1876295680 csum
On 2018-10-16 11:30, Anton Shepelev wrote:
Hello, all
What may be the reason of a CRC mismatch on a BTRFS file in
a virtual machine:
csum failed ino 175524 off 1876295680 csum 451760558
expected csum 1446289185
Shall I seek the culprit in the host machine or in the guest
one? Supposin
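The csum values in the error above are btrfs's default data checksum, CRC-32C (Castagnoli). A minimal bit-by-bit sketch, handy for checking a reported csum against on-disk data (this is the plain algorithm, not the kernel's table-driven/SSE4.2 implementation):

```python
def crc32c(data: bytes, crc: int = 0) -> int:
    """CRC-32C (Castagnoli), reflected polynomial 0x82F63B78.
    Pass a previous result as `crc` to checksum incrementally."""
    crc ^= 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

# Standard check value for the ASCII string "123456789":
print(hex(crc32c(b"123456789")))  # 0xe3069283
```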
On 2018-10-15 10:42, Anton Shepelev wrote:
Hugo Mills to Anton Shepelev:
While trying to resolve free space problems, and found
that I cannot interpret the output of:
btrfs filesystem show
Label: none uuid: 8971ce5b-71d9-4e46-ab25-ca37485784c8
Total devices 1 FS bytes used 34.06GiB
On 2018-10-13 18:28, Chris Murphy wrote:
Is it practical and desirable to make Btrfs based OS installation
images reproducible? Or is Btrfs simply too complex and
non-deterministic? [1]
The main three problems with Btrfs right now for reproducibility are:
a. many objects have uuids other than th
On 2018-10-14 07:08, waxhead wrote:
In case BTRFS fails to WRITE to a disk. What happens?
Does the bad area get mapped out somehow? Does it try again until it
succeed or until it "times out" or reach a threshold counter?
Does it eventually try to write to a different disk (in case of using
the
On 2018-10-07 09:37, Holger Hoffstätte wrote:
The Prometheus statistics collection/aggregation/monitoring/alerting system
[1] is quite popular, easy to use and will probably be the basis for the
upcoming OpenMetrics "standard" [2].
Prometheus collects metrics by polling host-local "exporters" t
On 2018-10-05 20:34, Duncan wrote:
Wilson, Ellis posted on Fri, 05 Oct 2018 15:29:52 + as excerpted:
Is there any tuning in BTRFS that limits the number of outstanding reads
at a time to a small single-digit number, or something else that could
be behind small queue depths? I can't otherwi
On 2018-10-01 04:56, Anand Jain wrote:
Its not that impossible to imagine that a device OR a btrfs image is
been copied just by using the dd or the cp command. Which in case both
the copies of the btrfs will have the same fsid. If on the system with
automount enabled, the copied FS gets scanned.
On 2018-09-19 15:08, Goffredo Baroncelli wrote:
On 18/09/2018 19.15, Goffredo Baroncelli wrote:
b. The bootloader code, would have to have sophisticated enough Btrfs
knowledge to know if the grubenv has been reflinked or snapshot,
because even if +C, it may not be valid to overwrite, and COW mus
On 2018-09-18 15:00, Chris Murphy wrote:
On Tue, Sep 18, 2018 at 12:25 PM, Austin S. Hemmelgarn
wrote:
It actually is independent of /boot already. I've got it running just fine
on my laptop off of the EFI system partition (which is independent of my
/boot partition), and thus have no i
On 2018-09-18 14:57, Chris Murphy wrote:
On Tue, Sep 18, 2018 at 12:16 PM, Andrei Borzenkov wrote:
18.09.2018 08:37, Chris Murphy wrote:
The patches aren't upstream yet? Will they be?
I do not know. Personally I think much easier is to make grub location
independent of /boot, allowing gru
On 2018-09-18 14:38, Andrei Borzenkov wrote:
18.09.2018 21:25, Austin S. Hemmelgarn wrote:
On 2018-09-18 14:16, Andrei Borzenkov wrote:
18.09.2018 08:37, Chris Murphy wrote:
On Mon, Sep 17, 2018 at 11:24 PM, Andrei Borzenkov
wrote:
18.09.2018 07:21, Chris Murphy wrote:
On Mon, Sep 17, 2018
On 2018-09-18 14:16, Andrei Borzenkov wrote:
18.09.2018 08:37, Chris Murphy wrote:
On Mon, Sep 17, 2018 at 11:24 PM, Andrei Borzenkov wrote:
18.09.2018 07:21, Chris Murphy wrote:
On Mon, Sep 17, 2018 at 9:44 PM, Chris Murphy wrote:
https://btrfs.wiki.kernel.org/index.php/FAQ#Does_grub_suppo
On 2018-09-06 03:23, Nathan Dehnel wrote:
https://lwn.net/Articles/287289/
In 2008, HP released the source code for a filesystem called advfs so
that its features could be incorporated into linux filesystems. Advfs
had a feature where a group of file writes were an atomic transaction.
https://w
On 2018-08-30 13:13, Axel Burri wrote:
On 29/08/2018 21.02, Austin S. Hemmelgarn wrote:
On 2018-08-29 13:24, Axel Burri wrote:
This patch allows to build distinct binaries for specific btrfs
subcommands, e.g. "btrfs-subvolume-show" which would be identical to
"btrfs
On 2018-08-29 13:24, Axel Burri wrote:
This patch allows to build distinct binaries for specific btrfs
subcommands, e.g. "btrfs-subvolume-show" which would be identical to
"btrfs subvolume show".
Motivation:
While btrfs-progs offer the all-inclusive "btrfs" command, it gets
pretty cumbersome t
On 2018-08-29 08:33, Nikolay Borisov wrote:
On 29.08.2018 15:09, Qu Wenruo wrote:
On 2018/8/29 4:35 PM, Nikolay Borisov wrote:
Here is the userspace tooling support for utilising the new metadata_uuid field,
enabling the change of fsid without having to rewrite every metadata block. This
pat
It looks like that cannot be easily disabled, and without the
apt-btrfs-snapshot package scheduling cleanups it's not ever
automatically removed?
> just google it, there is no mention of this behaviour
> On Tue, Aug 28, 2018 at 19:07, Austin S. Hemmelgarn wrote:
On 2018-08-28 12:05, Noah Massey wrote:
On Tue, Aug 28, 2018 at 11:47 AM Austin S. Hemmelgarn
wrote:
On 2018-08-28 11:27, Noah Massey wrote:
On Tue, Aug 28, 2018 at 10:59 AM Menion wrote:
[sudo] password for menion:
ID gen top level path
On 2018-08-28 11:27, Noah Massey wrote:
On Tue, Aug 28, 2018 at 10:59 AM Menion wrote:
[sudo] password for menion:
ID gen top level path
-- --- -
257 600627 5 /@
258 600626 5 /@home
296 599489 5
/@apt-snapsho
On 2018-08-27 18:53, John Petrini wrote:
Hi List,
I'm seeing corruption errors when running btrfs device stats but I'm
not sure what that means exactly. I've just completed a full scrub and
it reported no errors. I'm hoping someone here can enlighten me.
Thanks!
The first thing to understand h
On 2018-08-27 17:05, Eugene Bright wrote:
Greetings!
BTRFS wiki says there is no per-subvolume compression option [1].
At the same time next command allow me to set properties per-subvolume:
btrfs property set /volume compression zstd
Corresponding get command shows distinct propertie
On 2018-08-23 10:04, Stefan Malte Schumacher wrote:
Hallo,
I originally had RAID with six 4TB drives, which was more than 80
percent full. So now I bought
a 10TB drive, added it to the Array and gave the command to remove the
oldest drive in the array.
btrfs device delete /dev/sda /mnt/btrfs-
On 2018-08-22 11:01, David Sterba wrote:
On Wed, Aug 22, 2018 at 09:56:59AM -0400, Austin S. Hemmelgarn wrote:
On 2018-08-22 09:48, David Sterba wrote:
On Tue, Aug 21, 2018 at 01:01:00PM -0400, Austin S. Hemmelgarn wrote:
On 2018-08-21 12:05, David Sterba wrote:
On Tue, Aug 21, 2018 at 10:10
On 2018-08-22 09:48, David Sterba wrote:
On Tue, Aug 21, 2018 at 01:01:00PM -0400, Austin S. Hemmelgarn wrote:
On 2018-08-21 12:05, David Sterba wrote:
On Tue, Aug 21, 2018 at 10:10:04AM -0400, Austin S. Hemmelgarn wrote:
On 2018-08-21 09:32, Janos Toth F. wrote:
so pretty much everyone who
On 2018-08-21 23:57, Duncan wrote:
Austin S. Hemmelgarn posted on Tue, 21 Aug 2018 13:01:00 -0400 as
excerpted:
Otherwise, the only option for people who want it set is to patch the
kernel to get noatime as the default (instead of relatime). I would
look at pushing such a patch upstream
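Until such a patch lands, the usual way to get noatime is per-mount in fstab; an illustrative entry (device, mount point, and subvolume are placeholders):

```
# /etc/fstab -- opt into noatime explicitly; relatime is the kernel default.
/dev/disk/by-label/root  /  btrfs  noatime,subvol=@  0  1
```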