On 2018-08-21 12:05, David Sterba wrote:
On Tue, Aug 21, 2018 at 10:10:04AM -0400, Austin S. Hemmelgarn wrote:
On 2018-08-21 09:32, Janos Toth F. wrote:
so pretty much everyone who wants to avoid the overhead from them can just
use the `noatime` mount option.
It would be great if someone
On 2018-08-21 09:43, David Howells wrote:
Qu Wenruo wrote:
But to be more clear, NOSSD shouldn't be a special case.
In fact, currently NOSSD only affects whether we output the message
"enabling ssd optimization"; it has no real effect, if I didn't miss anything.
That's not quite true. In:
On 2018-08-21 09:32, Janos Toth F. wrote:
so pretty much everyone who wants to avoid the overhead from them can just
use the `noatime` mount option.
It would be great if someone finally fixed this old bug then:
https://bugzilla.kernel.org/show_bug.cgi?id=61601
Until then, it seems practically i
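For reference, since the suggestion quoted above comes up repeatedly: atime
updates are controlled per mount. A minimal sketch, assuming a btrfs
filesystem mounted at /mnt (path and device are placeholders):

  # stop recording access times on an already-mounted filesystem
  mount -o remount,noatime /mnt
  # or persistently, via /etc/fstab:
  # /dev/sdX1  /mnt  btrfs  noatime  0 0

On btrfs this also avoids the snapshot-related CoW cost of atime-only
inode updates that the bug report above describes.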
On 2018-08-21 08:06, Adam Borowski wrote:
On Mon, Aug 20, 2018 at 08:16:16AM -0400, Austin S. Hemmelgarn wrote:
Also, slightly OT, but atimes are not where the real benefit is here for
most people. No sane software other than mutt uses atimes (and mutt's use
of them is not sane, but tha
On 2018-08-19 06:25, Andrei Borzenkov wrote:
Sent from iPhone
On 19 Aug 2018, at 11:37, Martin Steigerwald wrote:
waxhead - 18.08.18, 22:45:
Adam Hunt wrote:
Back in 2014 Ted Tso introduced the lazytime mount option for ext4
and shortly thereafter a more generic VFS implementation
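For reference, lazytime is a generic VFS mount option (merged in Linux
4.0); a minimal sketch, with the mountpoint as a placeholder:

  # keep timestamp updates in memory; they are flushed on sync, when
  # the inode is written out anyway, or roughly once a day
  mount -o remount,lazytime /mnt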
On 2018-08-17 08:50, Roman Mamedov wrote:
On Fri, 17 Aug 2018 14:28:25 +0200
Martin Steigerwald wrote:
First off, keep in mind that the SSD firmware doing compression only
really helps with wear-leveling. Doing it in the filesystem will help
not only with that, but will also give you more spa
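A sketch of the filesystem-side compression discussed here (paths are
placeholders; compress=zstd needs kernel 4.14 or newer, and setting zstd
via the per-file property may need a newer kernel still):

  # transparently compress new writes for the whole mount
  mount -o remount,compress=zstd /mnt
  # or opt in per file or directory
  btrfs property set /mnt/data compression zstd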
On 2018-08-17 08:28, Martin Steigerwald wrote:
Thanks for your detailed answer.
Austin S. Hemmelgarn - 17.08.18, 13:58:
On 2018-08-17 05:08, Martin Steigerwald wrote:
[…]
I have seen a discussion about the limitation in point 2, that is,
about allowing one to add a device and make it into RAID 1 again
On 2018-08-17 05:08, Martin Steigerwald wrote:
Hi!
This happened about two weeks ago. I already dealt with it and all is
well.
Linux hung on suspend so I switched off this ThinkPad T520 forcefully.
After that it did not boot the operating system anymore. Intel SSD 320,
latest firmware, which sh
On 2018-08-10 06:07, Cerem Cem ASLAN wrote:
Original question is here: https://superuser.com/questions/1347843
How can we be sure that a readonly snapshot is not corrupted due to a disk
failure?
Is the only way to calculate the checksums one by one and store them
for further examination, or does
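For reference, btrfs already checksums data (unless nodatasum/nodatacow
is in effect), so a scrub verifies a read-only snapshot without any
manually maintained digests; a sketch, with /mnt as a placeholder:

  # read everything and verify it against the stored checksums
  btrfs scrub start -Bd /mnt   # -B: stay in foreground, -d: per-device stats
  # or inspect a background scrub later
  btrfs scrub status /mnt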
On 2018-08-12 03:04, Andrei Borzenkov wrote:
On 12.08.2018 06:16, Chris Murphy wrote:
On Fri, Aug 10, 2018 at 9:29 PM, Duncan <1i5t5.dun...@cox.net> wrote:
Chris Murphy posted on Fri, 10 Aug 2018 12:07:34 -0600 as excerpted:
But whether data is shared or exclusive seems potentially ephemeral, an
On 2018-08-10 14:07, Chris Murphy wrote:
On Thu, Aug 9, 2018 at 5:35 PM, Qu Wenruo wrote:
On 8/10/18 1:48 AM, Tomasz Pala wrote:
On Tue, Jul 31, 2018 at 22:32:07 +0800, Qu Wenruo wrote:
2) Different limitations on exclusive/shared bytes
Btrfs can set different limits on exclusive/shared
On 2018-08-10 14:21, Tomasz Pala wrote:
On Fri, Aug 10, 2018 at 07:39:30 -0400, Austin S. Hemmelgarn wrote:
I.e.: every shared segment should be accounted within quota (at least once).
I think what you mean to say here is that every shared extent should be
accounted to quotas for every
On 2018-08-09 13:48, Tomasz Pala wrote:
On Tue, Jul 31, 2018 at 22:32:07 +0800, Qu Wenruo wrote:
2) Different limitations on exclusive/shared bytes
Btrfs can set different limits on exclusive/shared bytes, further
complicating the problem.
3) Btrfs quota only accounts data/metadata used
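To make the shared/exclusive distinction concrete, a sketch of the
qgroup commands involved (paths and sizes are placeholders):

  btrfs quota enable /mnt
  # cap only the data exclusively owned by this subvolume at 10GiB
  btrfs qgroup limit -e 10G /mnt/subvol
  # show referenced and exclusive usage alongside the limits
  btrfs qgroup show -re /mnt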
On 2018-08-09 19:35, Qu Wenruo wrote:
On 8/10/18 1:48 AM, Tomasz Pala wrote:
On Tue, Jul 31, 2018 at 22:32:07 +0800, Qu Wenruo wrote:
2) Different limitations on exclusive/shared bytes
Btrfs can set different limits on exclusive/shared bytes, further
complicating the problem.
3) Btrf
On 2018-08-02 06:56, Qu Wenruo wrote:
On 2018-08-02 18:45, Andrei Borzenkov wrote:
Sent from iPhone
On 2 Aug 2018, at 10:02, Qu Wenruo wrote:
On 2018-08-01 11:45, MegaBrutal wrote:
Hi all,
I know it's a decade-old question, but I'd like to hear your thoughts
as of today. By no
On 2018-07-31 23:45, MegaBrutal wrote:
Hi all,
I know it's a decade-old question, but I'd like to hear your thoughts
as of today. By now, I have become a heavy BTRFS user. I use BTRFS almost
everywhere, except in situations where it is obvious there is no benefit
(e.g. /var/log, /boot). At home, all my d
On 2018-07-20 14:41, Hugo Mills wrote:
On Fri, Jul 20, 2018 at 09:38:14PM +0300, Andrei Borzenkov wrote:
On 20.07.2018 20:16, Goffredo Baroncelli wrote:
[snip]
Limiting the number of disks per raid in BTRFS would be quite simple to implement in the
"chunk allocator"
You mean that currently RA
On 2018-07-20 13:13, Goffredo Baroncelli wrote:
On 07/19/2018 09:10 PM, Austin S. Hemmelgarn wrote:
On 2018-07-19 13:29, Goffredo Baroncelli wrote:
[...]
So far you have been repeating what I said: the only useful raid profiles are
- striping
- mirroring
- striping+parity (even limiting the
On 2018-07-20 01:01, Andrei Borzenkov wrote:
On 18.07.2018 16:30, Austin S. Hemmelgarn wrote:
On 2018-07-18 09:07, Chris Murphy wrote:
On Wed, Jul 18, 2018 at 6:35 AM, Austin S. Hemmelgarn
wrote:
If you're doing a training presentation, it may be worth mentioning that
preallocation
On 2018-07-19 13:29, Goffredo Baroncelli wrote:
On 07/19/2018 01:43 PM, Austin S. Hemmelgarn wrote:
On 2018-07-18 15:42, Goffredo Baroncelli wrote:
On 07/18/2018 09:20 AM, Duncan wrote:
Goffredo Baroncelli posted on Wed, 18 Jul 2018 07:59:52 +0200 as
excerpted:
On 07/17/2018 11:12 PM
On 2018-07-19 03:27, Qu Wenruo wrote:
On 2018-07-14 02:46, David Sterba wrote:
Hi,
I have some goodies that go into the RAID56 problem; although they do not
implement all the remaining features, they can be useful independently.
This time my hackweek project
https://hackweek.suse.com/17/projects/
On 2018-07-18 15:42, Goffredo Baroncelli wrote:
On 07/18/2018 09:20 AM, Duncan wrote:
Goffredo Baroncelli posted on Wed, 18 Jul 2018 07:59:52 +0200 as
excerpted:
On 07/17/2018 11:12 PM, Duncan wrote:
Goffredo Baroncelli posted on Mon, 16 Jul 2018 20:29:46 +0200 as
excerpted:
On 07/15/2018 0
On 2018-07-18 17:32, Chris Murphy wrote:
On Wed, Jul 18, 2018 at 12:01 PM, Austin S. Hemmelgarn
wrote:
On 2018-07-18 13:40, Chris Murphy wrote:
On Wed, Jul 18, 2018 at 11:14 AM, Chris Murphy
wrote:
I don't know for sure, but based on the addresses reported before and
after dd fo
On 2018-07-18 13:40, Chris Murphy wrote:
On Wed, Jul 18, 2018 at 11:14 AM, Chris Murphy wrote:
I don't know for sure, but based on the addresses reported before and
after dd for the fallocated tmp file, it looks like Btrfs is not using
the originally fallocated addresses for dd. So maybe it is
On 2018-07-18 13:04, Chris Murphy wrote:
On Wed, Jul 18, 2018 at 7:30 AM, Austin S. Hemmelgarn
wrote:
I'm not sure. In this particular case, this will fail on BTRFS for any X
larger than just short of one third of the total free space. I would expect
it to fail for any X larger than
On 2018-07-18 09:07, Chris Murphy wrote:
On Wed, Jul 18, 2018 at 6:35 AM, Austin S. Hemmelgarn
wrote:
If you're doing a training presentation, it may be worth mentioning that
preallocation with fallocate() does not behave the same on BTRFS as it does
on other filesystems. For example
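A concrete illustration of the difference, with placeholder paths: on an
overwrite-in-place filesystem the second command below cannot run out of
space, while on btrfs the overwrite is copy-on-write and must allocate
fresh space, so on a nearly full filesystem it can still fail:

  fallocate -l 1G /mnt/testfile
  # overwrite the preallocated range in place
  dd if=/dev/zero of=/mnt/testfile bs=1M count=1024 conv=notrunc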
On 2018-07-18 03:20, Duncan wrote:
Goffredo Baroncelli posted on Wed, 18 Jul 2018 07:59:52 +0200 as
excerpted:
On 07/17/2018 11:12 PM, Duncan wrote:
Goffredo Baroncelli posted on Mon, 16 Jul 2018 20:29:46 +0200 as
excerpted:
On 07/15/2018 04:37 PM, waxhead wrote:
Striping and mirroring/pa
On 2018-07-18 04:39, Duncan wrote:
Duncan posted on Wed, 18 Jul 2018 07:20:09 +0000 as excerpted:
As implemented in BTRFS, raid1 doesn't have striping.
The argument is that because there are only two copies, on multi-device
btrfs raid1 with 4+ devices of equal size, chunk allocations tend to
On 2018-07-17 13:54, Martin Steigerwald wrote:
Nikolay Borisov - 17.07.18, 10:16:
On 17.07.2018 11:02, Martin Steigerwald wrote:
Nikolay Borisov - 17.07.18, 09:20:
On 16.07.2018 23:58, Wolf wrote:
Greetings,
I would like to ask what is a healthy amount of free space to
keep on each device
On 2018-07-16 16:58, Wolf wrote:
Greetings,
I would like to ask what is a healthy amount of free space to keep on
each device for btrfs to be happy?
This is what my disk array currently looks like:
[root@dennas ~]# btrfs fi usage /raid
Overall:
Device size:
On 2018-07-16 14:29, Goffredo Baroncelli wrote:
On 07/15/2018 04:37 PM, waxhead wrote:
David Sterba wrote:
An interesting question is the naming of the extended profiles. I picked
something that can be easily understood but it's not a final proposal.
Years ago, Hugo proposed a naming scheme tha
On 2018-07-03 03:35, Duncan wrote:
Austin S. Hemmelgarn posted on Mon, 02 Jul 2018 07:49:05 -0400 as
excerpted:
Notably, most Intel systems I've seen have the SATA controllers in the
chipset enumerate after the USB controllers, and the whole chipset
enumerates after add-in cards (so
On 2018-07-02 13:34, Marc MERLIN wrote:
On Mon, Jul 02, 2018 at 12:59:02PM -0400, Austin S. Hemmelgarn wrote:
Am I supposed to put LVM thin volumes underneath so that I can share
the same single 10TB raid5?
Actually, because of the online resize ability in BTRFS, you don't
technically _
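For reference, the online resize mentioned here operates per device and
while mounted; a sketch with placeholder values:

  # shrink the part of the filesystem on device id 1 by 10GiB
  btrfs filesystem resize 1:-10G /mnt
  # grow to whatever the underlying device now provides
  btrfs filesystem resize max /mnt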
On 2018-07-02 11:19, Marc MERLIN wrote:
Hi Qu,
thanks for the detailed and honest answer.
A few comments inline.
On Mon, Jul 02, 2018 at 10:42:40PM +0800, Qu Wenruo wrote:
For full, it depends (but for most real-world cases, it's still flawed).
We have small and crafted images as test cases, w
On 2018-07-02 11:18, Marc MERLIN wrote:
Hi Qu,
I'll split this part into a new thread:
2) Don't keep unrelated snapshots in one btrfs.
I totally understand that maintaining different btrfs filesystems would
hugely add maintenance pressure, but as explained, all snapshots share one
fragile extent t
On 2018-06-30 02:33, Duncan wrote:
Austin S. Hemmelgarn posted on Fri, 29 Jun 2018 14:31:04 -0400 as
excerpted:
On 2018-06-29 13:58, james harvey wrote:
On Fri, Jun 29, 2018 at 1:09 PM, Austin S. Hemmelgarn
wrote:
On 2018-06-29 11:15, james harvey wrote:
On Thu, Jun 28, 2018 at 6:27 PM
On 2018-06-30 01:32, Andrei Borzenkov wrote:
On 30.06.2018 06:22, Duncan wrote:
Austin S. Hemmelgarn posted on Mon, 25 Jun 2018 07:26:41 -0400 as
excerpted:
On 2018-06-24 16:22, Goffredo Baroncelli wrote:
On 06/23/2018 07:11 AM, Duncan wrote:
waxhead posted on Fri, 22 Jun 2018 01:13:31 +0200
On 2018-06-29 23:22, Duncan wrote:
Austin S. Hemmelgarn posted on Mon, 25 Jun 2018 07:26:41 -0400 as
excerpted:
On 2018-06-24 16:22, Goffredo Baroncelli wrote:
On 06/23/2018 07:11 AM, Duncan wrote:
waxhead posted on Fri, 22 Jun 2018 01:13:31 +0200 as excerpted:
According to this:
https
On 2018-06-29 13:58, james harvey wrote:
On Fri, Jun 29, 2018 at 1:09 PM, Austin S. Hemmelgarn
wrote:
On 2018-06-29 11:15, james harvey wrote:
On Thu, Jun 28, 2018 at 6:27 PM, Chris Murphy
wrote:
And an open question I have about scrub is whether it only ever is
checking csums, meaning
On 2018-06-29 11:15, james harvey wrote:
On Thu, Jun 28, 2018 at 6:27 PM, Chris Murphy wrote:
And an open question I have about scrub is whether it only ever is
checking csums, meaning nodatacow files are never scrubbed, or if the
copies are at least compared to each other?
Scrub never looks
On 2018-06-29 07:04, marble wrote:
Hello,
I have an external HDD. The HDD contains no partition.
I use the whole HDD as a LUKS container. Inside that LUKS is a btrfs.
It's used to store some media files.
The HDD was hooked up to a Raspberry Pi running up-to-date Arch Linux
to play music from the
On 2018-06-28 07:46, Qu Wenruo wrote:
On 2018-06-28 19:12, Austin S. Hemmelgarn wrote:
On 2018-06-28 05:15, Qu Wenruo wrote:
On 2018-06-28 16:16, Andrei Borzenkov wrote:
On Thu, Jun 28, 2018 at 8:39 AM, Qu Wenruo
wrote:
On 2018-06-28 11:14, r...@georgianit.com wrote:
On Wed, Jun
On 2018-06-28 05:15, Qu Wenruo wrote:
On 2018-06-28 16:16, Andrei Borzenkov wrote:
On Thu, Jun 28, 2018 at 8:39 AM, Qu Wenruo wrote:
On 2018-06-28 11:14, r...@georgianit.com wrote:
On Wed, Jun 27, 2018, at 10:55 PM, Qu Wenruo wrote:
Please get yourself clear on what other raid1 is
On 2018-06-25 21:05, Sterling Windmill wrote:
I am running a single btrfs RAID10 volume of eight LUKS devices, each
using a 2TB SATA hard drive as a backing store. The SATA drives are a
mixture of Seagate and Western Digital drives, some with RPMs ranging
from 5400 to 7200. Each seems to individu
On 2018-06-25 12:07, Marc MERLIN wrote:
On Tue, Jun 19, 2018 at 12:58:44PM -0400, Austin S. Hemmelgarn wrote:
In your situation, I would run "btrfs pause ", wait to hear from
a btrfs developer, and not use the volume whatsoever in the meantime.
I would say this is probably good
On 2018-06-24 16:22, Goffredo Baroncelli wrote:
On 06/23/2018 07:11 AM, Duncan wrote:
waxhead posted on Fri, 22 Jun 2018 01:13:31 +0200 as excerpted:
According to this:
https://stratis-storage.github.io/StratisSoftwareDesign.pdf Page 4 ,
section 1.2
It claims that BTRFS still has significan
On 2018-06-19 12:30, james harvey wrote:
On Tue, Jun 19, 2018 at 11:47 AM, Marc MERLIN wrote:
On Mon, Jun 18, 2018 at 06:00:55AM -0700, Marc MERLIN wrote:
So, I ran this:
gargamel:/mnt/btrfs_pool2# btrfs balance start -dusage=60 -v . &
[1] 24450
Dumping filters: flags 0x1, state 0x0, force is
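A balance started like the one above can be watched and controlled from
another shell; a sketch (the mountpoint is a placeholder):

  btrfs balance status /mnt   # progress of the running balance
  btrfs balance pause /mnt
  btrfs balance resume /mnt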
On 2018-06-15 13:40, Chris Murphy wrote:
On Fri, Jun 15, 2018 at 5:33 AM, ein wrote:
Hello group,
has anyone had any luck hosting qemu kvm images residing on a BTRFS
filesystem while serving
the volume via iSCSI?
I encountered some unidentified problem and I am able to replicate it. B
On 2018-05-29 10:02, ein wrote:
On 05/29/2018 02:12 PM, Austin S. Hemmelgarn wrote:
On 2018-05-28 13:10, ein wrote:
On 05/23/2018 01:03 PM, Austin S. Hemmelgarn wrote:
On 2018-05-23 06:09, ein wrote:
On 05/23/2018 11:09 AM, Duncan wrote:
ein posted on Wed, 23 May 2018 10:03:52 +0200 as
On 2018-05-28 13:10, ein wrote:
On 05/23/2018 01:03 PM, Austin S. Hemmelgarn wrote:
On 2018-05-23 06:09, ein wrote:
On 05/23/2018 11:09 AM, Duncan wrote:
ein posted on Wed, 23 May 2018 10:03:52 +0200 as excerpted:
IMHO the best course of action would be to disable checksumming for your
VM files.
On 2018-05-23 06:09, ein wrote:
On 05/23/2018 11:09 AM, Duncan wrote:
ein posted on Wed, 23 May 2018 10:03:52 +0200 as excerpted:
IMHO the best course of action would be to disable checksumming for your
VM files.
Do you mean the '-o nodatasum' mount flag? Is it possible to disable
checksumming fo
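For reference: -o nodatasum applies to a whole mount, but checksumming
can effectively be disabled per file via the NOCOW attribute (which on
btrfs also implies no checksums); it must be set while the file is still
empty. Paths below are placeholders:

  touch /mnt/vm/disk.img
  chattr +C /mnt/vm/disk.img   # must happen before any data is written
  # now populate the image; non-empty files cannot be converted in place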
On 2018-05-21 13:43, David Sterba wrote:
On Fri, May 18, 2018 at 01:10:02PM -0400, Austin S. Hemmelgarn wrote:
On 2018-05-18 12:36, Niccolò Belli wrote:
On Friday 18 May 2018 18:20:51 CEST, David Sterba wrote:
Josef started working on that in 2014 and did not finish it. The patches
can be
On 2018-05-21 09:42, Timofey Titovets wrote:
On Mon, 21 May 2018 at 16:16, Austin S. Hemmelgarn wrote:
On 2018-05-19 04:54, Niccolò Belli wrote:
On Friday 18 May 2018 20:33:53 CEST, Austin S. Hemmelgarn wrote:
With a bit of work, it's possible to handle things sanely. You can
deduplicate
On 2018-05-19 04:54, Niccolò Belli wrote:
On Friday 18 May 2018 20:33:53 CEST, Austin S. Hemmelgarn wrote:
With a bit of work, it's possible to handle things sanely. You can
deduplicate data from snapshots, even if they are read-only (you need
to pass the `-A` option to duperemove an
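The flags under discussion, in sketch form (paths are placeholders):

  # -d actually dedupes, -r recurses, -A opens files read-only so that
  # read-only snapshots can be processed; the hashfile caches checksums
  # between runs
  duperemove -drA --hashfile=/var/tmp/dedupe.hash /mnt/snapshots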
On 2018-05-18 13:18, Niccolò Belli wrote:
On Friday 18 May 2018 19:10:02 CEST, Austin S. Hemmelgarn wrote:
and also forces the people who have ridiculous numbers of snapshots to
deal with the memory usage or never defrag
Whoever has at least one snapshot is never going to defrag anyway
On 2018-05-18 12:36, Niccolò Belli wrote:
On Friday 18 May 2018 18:20:51 CEST, David Sterba wrote:
Josef started working on that in 2014 and did not finish it. The patches
can still be found in his tree. The problem is excessive memory
consumption when there are many snapshots that need t
10:46 PM, Jeff Mahoney wrote:
On 5/17/18 8:25 AM, Austin S. Hemmelgarn wrote:
On 2018-05-16 22:32, Anand Jain wrote:
On 05/17/2018 06:35 AM, David Sterba wrote:
On Wed, May 16, 2018 at 06:03:56PM +0800, Anand Jain wrote:
Not yet ready for integration, as I need to introdu
On 2018-05-17 10:46, Jeff Mahoney wrote:
On 5/16/18 6:35 PM, David Sterba wrote:
On Wed, May 16, 2018 at 06:03:56PM +0800, Anand Jain wrote:
Not yet ready for integration, as I need to introduce
-o no_read_mirror_policy instead of -o read_mirror_policy=-
A mount option is most likely not
On 2018-05-16 22:32, Anand Jain wrote:
On 05/17/2018 06:35 AM, David Sterba wrote:
On Wed, May 16, 2018 at 06:03:56PM +0800, Anand Jain wrote:
Not yet ready for integration, as I need to introduce
-o no_read_mirror_policy instead of -o read_mirror_policy=-
A mount option is most likely
On 2018-05-16 09:23, Anand Jain wrote:
On 05/16/2018 07:25 PM, Austin S. Hemmelgarn wrote:
On 2018-05-15 22:51, Anand Jain wrote:
Add a kernel log when the balance ends, whether it is cancelled, completed,
or paused.
---
v1->v2: Moved from 2/3 to 3/3
fs/btrfs/volumes.c | 7 +++
On 2018-05-15 22:51, Anand Jain wrote:
Add a kernel log when the balance ends, whether it is cancelled, completed,
or paused.
---
v1->v2: Moved from 2/3 to 3/3
fs/btrfs/volumes.c | 7 +++
1 file changed, 7 insertions(+)
diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
index ce68c
encryption, and you
can't inspect that code yourself).
On 08/05/2018 at 13:32, Austin S. Hemmelgarn wrote:
On 2018-05-08 03:50, Rolf Wald wrote:
Hello,
some hints inside
On 08.05.2018 at 02:22, faurepi...@gmail.com wrote:
Hi,
I'm curious about btrfs, and maybe considering it
On 2018-05-08 03:50, Rolf Wald wrote:
Hello,
some hints inside
On 08.05.2018 at 02:22, faurepi...@gmail.com wrote:
Hi,
I'm curious about btrfs, and maybe considering it for my new laptop
installation (a Lenovo T470).
I was going to install my usual lvm+ext4+full disk encryption setup, but
th
On 2018-05-03 04:11, Andrei Borzenkov wrote:
On Wed, May 2, 2018 at 10:29 PM, Austin S. Hemmelgarn
wrote:
...
Assume you have a BTRFS raid5 volume consisting of 6 8TB disks (which gives
you 40TB of usable space). You're storing roughly 20TB of data on it, using
a 16kB block size, and it
On 2018-05-02 16:40, Goffredo Baroncelli wrote:
On 05/02/2018 09:29 PM, Austin S. Hemmelgarn wrote:
On 2018-05-02 13:25, Goffredo Baroncelli wrote:
On 05/02/2018 06:55 PM, waxhead wrote:
So again, which problem would having the parity checksummed solve? To the best
of my knowledge, nothing
On 2018-05-02 13:25, Goffredo Baroncelli wrote:
On 05/02/2018 06:55 PM, waxhead wrote:
So again, which problem would having the parity checksummed solve? To the best
of my knowledge, nothing. In any case the data is checksummed, so it is
impossible to return corrupted data (modulo bugs :-) ).
On 2018-05-02 12:55, waxhead wrote:
Goffredo Baroncelli wrote:
Hi
On 05/02/2018 03:47 AM, Duncan wrote:
Gandalf Corvotempesta posted on Tue, 01 May 2018 21:57:59 + as
excerpted:
Hi to all. I've found some patches from Andrea Mazzoleni that add
support for up to 6-parity raid.
Why these are wa
On 2018-04-25 07:29, Christoph Anton Mitterer wrote:
On Wed, 2018-04-25 at 07:22 -0400, Austin S. Hemmelgarn wrote:
While I can understand Duncan's point here, I'm inclined to agree
with
David
Same from my side... and I run a multi-PiB storage site (though not
with btrfs).
Cosmet
On 2018-04-25 07:13, Gandalf Corvotempesta wrote:
2018-04-23 17:16 GMT+02:00 David Sterba :
Reviewed and updated for 4.16, there's no change regarding the overall
status, though 4.16 has some raid56 fixes.
Thank you!
Any ETA for a stable RAID56? (or, even better, for a stable btrfs
ready for
On 2018-04-25 07:02, David Sterba wrote:
On Wed, Apr 25, 2018 at 06:31:20AM +, Duncan wrote:
David Sterba posted on Tue, 24 Apr 2018 13:58:57 +0200 as excerpted:
btrfs-progs version 4.16.1 has been released. This is a bugfix
release.
Changes:
* remove obsolete tools: btrfs-debug-tre
On 2018-04-23 14:25, waxhead wrote:
Howdy!
I am pondering writing a little C program that uses libmicrohttpd and
libbtrfsutil to display some very basic (overview) details about BTRFS.
I was hoping to display the same information that 'btrfs fi sh /mnt' and
'btrfs fi us -T /mnt' do, but somewh
On 2018-04-20 10:21, David Sterba wrote:
This patchset adds a new ioctl, similar to TRIM, that provides several
other ways to clear unused space. The changelogs are
incomplete; this is a preview, not for inclusion yet.
+1 for the idea. This will be insanely useful for certain VM setups.
It com
On 2018-04-18 11:10, Brendan Hide wrote:
Hi, all
I'm looking for some advice re compression with NVMe. Compression helps
performance with a minor CPU hit - but is it still worth it with the far
higher throughputs offered by newer PCIe and NVMe-type SSDs?
I've ordered a PCIe-to-M.2 adapter alo
On 2018-04-16 13:10, Chris Murphy wrote:
Adding linux-usb@ and linux-scsi@
(This email does contain the thread initiating email, but some replies
are on the other lists.)
On Mon, Apr 16, 2018 at 5:43 AM, Austin S. Hemmelgarn
wrote:
On 2018-04-15 21:04, Chris Murphy wrote:
I just ran into
On 2018-04-16 11:02, Wol's lists wrote:
On 16/04/18 12:43, Austin S. Hemmelgarn wrote:
On 2018-04-15 21:04, Chris Murphy wrote:
I just ran into this:
https://github.com/neilbrown/mdadm/pull/32/commits/af1ddca7d5311dfc9ed60a5eb6497db1296f1bec
This solution is inadequate, can it be made
On 2018-04-15 21:04, Chris Murphy wrote:
I just ran into this:
https://github.com/neilbrown/mdadm/pull/32/commits/af1ddca7d5311dfc9ed60a5eb6497db1296f1bec
This solution is inadequate, can it be made more generic? This isn't
an md specific problem, it affects Btrfs and LVM as well. And in fact
ra
On 2018-04-10 09:08, James Courtier-Dutton wrote:
Hi,
I have a disk that had errors on it in the past.
I have fixed up the errors.
btrfs scrub now reports no errors.
How do I reset these counters to zero?
BTRFS info (device sdc2): bdev /dev/sdc2 errs: wr 0, rd 35, flush 0,
corrupt 1, gen 0
Run
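The counters in that log line are the ones tracked by btrfs device
stats, which can also zero them; a sketch with a placeholder mountpoint:

  btrfs device stats /mnt      # show wr/rd/flush/corrupt/gen counters
  btrfs device stats -z /mnt   # print them, then reset to zero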
On 2018-04-02 11:18, Goffredo Baroncelli wrote:
On 04/02/2018 07:45 AM, Zygo Blaxell wrote:
[...]
It is possible to combine writes from a single transaction into full
RMW stripes, but this *does* have an impact on fragmentation in btrfs.
Any partially-filled stripe is effectively read-only and t
On 2018-03-30 12:38, Adam Borowski wrote:
On Fri, Mar 30, 2018 at 10:42:10AM +0100, Pete wrote:
I've just noticed work going on to make rmdir able to delete
subvolumes. Is there an intent to allow ls -l to display directories as
subvolumes?
That's entirely up to the coreutils guys.
Expanding
fail. Returns 2 if an internal error
+occurred.
+
+Copyright (C) 2018 Austin S. Hemmelgarn
+
+This program is free software; you can redistribute it and/or
+modify it under the terms of the GNU General Public
+License v2 as published by the Free Software Foundation.
+
+This program is distribu
On 2018-03-21 16:38, Goffredo Baroncelli wrote:
On 03/21/2018 12:47 PM, Austin S. Hemmelgarn wrote:
I agree as well, with the addendum that I'd love to see a new ioctl that does
proper permissions checks. While letting rmdir(2) work for an empty subvolume
with the appropriate permis
On 2018-03-21 16:02, Christoph Anton Mitterer wrote:
On the note of maintenance specifically:
- Maintenance tools
- How to get the status of the RAID? (Querying kernel logs is IMO
rather a bad way for this)
This includes:
- Is the raid degraded or not?
Check for the 'degraded' f
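A sketch of the kind of status check being described, assuming a
filesystem mounted at /mnt (placeholder):

  # missing devices are reported in the device listing
  btrfs filesystem show /mnt
  # per-device I/O and corruption counters are another health signal
  btrfs device stats /mnt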
On 2018-03-21 03:46, Nikolay Borisov wrote:
On 20.03.2018 22:06, Goffredo Baroncelli wrote:
On 03/20/2018 07:45 AM, Misono, Tomohiro wrote:
Deletion of a subvolume by a non-privileged user is completely restricted
by default, because we can delete a subvolume even if it is not empty,
and that may cause
On 2018-03-14 14:39, Goffredo Baroncelli wrote:
On 03/14/2018 01:02 PM, Austin S. Hemmelgarn wrote:
[...]
In btrfs, a checksum mismatch causes an -EIO error during reading. In a
conventional filesystem (or a btrfs filesystem w/o datasum) there is no
checksum, so this problem doesn
On 2018-03-14 05:20, Nikolay Borisov wrote:
On 13.03.2018 17:06, Anand Jain wrote:
We aren't checking the SB csum when the device is scanned;
instead we do that when mounting the device, and if the
csum fails we fail the mount. What if we check the csum
when the device is scanned? I can't see any re
On 2018-03-13 15:36, Goffredo Baroncelli wrote:
On 03/12/2018 10:48 PM, Christoph Anton Mitterer wrote:
On Mon, 2018-03-12 at 22:22 +0100, Goffredo Baroncelli wrote:
Unfortunately no, the likelihood might be 100%: there are some
patterns which trigger this problem quite easily. See the link whi
On 2018-03-13 09:07, Valerio Pachera wrote:
Short version:
656G used (df -h)
450G used (du -sh)
10G used by snapshots
196G discrepancy <-
I don't understand what is using 196G.
df -h /mnt/dati/
Filesystem             Size  Used  Avail  Use%  Mounted on
/dev/mapper/vg00-dati 919G 656G26
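For reference, newer btrfs-progs ship a btrfs-aware du that splits the
numbers above into exclusive and shared (snapshot-referenced) data; a
sketch using the thread's mountpoint:

  btrfs filesystem du -s /mnt/dati   # total / exclusive / set shared
  btrfs filesystem usage /mnt/dati   # allocation vs. actual use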
will give you degraded
performance for the longest amount of time.
Thanks again for your notes, they should be on the wiki.. :)
I've been meaning to add it for a while actually, I just haven't gotten
around to it yet.
On Fri, 9 Mar 2018 at 16:43, Austin S. Hemmelgarn <mail
On 2018-03-09 11:02, Paul Richards wrote:
Hello there,
I have a 3 disk btrfs RAID 1 filesystem, with a single failed drive.
Before I attempt any recovery I’d like to ask what is the recommended
approach? (The wiki docs suggest consulting here before attempting
recovery[1].)
The system is power
On 2018-03-08 05:36, waxhead wrote:
Just out of curiosity, is there any work going on to enable
different "RAID" levels per subvolume?!
Not that I know of, but it would be great to have (I could get rid of
some of the various small isolated volumes I have solely to have a
different storage
On 2018-03-05 10:28, Christoph Hellwig wrote:
On Sat, Mar 03, 2018 at 06:59:26AM +, Duncan wrote:
Indeed. Preallocation with COW doesn't make the sense it does on an
overwrite-in-place filesystem.
It makes a whole lot of sense, it just is a little harder to implement.
There is no reason
On 2018-03-01 05:18, Andrei Borzenkov wrote:
On Thu, Mar 1, 2018 at 12:26 PM, vinayak hegde wrote:
No, there is no open file which has been deleted; I unmounted and
mounted again, and also rebooted.
I think I am hitting the issue below: a lot of random writes were
happening and the file is not fully w
On 2018-02-28 14:54, Duncan wrote:
Austin S. Hemmelgarn posted on Wed, 28 Feb 2018 14:24:40 -0500 as
excerpted:
I believe this effect is what Austin was referencing when he suggested
the defrag, tho defrag won't necessarily /entirely/ clear it up. One
way to be /sure/ it's cleared u
On 2018-02-28 14:09, Duncan wrote:
vinayak hegde posted on Tue, 27 Feb 2018 18:39:51 +0530 as excerpted:
I am using btrfs, but I am seeing du -sh and df -h show a huge size
difference on SSD.
mount:
/dev/drbd1 on /dc/fileunifier.datacache type btrfs
(rw,noatime,nodiratime,flushoncommit,disc
On 2018-02-27 08:09, vinayak hegde wrote:
I am using btrfs, but I am seeing du -sh and df -h show a huge size
difference on SSD.
mount:
/dev/drbd1 on /dc/fileunifier.datacache type btrfs
(rw,noatime,nodiratime,flushoncommit,discard,nospace_cache,recovery,commit=5,subvolid=5,subvol=/)
du -sh /
On 2018-02-23 06:21, Shyam Prasad N wrote:
Hi,
Can someone explain to me why there is a difference in the number of
blocks reported by the df and du commands below?
=
# df -h /dc
Filesystem Size Used Avail Use% Mounted on
/dev/drbd1 746G 519G 225G 70% /dc
# btrfs fil
On 2018-02-21 10:56, Hans van Kranenburg wrote:
On 02/21/2018 04:19 PM, Ellis H. Wilson III wrote:
$ sudo btrfs fi df /mnt/btrfs
Data, single: total=3.32TiB, used=3.32TiB
System, DUP: total=8.00MiB, used=384.00KiB
Metadata, DUP: total=16.50GiB, used=15.82GiB
GlobalReserve, single: total=512.00M
On 2018-02-20 09:59, Ellis H. Wilson III wrote:
On 02/16/2018 07:59 PM, Qu Wenruo wrote:
On 2018年02月16日 22:12, Ellis H. Wilson III wrote:
$ sudo btrfs-debug-tree -t chunk /dev/sdb | grep CHUNK_ITEM | wc -l
3454
OK, this explains everything.
There are too many chunks.
This means at mount you
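For reference, btrfs-debug-tree was later absorbed into btrfs
inspect-internal; the equivalent chunk count with newer btrfs-progs
(the device is a placeholder, and the filesystem should be unmounted
for a consistent dump):

  btrfs inspect-internal dump-tree -t chunk /dev/sdb | grep -c CHUNK_ITEM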
On 2018-02-15 11:18, Alex Adriaanse wrote:
We've been using Btrfs in production on AWS EC2 with EBS devices for over 2
years. There is so much I love about Btrfs: CoW snapshots, compression,
subvolumes, flexibility, the tools, etc. However, lack of stability has been a
serious ongoing issue fo
On 2018-02-15 11:58, Ellis H. Wilson III wrote:
On 02/15/2018 11:51 AM, Austin S. Hemmelgarn wrote:
There are scaling performance issues with directory listings on BTRFS
for directories with more than a few thousand files, but they're not
well documented (most people don't hit th