Approximately 16 hours ago I ran a script that deleted more than ~100
snapshots and started a quota rescan on a large USB-connected btrfs volume
(5.4 of 22 TB now occupied). The quota rescan only completed just now, with
100% load from [btrfs-transacti] throughout this period, which is
probably ~OK depending
Signed-off-by: Anand Jain
---
fs/btrfs/disk-io.c | 3 ---
fs/btrfs/volumes.h | 1 -
2 files changed, 4 deletions(-)
diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
index 08b74daf35d0..9de35bca1f67 100644
--- a/fs/btrfs/disk-io.c
+++ b/fs/btrfs/disk-io.c
@@ -3521,9 +3521,6 @@ static int writ
As of now we allocate an empty bio and then use the REQ_PREFLUSH flag
to flush the device cache; instead we can use blkdev_issue_flush()
for this purpose.
Also, there is now no need to check the return value when
write_dev_flush() is called with wait = 0.
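
For illustration, a minimal sketch of the suggested replacement, assuming
the v4.11-era blkdev_issue_flush() signature; the helper name here is
hypothetical and this is not the actual patch:

/*
 * Sketch only: replace the hand-rolled empty bio + REQ_PREFLUSH
 * submission with a single synchronous cache flush.
 */
static int write_dev_flush_sketch(struct btrfs_device *device)
{
	if (!device->bdev || !device->writeable)
		return 0;

	/* v4.11-era signature: (bdev, gfp_mask, error_sector) */
	return blkdev_issue_flush(device->bdev, GFP_NOFS, NULL);
}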
Signed-off-by: Anand Jain
---
V2
Title of this patch is
Now when counting the number of error devices we don't need to count
them separately during send and wait, because the device error
counted during send is more of a static check.
Also kindly note that as of now there is no code which would set
dev->bdev = NULL unless the device is missing. However I still
The objective of this patch is to clean up barrier_all_devices()
so that the error checking is in a separate loop, independent of
the loop which submits and waits on the device flush requests.
This makes it easier to develop further patches that tune
the error actions as needed.
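
To make the intended shape concrete, a hedged sketch (the third loop and
the last_flush_error field are illustrative, not the posted code):

static int barrier_all_devices_sketch(struct btrfs_fs_info *info)
{
	struct btrfs_device *dev;
	int errors = 0;

	/* Loop 1: submit an async cache flush to every usable device. */
	list_for_each_entry(dev, &info->fs_devices->devices, dev_list)
		if (dev->bdev && dev->writeable)
			write_dev_flush(dev, 0);

	/* Loop 2: wait for every flush submitted above. */
	list_for_each_entry(dev, &info->fs_devices->devices, dev_list)
		if (dev->bdev && dev->writeable)
			write_dev_flush(dev, 1);

	/* Loop 3: error checking, now independent of submit/wait. */
	list_for_each_entry(dev, &info->fs_devices->devices, dev_list)
		if (!dev->bdev || dev->last_flush_error) /* illustrative */
			errors++;

	if (errors > info->num_tolerated_disk_barrier_failures)
		return -EIO;
	return 0;
}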
Signed-off-by: Anand Jain
On 2017-03-30 09:07, Tim Cuthbertson wrote:
On Wed, Mar 29, 2017 at 10:46 PM, Duncan <1i5t5.dun...@cox.net> wrote:
Tim Cuthbertson posted on Wed, 29 Mar 2017 18:20:52 -0500 as excerpted:
So, another question...
Do I then leave the top level mounted all the time for snapshots, or
should I create
> Can you try to first dedup the btrfs volume? This is probably
> out of date, but you could try one of these: [ ... ] Yep,
> that's probably a lot of work. [ ... ] My recollection is that
> btrfs handles deduplication differently than zfs, but both of
> them can be very, very slow
But the big de
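
For anyone wanting to try this by hand, the out-of-band dedup interface
that tools like duperemove build on is the FIDEDUPERANGE ioctl. A minimal
self-contained example; the paths and the 128 KiB length are placeholders:

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/fs.h>

int main(void)
{
	int src = open("/mnt/btrfs/a", O_RDONLY);
	int dst = open("/mnt/btrfs/b", O_RDWR);
	struct file_dedupe_range *r;

	if (src < 0 || dst < 0)
		return 1;

	/* One request with a single destination range. */
	r = calloc(1, sizeof(*r) + sizeof(struct file_dedupe_range_info));
	if (!r)
		return 1;
	r->src_offset = 0;
	r->src_length = 128 * 1024;	/* dedup the first 128 KiB */
	r->dest_count = 1;
	r->info[0].dest_fd = dst;
	r->info[0].dest_offset = 0;

	/* The kernel compares the ranges and only links identical data. */
	if (ioctl(src, FIDEDUPERANGE, r) < 0) {
		perror("FIDEDUPERANGE");
		return 1;
	}
	printf("status=%d bytes_deduped=%llu\n", r->info[0].status,
	       (unsigned long long)r->info[0].bytes_deduped);
	return 0;
}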
>>> The way btrfs is designed I'd actually expect shrinking to
>>> be fast in most cases. [ ... ]
>> The proposed "move whole chunks" implementation helps only if
>> there are enough unallocated chunks "below the line". If regular
>> 'balance' is done on the filesystem there will be some, but that
Marat Khalili posted on Fri, 31 Mar 2017 10:05:20 +0300 as excerpted:
> Approximately 16 hours ago I ran a script that deleted more than ~100
> snapshots and started a quota rescan on a large USB-connected btrfs volume
> (5.4 of 22 TB now occupied). The quota rescan only completed just now, with
> 100% load f
Thank you very much for the reply and suggestions; more comments below.
Still, is there a definitive answer to the root question: are different
btrfs volumes independent in terms of CPU, or are there some shared
workers that can be a point of contention?
What would have been interesting would have been i
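
On the kernel side the answer appears to be per-volume: each mounted
btrfs gets its own btrfs_fs_info, and the busy threads hang off it.
Abridged excerpt (field list shortened, v4.x fs/btrfs/ctree.h):

struct btrfs_fs_info {
	/* ... */
	struct task_struct *transaction_kthread; /* the [btrfs-transacti] thread */
	struct task_struct *cleaner_kthread;	 /* deleted-subvolume cleanup */
	struct btrfs_workqueue *workers;	 /* per-mount worker pools */
	struct btrfs_workqueue *delalloc_workers;
	struct btrfs_workqueue *endio_workers;
	/* ... */
};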
>> [ ... ] CentOS, Redhat, and Oracle seem to take the position
>> that very large data subvolumes using btrfs should work
>> fine. But I would be curious what the rest of the list thinks
>> about 20 TiB in one volume/subvolume.
> To be sure I'm a biased voice here, as I have multiple
> independen
On 2017-03-30 11:55, Peter Grandi wrote:
My guess is that very complex risky slow operations like that are
provided by "clever" filesystem developers for "marketing" purposes,
to win box-ticking competitions. That applies to those system
developers who do know better; I suspect that even some fil
Hi,
While doing a regular kernel build I triggered the following splat
on a vanilla v4.11-rc4 kernel.
[73253.814880] WARNING: CPU: 20 PID: 631 at fs/btrfs/qgroup.c:2472
btrfs_qgroup_free_refroot+0x154/0x180 [btrfs]
[73253.814880] Modules linked in: st(E) sr_mod(E) cdrom(E) nfsv3(E) nfs_acl(E)
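
For context, the line number points into btrfs_qgroup_free_refroot(),
which checks for reservation underflow when freeing. A sketch of the
pattern (not a verbatim copy of the v4.11 code):

/* Sketch: freeing more than is reserved trips the WARN and clamps. */
static void qgroup_free_sketch(struct btrfs_qgroup *qgroup, u64 num_bytes)
{
	if (WARN_ON(qgroup->reserved < num_bytes))
		num_bytes = qgroup->reserved;
	qgroup->reserved -= num_bytes;
}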
The opposite case was already handled right in the very next switch entry.
Reported-by: Hans van Kranenburg
Signed-off-by: Adam Borowski
---
Not sure if setting NOSSD should also disable SSD_SPREAD; there's
currently no way to disable that option once set.
fs/btrfs/super.c | 2 ++
1 file changed, 2 insertions(+)
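
A hedged sketch of the shape such a fix could take in
btrfs_parse_options() (fs/btrfs/super.c), following the existing
set_and_info/clear_opt pattern; not a verbatim copy of the patch:

	/* Fragment from the mount-option switch; sketch only. */
	case Opt_nossd:
		btrfs_set_and_info(info, NOSSD,
				   "not using ssd allocation scheme");
		btrfs_clear_opt(info->mount_opt, SSD);
		/* open question above: also drop ssd_spread? */
		btrfs_clear_opt(info->mount_opt, SSD_SPREAD);
		break;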
Hi,
btrfs-progs version 4.10.2 has been released. More build breakages fixed and
some minor updates.
Changes:
* check: lowmem mode fix for false alert about lost backrefs
* convert: minor bugfix
* library: fix build, missing symbols, added tests
Tarballs: https://www.kernel.org/pub/linux/k
On 03/31/2017 05:19 PM, Adam Borowski wrote:
> The opposite case was already handled right in the very next switch entry.
>
> Reported-by: Hans van Kranenburg
> Signed-off-by: Adam Borowski
> ---
> Not sure if setting NOSSD should also disable SSD_SPREAD, there's currently
> no way to disable th
On Fri, Mar 31, 2017 at 06:00:08PM +0200, Hans van Kranenburg wrote:
> On 03/31/2017 05:19 PM, Adam Borowski wrote:
> > The opposite case was already handled right in the very next switch entry.
> >
> > Reported-by: Hans van Kranenburg
> > Signed-off-by: Adam Borowski
> > ---
> > Not sure if set
>>> My guess is that very complex risky slow operations like
>>> that are provided by "clever" filesystem developers for
>>> "marketing" purposes, to win box-ticking competitions.
>>> That applies to those system developers who do know better;
>>> I suspect that even some filesystem developers are
On Fri, Mar 31, 2017 at 10:03:28AM +0800, Qu Wenruo wrote:
>
>
> At 03/30/2017 06:31 PM, David Sterba wrote:
> > On Thu, Mar 30, 2017 at 09:03:21AM +0800, Qu Wenruo wrote:
> +static int lock_full_stripe(struct btrfs_fs_info *fs_info, u64 bytenr)
> +{
> +	struct btrfs_block_g
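
For orientation, a hedged sketch of what lock_full_stripe() is about:
serializing scrub/recovery on one RAID5/6 full stripe via a lock record
anchored at the block group containing @bytenr. Helper names below are
illustrative, not the posted code:

static int lock_full_stripe_sketch(struct btrfs_fs_info *fs_info, u64 bytenr)
{
	struct btrfs_block_group_cache *bg;
	u64 fstripe_start;

	bg = btrfs_lookup_block_group(fs_info, bytenr);
	if (!bg)
		return -ENOENT;

	/* Round bytenr down to the start of its full stripe. */
	fstripe_start = get_full_stripe_logical(bg, bytenr); /* illustrative */

	/* Take a per-full-stripe lock hanging off the block group. */
	insert_full_stripe_lock(bg, fstripe_start);          /* illustrative */

	btrfs_put_block_group(bg);
	return 0;
}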
On Wed, Mar 15, 2017 at 05:02:26PM +0100, David Sterba wrote:
> No point using radix_tree_gang_lookup if we're looking up just one slot.
>
> Signed-off-by: David Sterba
I've bisected to this patch; it causes a hang in btrfs/011. I'll revert
it until I find out the cause.
On Fri, Mar 31, 2017 at 09:29:20AM +0800, Qu Wenruo wrote:
>
>
> At 03/31/2017 12:49 AM, Liu Bo wrote:
> > On Thu, Mar 30, 2017 at 02:32:47PM +0800, Qu Wenruo wrote:
> > > Unlike mirror based profiles, RAID5/6 recovery needs to read out the
> > > whole full stripe.
> > >
> > > And if we don't do
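
For context on why this is costly: a full stripe spans every data
stripe, so the recovery read grows with the device count. Illustrative
arithmetic only (64 KiB is the usual btrfs stripe length; the device
counts are examples):

#include <stdio.h>

#define STRIPE_LEN_KIB 64

int main(void)
{
	int num_devices = 4;
	int nr_parity = 1;	/* RAID5; 2 for RAID6 */
	int nr_data = num_devices - nr_parity;

	/* Recovery reads the whole full stripe, not just one mirror. */
	printf("full stripe = %d data stripes x %d KiB = %d KiB\n",
	       nr_data, STRIPE_LEN_KIB, nr_data * STRIPE_LEN_KIB);
	return 0;
}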
On Wed, 29 Mar 2017 16:27:30 -0500, Tim Cuthbertson wrote:
> I have recently switched from multiple partitions with multiple
> btrfs's to a flat layout. I will try to keep my question concise.
>
> I am confused as to whether a snapshots container should be a normal
> directory or a mountable subvolume
Well, now I am curious. Until we hear back from Christiane on the
progress of the never-ending file system shrinkage, I suppose it can't
hurt to ask what the significance of the xargs size limits of btrfs
might be. Or, again, if Christiane is already happily on his way to
an xfs server running ov
And when turning on nossd, drop ssd_spread.
Reported-by: Hans van Kranenburg
Signed-off-by: Adam Borowski
---
On Fri, Mar 31, 2017 at 07:10:16PM +0200, David Sterba wrote:
> On Fri, Mar 31, 2017 at 06:00:08PM +0200, Hans van Kranenburg wrote:
> > On 03/31/2017 05:19 PM, Adam Borowski wrote:
> >
On 03/31/2017 10:08 PM, Adam Borowski wrote:
> And when turning on nossd, drop ssd_spread.
>
> Reported-by: Hans van Kranenburg
> Signed-off-by: Adam Borowski
> ---
> On Fri, Mar 31, 2017 at 07:10:16PM +0200, David Sterba wrote:
>> On Fri, Mar 31, 2017 at 06:00:08PM +0200, Hans van Kranenburg wr
> [ ... ] what the significance of the xargs size limits of
> btrfs might be. [ ... ] So what does it mean that btrfs has a
> higher xargs size limit than other file systems? [ ... ] Or
> does the lower capacity for argument length for hfsplus
> demonstrate it is the superior file system for avoiding
On Fri, Mar 31, 2017 at 10:24:57PM +0200, Hans van Kranenburg wrote:
> >>> How did you test this?
> >>>
> >>> This was also my first thought, but here's a weird thing:
> >>>
> >>> -# mount -o nossd /dev/sdx /mnt/btrfs/
> >>>
> >>> BTRFS info (device sdx): not using ssd allocation scheme
> >>>
> >>>
On 03/31/2017 10:43 PM, Adam Borowski wrote:
> On Fri, Mar 31, 2017 at 10:24:57PM +0200, Hans van Kranenburg wrote:
>>
>> Yes, but we're not doing the same thing here.
>>
>> You have a file via a loop mount. If I do that, I get the same output as
>> you show, the right messages when I remount ssd a
Hi Linus,
We have 3 small fixes queued up in my for-linus-4.11 branch:
git://git.kernel.org/pub/scm/linux/kernel/git/mason/linux-btrfs.git
for-linus-4.11
Goldwyn Rodrigues (1) commits (+7/-7):
btrfs: Change qgroup_meta_rsv to 64bit
Dan Carpenter (1) commits (+6/-1):
Btrfs: fix an integ
It is confusing, and now that I look at it, more than a little funny.
Your use of xargs returns the size of the kernel module for each of
the filesystem types. I think I get it now: you are pointing to how
large the kernel module for btrfs is compared to other file system
kernel modules, 833 megs
Marat Khalili posted on Fri, 31 Mar 2017 15:28:20 +0300 as excerpted:
>> and that if you try the same thing with one of the filesystems being
>> for instance ext4, you'll see the same problem there as well
> Not sure if it's possible to reproduce the problem with ext4, since it's
> not possible t
GWB posted on Fri, 31 Mar 2017 19:02:40 -0500 as excerpted:
> It is confusing, and now that I look at it, more than a little funny.
> Your use of xargs returns the size of the kernel module for each of the
> filesystem types. I think I get it now: you are pointing to how large
> the kernel module
Indeed, that does make sense. It's the output of the size command in
the Berkeley format of "text", not decimal, octal or hex. Out of
curiosity about kernel module sizes, I dug up some old MacBooks and
looked around in:
/System/Library/Extensions/[modulename].kext/Contents/MacOS:
udf is 637K on
I've run into a frustrating problem with a btrfs volume just now. I
have a USB drive which has many partitions, two of which are luks
encrypted, which can be unlocked as a single, multi-device btrfs
volume. For some reason the drive logically disconnected at the USB
protocol level, but not physically
We are working on a small NAS server for home users. The product is
equipped with a small fast SSD (around 60-120 GB) and a large HDD
(2 TB to 4 TB).
We have two choices:
1. using bcache to accelerate io operation
2. combining SSD and HDD into a single btrfs volume.
Bcache is certainly designed for ou