On Monday, 24.07.2017 at 10:25 -0400, Austin S. Hemmelgarn wrote:
> On 2017-07-24 10:12, Cloud Admin wrote:
> > On Monday, 24.07.2017 at 09:46 -0400, Austin S. Hemmelgarn wrote:
> > > On 2017-07-24 07:27, Cloud Admin wrote:
> > > > Hi,
> > > > I have a multi-device pool (three discs) as
On 07/24/2017 04:25 PM, David Sterba wrote:
> On Fri, Jul 21, 2017 at 01:47:11PM +0200, Hans van Kranenburg wrote:
>> [...]
>>
>> So what now...?
>>
>> The changes in here do the following:
>>
>> 1. Throw out the current ssd_spread behaviour.
>> 2. Move the current ssd behaviour to the
Hi Nick,
[auto build test WARNING on linus/master]
[also build test WARNING on v4.13-rc2 next-20170724]
[if your patch is applied to the wrong git tree, please drop us a note to help
improve the system]
url:
https://github.com/0day-ci/linux/commits/Nick-Terrell/Add-xxhash-and-zstd-modules
On Mon, 24 Jul 2017 09:46:34 -0400
"Austin S. Hemmelgarn" wrote:
> > I am a little bit confused because the balance command has been running
> > for 12 hours and only 3GB of data have been touched. This would mean the
> > whole balance process (new disc has 8TB) would run a long,
On Fri, Jul 21, 2017 at 11:00:27PM +0200, Adam Borowski wrote:
> On Fri, Jul 21, 2017 at 11:37:49PM +0500, Roman Mamedov wrote:
> > On Fri, 21 Jul 2017 13:00:56 +0800
> > Anand Jain wrote:
> > > On 07/18/2017 02:30 AM, David Sterba wrote:
> > > > This must stay 'return 1',
On Mon, Jul 24, 2017 at 02:37:06PM +0300, Timofey Titovets wrote:
> Get a small sample from the input data
> and calculate the byte type count for that sample
>
> Signed-off-by: Timofey Titovets
> ---
> fs/btrfs/compression.c | 24 ++--
> fs/btrfs/compression.h |
E.g. files that are already compressed would increase the CPU consumption
with compress-force, while they'd hopefully be detected as
incompressible with 'compress' and clever heuristics. So the NOCOMPRESS
bit would better reflect the status of the file.
The current NOCOMPRESS is based on trial
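The trial-based approach mentioned above can be sketched in userspace Python. This is only an illustration of the idea, not btrfs code; the function name, sample size, and threshold are all made up for the example:

```python
import zlib

def looks_incompressible(data: bytes, sample_size: int = 4096,
                         min_saving: float = 0.05) -> bool:
    """Trial compression: compress a small sample and check whether it
    shrinks by at least min_saving. This mimics the idea behind the
    NOCOMPRESS flag: if a trial gains too little, skip compressing the
    file in the future."""
    sample = data[:sample_size]
    if not sample:
        return False
    compressed = zlib.compress(sample, 3)
    return len(compressed) > len(sample) * (1.0 - min_saving)
```

Already-compressed or random data fails the trial and would get the NOCOMPRESS bit; repetitive data passes easily.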
On 2017-07-24 07:27, Cloud Admin wrote:
Hi,
I have a multi-device pool (three discs) as RAID1. Now I want to add a
new disc to increase the pool. I followed the description on
https://btrfs.wiki.kernel.org/index.php/Using_Btrfs_with_Multiple_Devices and
used 'btrfs add '. After that I called a
On Mon, Jul 24, 2017 at 11:26:49AM +0300, Nikolay Borisov wrote:
>
>
> On 21.07.2017 20:29, jo...@toxicpanda.com wrote:
> > From: Josef Bacik
> >
> > Readdir does dir_emit while under the btree lock. dir_emit can trigger
> > the page fault which means we can deadlock. Fix this
On Wed, Jul 19, 2017 at 11:25:51PM -0400, Jeff Mahoney wrote:
> If we have a block group that is all of the following:
> 1) uncached in memory
> 2) is read-only
> 3) has a disk cache state that indicates we need to recreate the cache
>
> AND the file system has enough free space fragmentation
On Fri, Jul 21, 2017 at 01:47:11PM +0200, Hans van Kranenburg wrote:
> In the first year of btrfs development, around early 2008, btrfs
> gained a mount option which enables specific functionality for
filesystems on solid state devices. The first occurrence of this
> functionality is in
On 2017-07-24 10:12, Cloud Admin wrote:
On Monday, 24.07.2017 at 09:46 -0400, Austin S. Hemmelgarn wrote:
On 2017-07-24 07:27, Cloud Admin wrote:
Hi,
I have a multi-device pool (three discs) as RAID1. Now I want to
add a
new disc to increase the pool. I followed the description on https:
On Mon, Jul 24, 2017 at 10:02:30AM -0400, Josef Bacik wrote:
> On Mon, Jul 24, 2017 at 02:42:29PM +0200, David Sterba wrote:
> > On Fri, Jul 21, 2017 at 01:29:07PM -0400, jo...@toxicpanda.com wrote:
> > > From: Josef Bacik
> > >
> > > We need to use file->private_data for readdir
On Mon, Jul 24, 2017 at 02:37:05PM +0300, Timofey Titovets wrote:
> Based on kdave for-next, as the heuristic skeleton is already merged.
> Populate the heuristic with basic code that:
> 1. Collect a sample from the input data
> 2. Calculate the byte set for the sample
>    (to detect easily compressible data)
> 3.
On 2017-07-22 07:35, Adam Borowski wrote:
On Fri, Jul 21, 2017 at 11:56:21AM -0400, Austin S. Hemmelgarn wrote:
On 2017-07-20 17:27, Nick Terrell wrote:
This patch set adds xxhash, zstd compression, and zstd decompression
modules. It also adds zstd support to Btrfs and SquashFS.
Each patch
On Mon, Jul 24, 2017 at 03:14:08PM +0200, David Sterba wrote:
> On Mon, Jul 24, 2017 at 02:50:50PM +0200, David Sterba wrote:
> > On Fri, Jul 21, 2017 at 01:29:08PM -0400, jo...@toxicpanda.com wrote:
> > > From: Josef Bacik
> > >
> > > Readdir does dir_emit while under the btree
On 07/24/2017 03:06 PM, Austin S. Hemmelgarn wrote:
On 2017-07-24 14:53, Chris Mason wrote:
On 07/24/2017 02:41 PM, David Sterba wrote:
would it be ok for you to keep ssd_working as before?
I'd really like to get this patch merged soon because "do not use ssd
mode for ssd" has started to be
I accidentally ran into this problem (it's pretty silly because I
almost never run RC kernels or do dio writes but somehow I just
happened to do both at once, exactly before I read your patch notes).
I didn't initially catch any issues (I see no related messages in the
kernel log) but after seeing
On Mon, Jul 24, 2017 at 02:42:29PM +0200, David Sterba wrote:
> On Fri, Jul 21, 2017 at 01:29:07PM -0400, jo...@toxicpanda.com wrote:
> > From: Josef Bacik
> >
> > We need to use file->private_data for readdir on directories, so just
> > don't allow user space transactions on
On 07/24/2017 02:41 PM, David Sterba wrote:
On Mon, Jul 24, 2017 at 02:01:07PM -0400, Chris Mason wrote:
On 07/24/2017 10:25 AM, David Sterba wrote:
Thanks for the extensive historical summary, this change really deserves
it.
Decoupling the assumptions about the device's block management
On 2017-07-24 14:53, Chris Mason wrote:
On 07/24/2017 02:41 PM, David Sterba wrote:
would it be ok for you to keep ssd_working as before?
I'd really like to get this patch merged soon because "do not use ssd
mode for ssd" has started to be the recommended workaround. Once this
sticks, we
On Monday, 24.07.2017 at 09:46 -0400, Austin S. Hemmelgarn wrote:
> On 2017-07-24 07:27, Cloud Admin wrote:
> > Hi,
> > I have a multi-device pool (three discs) as RAID1. Now I want to
> > add a
> > new disc to increase the pool. I followed the description on https://bt
> >
From: Josef Bacik
Our dir_context->pos is supposed to hold the next position we're
supposed to look at. If we successfully insert a delayed dir index we
could end up with a duplicate entry because we don't increase ctx->pos
after doing the dir_emit.
Signed-off-by: Josef Bacik
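The duplicate-entry bug described above can be sketched in userspace Python. The class and function names are hypothetical stand-ins, not kernel code; the point is only that the position must advance past each emitted entry so a restarted readdir does not re-emit it:

```python
class DirContext:
    """Minimal stand-in for dir_context: pos is the next index to read."""
    def __init__(self):
        self.pos = 0
        self.emitted = []

    def dir_emit(self, name):
        self.emitted.append(name)
        return True

def readdir_step(ctx, entries):
    """Emit all entries with index >= ctx.pos, advancing pos after each
    emit. Without the pos update, a restarted readdir would start at the
    same index again and emit a duplicate -- the bug being fixed."""
    for index in sorted(entries):
        if index < ctx.pos:
            continue
        if not ctx.dir_emit(entries[index]):
            break
        ctx.pos = index + 1  # the fix: advance past what we just emitted
```

Calling readdir_step twice on the same context emits each entry exactly once.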
From: Josef Bacik
Readdir does dir_emit while under the btree lock. dir_emit can trigger
the page fault which means we can deadlock. Fix this by allocating a
buffer on opening a directory and copying the readdir into this buffer
and doing dir_emit from outside of the tree lock.
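The locking pattern described above can be sketched in userspace Python (illustrative names only, not the kernel implementation): snapshot the entries while holding the lock, then emit from the private buffer after the lock is released, so a page fault during emit cannot deadlock against the lock:

```python
import threading

class Directory:
    """Sketch of the fix: copy entries into a private buffer while
    holding the tree lock, then emit with the lock dropped."""
    def __init__(self, entries):
        self._entries = list(entries)
        self._tree_lock = threading.Lock()

    def readdir(self, emit):
        with self._tree_lock:               # lock held only for the copy
            buffered = list(self._entries)  # private readdir buffer
        results = []
        for name in buffered:               # emit outside the lock
            if not emit(name, results):
                break
        return results

def collect(name, out):
    """Trivial emit callback that just accumulates names."""
    out.append(name)
    return True
```

Even if the emit callback blocks (e.g. faulting in a userspace page), the tree lock is no longer held at that point.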
On Wed, Jul 12, 2017 at 04:49:48PM +0800, Lu Fengqi wrote:
> From: Wang Xiaoguang
>
> This issue was revealed by modifying BTRFS_MAX_EXTENT_SIZE(128MB) to 64KB,
> When modifying BTRFS_MAX_EXTENT_SIZE(128MB) to 64KB, fsstress test often
> gets these warnings from
Chris Murphy posted on Sat, 22 Jul 2017 14:35:25 -0600 as excerpted:
> If we go back even further in time, what I'm trying to avoid is the
> problem with DE's where the user connects a two device Btrfs, and then
> they want to eject it. The DE is already confused because behind the
> scenes it
On Mon, Jul 24, 2017 at 02:35:05PM -0600, Chris Murphy wrote:
> On Mon, Jul 24, 2017 at 5:27 AM, Cloud Admin
> wrote:
>
> > I am a little bit confused because the balance command has been running
> > for 12 hours and only 3GB of data have been touched.
>
> That's incredibly
On Thu, Jul 13, 2017 at 8:24 PM, Sargun Dhillon wrote:
> We've been running Btrfs with Docker at appreciable scale for a few
> months now (100-200k containers / day ).
Is this on a single Btrfs file system? Or is it distributed among
multiple Btrfs file systems?
I'm curious
Hi Chris,
On 07/24/2017 08:53 PM, Chris Mason wrote:
> On 07/24/2017 02:41 PM, David Sterba wrote:
>> On Mon, Jul 24, 2017 at 02:01:07PM -0400, Chris Mason wrote:
>>> On 07/24/2017 10:25 AM, David Sterba wrote:
>>>
Thanks for the extensive historical summary, this change really
deserves
On Mon, Jul 24, 2017 at 3:12 PM, waxhead wrote:
>
>
> Chris Murphy wrote:
>>
>> On Mon, Jul 24, 2017 at 5:27 AM, Cloud Admin
>> wrote:
>>
>>> I am a little bit confused because the balance command has been running
>>> for 12 hours and only 3GB of
On Mon, Jul 24, 2017 at 5:27 AM, Cloud Admin wrote:
> I am a little bit confused because the balance command has been running
> for 12 hours and only 3GB of data have been touched.
That's incredibly slow. Something isn't right.
Using btrfs-debug -b from btrfs-progs, I've
On Mon, Jul 24, 2017 at 2:42 PM, Hugo Mills wrote:
>
>In my experience, it's pretty consistent at about a minute per 1
> GiB for data on rotational drives on RAID-1. For metadata, it can go
> up to several hours (or more) per 256 MiB chunk, depending on what
> kind of
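The rate quoted above (roughly a minute per GiB of data on rotational RAID-1) makes it easy to estimate a full balance. A back-of-the-envelope sketch, treating the 8TB figure as 8 TiB of data for simplicity:

```python
def balance_estimate_hours(data_tib: float,
                           minutes_per_gib: float = 1.0) -> float:
    """Rough balance duration: convert TiB to GiB, multiply by the
    per-GiB rate, and return hours. Metadata-heavy filesystems can be
    far slower, so this is a lower bound, not a prediction."""
    gib = data_tib * 1024
    return gib * minutes_per_gib / 60.0

# 8 TiB at ~1 min/GiB is about 136 hours, i.e. 5-6 days -- which is why
# only 3 GB touched after 12 hours is still far off any expected rate.
```

The estimate only covers data chunks; per the thread, metadata chunks can take hours each.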
On Mon, Jul 24, 2017 at 02:55:00PM -0600, Chris Murphy wrote:
> Egads.
>
> Maybe Cloud Admin ought to consider using a filter to just balance the
> data chunks across the three devices, and just leave the metadata on
> the original two disks?
Balancing when adding a new disk isn't that important
On Mon, Jul 24, 2017 at 02:55:00PM -0600, Chris Murphy wrote:
> On Mon, Jul 24, 2017 at 2:42 PM, Hugo Mills wrote:
>
> >
> >In my experience, it's pretty consistent at about a minute per 1
> > GiB for data on rotational drives on RAID-1. For metadata, it can go
> > up to
Chris Murphy wrote:
On Mon, Jul 24, 2017 at 5:27 AM, Cloud Admin wrote:
I am a little bit confused because the balance command has been running for
12 hours and only 3GB of data have been touched.
That's incredibly slow. Something isn't right.
Using btrfs-debug -b from
On 07/24/2017 07:52 PM, David Sterba wrote:
> On Mon, Jul 24, 2017 at 07:22:03PM +0200, Hans van Kranenburg wrote:
>> On 07/24/2017 04:25 PM, David Sterba wrote:
>>> On Fri, Jul 21, 2017 at 01:47:11PM +0200, Hans van Kranenburg wrote:
[...]
So what now...?
The changes
Preliminary support for setting the compression level for zlib; the
following works:
$ mount -o compress=zlib    # default
$ mount -o compress=zlib0   # same
$ mount -o compress=zlib9   # level 9, slower sync, less data
$ mount -o compress=zlib1   #
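A level-suffixed option value like 'zlib9' can be split into algorithm and level as in this sketch (illustrative only, not the kernel's mount-option parser; the default level of 3 is an assumption for the example):

```python
def parse_compress_opt(value: str, default_level: int = 3):
    """Parse a 'compress=' value of the form 'zlib' or 'zlibN': a
    trailing number selects the level, while a bare name or an explicit
    0 falls back to the default. Returns (algorithm, level)."""
    algo = value.rstrip("0123456789")   # strip the numeric suffix
    suffix = value[len(algo):]
    level = int(suffix) if suffix else 0
    if level == 0:                      # 'zlib' and 'zlib0' mean default
        level = default_level
    return algo, level
```

This mirrors the semantics shown above: 'zlib' and 'zlib0' behave the same, while 'zlib9' requests maximum compression.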
On Mon, Jul 24, 2017 at 07:22:03PM +0200, Hans van Kranenburg wrote:
> On 07/24/2017 04:25 PM, David Sterba wrote:
> > On Fri, Jul 21, 2017 at 01:47:11PM +0200, Hans van Kranenburg wrote:
> >> [...]
> >>
> >> So what now...?
> >>
> >> The changes in here do the following:
> >>
> >> 1. Throw
On Monday, 24.07.2017 at 19:08 +0500, Roman Mamedov wrote:
> On Mon, 24 Jul 2017 09:46:34 -0400
> "Austin S. Hemmelgarn" wrote:
>
> > > I am a little bit confused because the balance command has been
> > > running for 12 hours and only 3GB of data have been touched. This
Signed-off-by: David Sterba
---
fs/btrfs/ctree.h | 6 +++---
fs/btrfs/disk-io.c | 2 +-
fs/btrfs/extent-tree.c | 22 +++---
fs/btrfs/file.c | 10 +-
fs/btrfs/inode.c | 22 +++---
fs/btrfs/ioctl.c | 10
David Sterba (4):
btrfs: fix spelling of snapshotting
btrfs: drop ancient page flag mappings
btrfs: remove trivial wrapper btrfs_force_ra
btrfs: drop chunk locks at the end of close_ctree
fs/btrfs/ctree.h | 21 +++--
fs/btrfs/disk-io.c | 4 +---
On Mon, Jul 24, 2017 at 02:01:07PM -0400, Chris Mason wrote:
> On 07/24/2017 10:25 AM, David Sterba wrote:
>
> > Thanks for the extensive historical summary, this change really deserves
> > it.
> >
> > Decoupling the assumptions about the device's block management is really
> > a good thing,
It's a simple call page_cache_sync_readahead, same arguments in the same
order.
Signed-off-by: David Sterba
---
fs/btrfs/ctree.h | 8
fs/btrfs/ioctl.c | 4 ++--
fs/btrfs/send.c | 2 +-
3 files changed, 3 insertions(+), 11 deletions(-)
diff --git a/fs/btrfs/ctree.h
The pinned chunks might be left over, so we clean them up, but at this
point in close_ctree there's no one to race with, so the locking can be
removed.
Signed-off-by: David Sterba
---
fs/btrfs/disk-io.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/fs/btrfs/disk-io.c
On 07/24/2017 10:25 AM, David Sterba wrote:
Thanks for the extensive historical summary, this change really deserves
it.
Decoupling the assumptions about the device's block management is really
a good thing, mount option 'ssd' should mean that the device just has
cheap seeks. Moving the
There's no PageFsMisc. Added by patch 4881ee5a2e995 in 2008, the flag is
not present in current kernels.
Signed-off-by: David Sterba
---
fs/btrfs/ctree.h | 7 ---
1 file changed, 7 deletions(-)
diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
index
>> This may be a stupid question , but are your pool of butter (or BTRFS pool)
>> by any chance hooked up via USB? If this is USB 2.0 at 480 Mbit/s then it is
>> about 57MB/s / 4 drives = roughly 14.25 MB/s, or about 11MB/s if you shave off
>> some overhead.
>
>Nope, USB 3. Typically on scrubs I get
On 2017-07-25 04:00, Josef Bacik wrote:
On Wed, Jul 12, 2017 at 04:49:48PM +0800, Lu Fengqi wrote:
From: Wang Xiaoguang
This issue was revealed by modifying BTRFS_MAX_EXTENT_SIZE (128MB) to 64KB.
With that modification, the fsstress test
Make the check for mixed block groups early.
Reason:
We do not support re-initing the extent tree for mixed block groups,
so reinit_extent_tree will return -EINVAL.
In this situation, we do not need to start a transaction.
We do not have a btrfs_abort_transaction like the kernel does,
so we
On Mon, Jul 24, 2017 at 3:17 PM, Adam Borowski wrote:
> On Mon, Jul 24, 2017 at 02:55:00PM -0600, Chris Murphy wrote:
>> Egads.
>>
>> Maybe Cloud Admin ought to consider using a filter to just balance the
>> data chunks across the three devices, and just leave the metadata on
On 21.07.2017 20:29, jo...@toxicpanda.com wrote:
> From: Josef Bacik
>
> Readdir does dir_emit while under the btree lock. dir_emit can trigger
> the page fault which means we can deadlock. Fix this by allocating a
> buffer on opening a directory and copying the readdir into
On Fri, Jul 21, 2017 at 01:29:07PM -0400, jo...@toxicpanda.com wrote:
> From: Josef Bacik
>
> We need to use file->private_data for readdir on directories, so just
> don't allow user space transactions on directories.
>
> Signed-off-by: Josef Bacik
> ---
>
On 2017-07-21 19:21, Hans van Kranenburg wrote:
> On 07/21/2017 05:50 PM, Austin S. Hemmelgarn wrote:
>> On 2017-07-21 07:47, Hans van Kranenburg wrote:
>>> [...]
>>>
>>> Signed-off-by: Hans van Kranenburg
>> Behaves as advertised, and I'm not seeing any issues in
Hi,
I have a multi-device pool (three discs) as RAID1. Now I want to add a
new disc to increase the pool. I followed the description on
https://btrfs.wiki.kernel.org/index.php/Using_Btrfs_with_Multiple_Devices and
used 'btrfs add '. After that I called a balance
for rebalancing the RAID1 using
Calculate the byte core set for a data sample.
A low core set means the data is easily compressible;
a high core set means the data is not compressible.
Signed-off-by: Timofey Titovets
---
fs/btrfs/compression.c | 60 ++
fs/btrfs/compression.h |
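The core-set idea above can be sketched in userspace Python (an illustration of the concept, not the patch's code; the 90% coverage figure is an assumption for the example):

```python
from collections import Counter

def byte_core_set_size(sample: bytes, coverage: float = 0.9) -> int:
    """Size of the smallest set of byte values covering `coverage` of
    the sample: count occurrences, sort by frequency, and take values
    until the coverage threshold is reached. A small core set suggests
    easily compressible data; a large one suggests the opposite."""
    if not sample:
        return 0
    counts = sorted(Counter(sample).values(), reverse=True)
    threshold = len(sample) * coverage
    covered = 0
    core = 0
    for c in counts:
        covered += c
        core += 1
        if covered >= threshold:
            break
    return core
```

Uniform data yields a core set of 1; data using all 256 byte values evenly yields a core set near 256.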
Calculate the byte set size for a data sample.
If the byte set is small, the data is easily compressible.
Signed-off-by: Timofey Titovets
---
fs/btrfs/compression.c | 27 +++
fs/btrfs/compression.h | 1 +
2 files changed, 28 insertions(+)
diff --git
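The byte-set measure above is simply the count of distinct byte values in the sample; a one-line sketch (illustrative, not the patch's implementation):

```python
def byte_set_size(sample: bytes) -> int:
    """Number of distinct byte values in the sample. Text and other
    structured data typically use a small subset of the 256 possible
    values, so a small byte set hints the data compresses well."""
    return len(set(sample))
```

For example, ASCII text stays well under 128 distinct values, while compressed or encrypted data approaches 256.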
Get a small sample from the input data
and calculate the byte type count for that sample.
Signed-off-by: Timofey Titovets
---
fs/btrfs/compression.c | 24 ++--
fs/btrfs/compression.h | 11 +++
2 files changed, 33 insertions(+), 2 deletions(-)
diff --git
Based on kdave for-next, as the heuristic skeleton is already merged.
Populate the heuristic with basic code that:
1. Collect a sample from the input data
2. Calculate the byte set for the sample
   (to detect easily compressible data)
3. Calculate the byte core set size
   (to detect easily compressible and incompressible data)
Timofey
On Fri, Jul 21, 2017 at 01:29:08PM -0400, jo...@toxicpanda.com wrote:
> From: Josef Bacik
>
> Readdir does dir_emit while under the btree lock. dir_emit can trigger
> the page fault which means we can deadlock. Fix this by allocating a
> buffer on opening a directory and copying
On Mon, Jul 24, 2017 at 02:42:29PM +0200, David Sterba wrote:
> On Fri, Jul 21, 2017 at 01:29:07PM -0400, jo...@toxicpanda.com wrote:
> > From: Josef Bacik
> >
> > We need to use file->private_data for readdir on directories, so just
> > don't allow user space transactions on
On Mon, Jul 24, 2017 at 02:50:50PM +0200, David Sterba wrote:
> On Fri, Jul 21, 2017 at 01:29:08PM -0400, jo...@toxicpanda.com wrote:
> > From: Josef Bacik
> >
> > Readdir does dir_emit while under the btree lock. dir_emit can trigger
> > the page fault which means we can
On Fri, Jul 21, 2017 at 11:28:24AM +0300, Nikolay Borisov wrote:
> Further testing showed that the fix introduced in 7dfb8be11b5d
> ("btrfs: Round down values which are written for total_bytes_size") was
> insufficient and it could still lead to discrepancies between the total_bytes
> in
> the