Austin S. Hemmelgarn posted on Fri, 01 Sep 2017 10:07:47 -0400 as
excerpted:
> On 2017-09-01 09:54, Qu Wenruo wrote:
>>
>> On 2017-09-01 20:47, Austin S. Hemmelgarn wrote:
>>> On 2017-09-01 08:19, Qu Wenruo wrote:
Current kernel (and btrfs-progs also tries to follow kernel chunk
If we're still going to wait after schedule(), we don't have to call
finish_wait() to remove our %wait_queue_entry, since prepare_to_wait()
won't add the same %wait_queue_entry twice.
Signed-off-by: Liu Bo
---
fs/btrfs/ioctl.c | 2 +-
1 file changed, 1 insertion(+), 1
The block layer has a limit on plugging, i.e. BLK_MAX_REQUEST_COUNT == 16, so
we gain no benefit from batching 64 bios here.
Signed-off-by: Liu Bo
---
fs/btrfs/volumes.c | 6 --
1 file changed, 6 deletions(-)
diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
index
Since TASK_UNINTERRUPTIBLE is already used here, wait_event() can do the
same job.
Signed-off-by: Liu Bo
---
fs/btrfs/ioctl.c | 21 +++--
1 file changed, 3 insertions(+), 18 deletions(-)
diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
index 19e4dec..1c8bdde
Both wait_for_commit() and wait_for_writer() check the
condition outside the mutex lock.
Refactor the code a bit to make it lock-safe.
Signed-off-by: Liu Bo
---
fs/btrfs/tree-log.c | 30 --
1 file changed, 16 insertions(+), 14 deletions(-)
wake_up() checks whether anyone is on the wait list while holding the
spin_lock().
In some btrfs code paths we don't check waitqueue_active() first, so the
spin_lock()/spin_unlock() pair in wake_up() runs even when no one is waiting
on the queue.
There are more wake_up()s without
You'll be fine, it's only happening on the one fs, right? That's 13GiB of
metadata with checksums and all that, so it'll probably look like 8 or 9GiB
of RAM worst case. I'd mount with -o ref_verify and check the slab amount in
/proc/meminfo to get an idea of real usage. Once the mount is
On Fri, Sep 1, 2017 at 11:20 AM, Austin S. Hemmelgarn
wrote:
> No, that's not what I'm talking about. You always get one bcache device per
> backing device, but multiple bcache devices can use the same physical cache
> device (that is, backing devices map 1:1 to bcache
On Thu, Aug 31, 2017 at 05:48:23PM +, Josef Bacik wrote:
> We are using 4.11 in production at fb with backports from recent (a month
> ago?) stuff. I’m relatively certain nothing bad will happen, and this branch
> has the most recent fsync() corruption fix (which exists in your kernel so
>
On Fri, Sep 1, 2017 at 7:38 AM, Eric Wolf <19w...@gmail.com> wrote:
> Okay,
> I have a hex editor open. Now what? Your instructions seem
> straightforward, but I have no idea what I'm doing.
First step: back up as much as you can, because if you don't know what
you're doing, there's a good chance you'll make
On Fri, Aug 25, 2017 at 09:34:49AM +0900, Misono, Tomohiro wrote:
> On 2017/08/25 2:37, David Sterba wrote:
> > On Thu, Aug 24, 2017 at 04:39:53PM +0900, Misono, Tomohiro wrote:
> >> "btrfs inspect-internal rootid " rejects specifying a file in
> >> the implementation.
> >> Therefore change
On Tue, Jul 25, 2017 at 09:57:44AM +0800, Gu Jinxiang wrote:
> Perform the check for mixed block groups early.
> Reason:
> We do not support re-initing the extent tree for mixed block groups,
> so it will return -EINVAL in reinit_extent_tree().
> In this situation, we do not need to start
On Fri, Aug 25, 2017 at 06:17:23PM +0300, Nikolay Borisov wrote:
>
>
> On 25.08.2017 18:11, jo...@toxicpanda.com wrote:
> > From: Josef Bacik
> >
> > While looking at a log of a corrupted fs I needed to verify we were
> > missing csums for a given range. Make this easier by
I rolled back my filesystem with 'snapper rollback 81 (or whatever
snapshot it was)' and now when I boot my filesystem is read-only. How
do I fix it?
On 2017-09-01 11:00, Juan Orti Alcaine wrote:
On 1 Sep 2017 15:59, "Austin S. Hemmelgarn" wrote:
If you are going to use bcache, you don't need separate caches for
each device (and in fact, you're probably better off sharing
On Fri, Aug 25, 2017 at 04:48:43PM +0300, Nikolay Borisov wrote:
>
>
> On 25.08.2017 16:13, jo...@toxicpanda.com wrote:
> > From: Josef Bacik
> >
> > While looking at a log of a corrupted fs I needed to verify we were
> > missing csums for a given range. Make this easier by
On Fri, Aug 18, 2017 at 09:04:19AM +0200, Goffredo Baroncelli wrote:
> Hi All,
>
> Piotr and Chris, pointed me at these bugs which could be triggered by this
> test case:
>
> # btrfs sub create test1
> # btrfs sub create test1/test2
> # btrfs sub snap test1 test1.snap
> # btrfs fi du -s test1
>
On 09/01/2017 07:15 AM, Qu Wenruo wrote:
>
>
> On 2017-09-01 11:36, Anthony Riley wrote:
>> Hey folks,
>>
>> I thought I would finally take a swing at this; I've wanted to be a
>> kernel/fs dev for a few years now. My current $job is as an
>> Infrastructure Engineer. I'm currently teaching
On 2017-09-01 09:54, Qu Wenruo wrote:
On 2017-09-01 20:47, Austin S. Hemmelgarn wrote:
On 2017-09-01 08:19, Qu Wenruo wrote:
On 2017-09-01 20:05, Austin S. Hemmelgarn wrote:
On 2017-09-01 07:49, Qu Wenruo wrote:
On 2017-09-01 19:28, Austin S. Hemmelgarn wrote:
On 2017-08-31 20:13,
On 2017-09-01 09:52, Juan Orti Alcaine wrote:
2017-08-31 13:36 GMT+02:00 Roman Mamedov :
If you could implement SSD caching in front of your FS (such as lvmcache or
bcache), that would work wonders for performance in general, and especially
for mount times. I have seen amazing
On 2017-09-01 20:47, Austin S. Hemmelgarn wrote:
On 2017-09-01 08:19, Qu Wenruo wrote:
On 2017-09-01 20:05, Austin S. Hemmelgarn wrote:
On 2017-09-01 07:49, Qu Wenruo wrote:
On 2017-09-01 19:28, Austin S. Hemmelgarn wrote:
On 2017-08-31 20:13, Qu Wenruo wrote:
On 2017-09-01 01:27,
2017-08-31 13:36 GMT+02:00 Roman Mamedov :
> If you could implement SSD caching in front of your FS (such as lvmcache or
> bcache), that would work wonders for performance in general, and especially
> for mount times. I have seen amazing results with lvmcache (of just 32 GB) for
On Thu, Aug 31, 2017 at 4:11 PM, Hugo Mills wrote:
> On Thu, Aug 31, 2017 at 03:21:07PM -0400, Eric Wolf wrote:
>> I've previously confirmed it's a bad ram module which I have already
>> submitted an RMA for. Any advice for manually fixing the bits?
>
> What I'd do... use a
Okay,
I have a hex editor open. Now what? Your instructions seem
straightforward, but I have no idea what I'm doing.
---
Eric Wolf
(201) 316-6098
19w...@gmail.com
On Thu, Aug 31, 2017 at 4:11 PM, Hugo Mills wrote:
> On Thu, Aug 31, 2017 at 03:21:07PM -0400, Eric Wolf wrote:
On 2017-09-01 08:19, Qu Wenruo wrote:
On 2017-09-01 20:05, Austin S. Hemmelgarn wrote:
On 2017-09-01 07:49, Qu Wenruo wrote:
On 2017-09-01 19:28, Austin S. Hemmelgarn wrote:
On 2017-08-31 20:13, Qu Wenruo wrote:
On 2017-09-01 01:27, Goffredo Baroncelli wrote:
Hi All,
I found a bug
On Fri, Sep 01, 2017 at 05:58:47PM +0900, Naohiro Aota wrote:
> commit 524272607e88 ("btrfs: Handle delalloc error correctly to avoid
> ordered extent hang") introduced btrfs_cleanup_ordered_extents() to cleanup
> submitted ordered extents. However, it does not clear the ordered bit
> (Private2)
On Fri, Sep 01, 2017 at 05:59:07PM +0900, Naohiro Aota wrote:
> __endio_write_update_ordered() repeats the search until it reaches the end
> of the specified range. This works well with the direct IO path, because before
> the function is called, it's ensured that there are ordered extents filling
>
On 2017-09-01 20:05, Austin S. Hemmelgarn wrote:
On 2017-09-01 07:49, Qu Wenruo wrote:
On 2017-09-01 19:28, Austin S. Hemmelgarn wrote:
On 2017-08-31 20:13, Qu Wenruo wrote:
On 2017-09-01 01:27, Goffredo Baroncelli wrote:
Hi All,
I found a bug in mkfs.btrfs when it is used with the
On 2017-09-01 07:49, Qu Wenruo wrote:
On 2017-09-01 19:28, Austin S. Hemmelgarn wrote:
On 2017-08-31 20:13, Qu Wenruo wrote:
On 2017-09-01 01:27, Goffredo Baroncelli wrote:
Hi All,
I found a bug in mkfs.btrfs when the option '-r' is used. It
seems that it does not see the full
On 2017-09-01 19:28, Austin S. Hemmelgarn wrote:
On 2017-08-31 20:13, Qu Wenruo wrote:
On 2017-09-01 01:27, Goffredo Baroncelli wrote:
Hi All,
I found a bug in mkfs.btrfs when the option '-r' is used. It
seems that it does not see the full disk.
Despite the new bug you found,
On 2017-08-31 16:29, Goffredo Baroncelli wrote:
On 2017-08-31 20:49, Austin S. Hemmelgarn wrote:
On 2017-08-31 13:27, Goffredo Baroncelli wrote:
Hi All,
I found a bug in mkfs.btrfs when the option '-r' is used. It
seems that it does not see the full disk.
$ uname -a
Linux venice.bhome
On 2017-09-01 06:21, ein wrote:
Very comprehensive, thank you. I was asking because I'd like to learn
how truly random writes from a VM affect BTRFS performance (vs. XFS and
ext4) and try to develop a workaround to reduce or prevent the impact
while keeping csums, CoW (snapshots) and compression.
I've
On 2017-09-01 16:59, Naohiro Aota wrote:
__endio_write_update_ordered() repeats the search until it reaches the end
of the specified range. This works well with the direct IO path, because before
the function is called, it's ensured that there are ordered extents filling
the whole range. It's not
On 2017-08-31 20:13, Qu Wenruo wrote:
On 2017-09-01 01:27, Goffredo Baroncelli wrote:
Hi All,
I found a bug in mkfs.btrfs when the option '-r' is used. It seems
that it does not see the full disk.
Despite the new bug you found, -r has several existing bugs.
Is this actually a bug
On 2017-09-01 16:58, Naohiro Aota wrote:
commit 524272607e88 ("btrfs: Handle delalloc error correctly to avoid
ordered extent hang") introduced btrfs_cleanup_ordered_extents() to cleanup
submitted ordered extents. However, it does not clear the ordered bit
(Private2) of corresponding pages.
On 08/31/2017 06:18 PM, Duncan wrote:
[...]
> Michał Sokołowski posted on Thu, 31 Aug 2017 16:38:14 +0200 as excerpted:
>> Is there another tool to verify the number of fragments of a given file
>> when using compression?
> AFAIK there isn't an official one, tho someone posted a script (python,
> IIRC) at
On Fri, Sep 01, 2017 at 01:15:45PM +0800, Qu Wenruo wrote:
> On 2017-09-01 11:36, Anthony Riley wrote:
> >Hey folks,
> >
> >I thought I would finally take a swing at this; I've wanted to be a
> >kernel/fs dev for a few years now. My current $job is as an
> >Infrastructure Engineer. I'm currently
__endio_write_update_ordered() repeats the search until it reaches the end
of the specified range. This works well with the direct IO path, because before
the function is called, it's ensured that there are ordered extents filling
the whole range. It's not the case, however, when it's called from
commit 524272607e88 ("btrfs: Handle delalloc error correctly to avoid
ordered extent hang") introduced btrfs_cleanup_ordered_extents() to cleanup
submitted ordered extents. However, it does not clear the ordered bit
(Private2) of corresponding pages. Thus, the following BUG occurs from
On Fri, Sep 01, 2017 at 10:14:41AM +0300, Amir Goldstein wrote:
> On Fri, Sep 1, 2017 at 10:04 AM, Eryu Guan wrote:
> > On Fri, Sep 01, 2017 at 02:39:44PM +0900, Misono, Tomohiro wrote:
> >> Several tests use both _filter_test_dir and _filter_scratch
> >> concatenated by pipe
On Fri, Sep 1, 2017 at 10:04 AM, Eryu Guan wrote:
> On Fri, Sep 01, 2017 at 02:39:44PM +0900, Misono, Tomohiro wrote:
>> Several tests use both _filter_test_dir and _filter_scratch
>> concatenated by pipe to filter $TEST_DIR and $SCRATCH_MNT. However, this
>> would fail if the
From: Josef Bacik
We were having corruption issues that were tied back to problems with the extent
tree. In order to track them down I built this tool to try and find the
culprit, which was pretty successful. If you compile with this tool enabled,
it will live-verify every ref update
On Fri, Sep 01, 2017 at 02:39:44PM +0900, Misono, Tomohiro wrote:
> Several tests use both _filter_test_dir and _filter_scratch
> concatenated by pipe to filter $TEST_DIR and $SCRATCH_MNT. However, this
> would fail if the shorter string is a substring of the other (like
> "/mnt" and "/mnt2").
>