On Mon, Jan 21, 2019 at 9:04 PM Bart Van Assche wrote:
>
> On 1/21/19 8:45 PM, Ashlie Martinez wrote:
> > I was working on porting parts of a file system crash consistency
> > checking tool called CrashMonkey [1] to linux kernels 4.9 and 4.14
> > when I noticed an inconsistency in how the bio->bi_
On 1/21/19 8:45 PM, Ashlie Martinez wrote:
I was working on porting parts of a file system crash consistency
checking tool called CrashMonkey [1] to linux kernels 4.9 and 4.14
when I noticed an inconsistency in how the bio->bi_opf field is
treated. According to the comments in /include/linux/blk_
Hello,
I was working on porting parts of a file system crash consistency
checking tool called CrashMonkey [1] to linux kernels 4.9 and 4.14
when I noticed an inconsistency in how the bio->bi_opf field is
treated. According to the comments in /include/linux/blk_types.h, the
REQ_OP should be the upp
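For readers following the thread: the op value moved from the high bits of bi_opf in 4.9-era kernels to the low bits in 4.10 and later, which is exactly the kind of inconsistency described above. Below is a minimal sketch of how an interception module can stay portable across both layouts by sticking to the accessors; crashmonkey_is_flush() is an invented name for illustration, not part of the tool.

#include <linux/bio.h>
#include <linux/blk_types.h>

/* Hypothetical helper: classify an intercepted bio without decoding
 * bio->bi_opf by hand, so the same code works on 4.9 and 4.14+. */
static bool crashmonkey_is_flush(struct bio *bio)
{
	/* bio_op() hides where the REQ_OP_* value lives inside bi_opf */
	if (bio_op(bio) == REQ_OP_FLUSH)
		return true;

	/* an ordinary write can still carry flush/FUA semantics as flags */
	return (bio->bi_opf & (REQ_PREFLUSH | REQ_FUA)) != 0;
}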
On Mon, Jan 21, 2019 at 10:35:11PM -0500, Mike Snitzer wrote:
> On Mon, Jan 21 2019 at 10:17pm -0500,
> Mike Snitzer wrote:
>
> > On Mon, Jan 21 2019 at 9:46pm -0500,
> > Ming Lei wrote:
> >
> > > On Mon, Jan 21, 2019 at 11:02:04AM -0500, Mike Snitzer wrote:
> > > > On Sun, Jan 20 2019 at 10:2
On Mon, Jan 21 2019 at 10:17pm -0500,
Mike Snitzer wrote:
> On Mon, Jan 21 2019 at 9:46pm -0500,
> Ming Lei wrote:
>
> > On Mon, Jan 21, 2019 at 11:02:04AM -0500, Mike Snitzer wrote:
> > > On Sun, Jan 20 2019 at 10:21P -0500,
> > > Ming Lei wrote:
> > >
> > > > On Sat, Jan 19, 2019 at 01:05:
On Tue, Jan 22, 2019 at 5:13 AM Florian Stecker wrote:
>
> Hi everyone,
>
> on my laptop, I am experiencing occasional hangs of applications during
> fsync(), which are sometimes up to 30 seconds long. I'm using a BTRFS
> which spans two partitions on the same SSD (one of them used to contain
> a
On Mon, Jan 21 2019 at 9:46pm -0500,
Ming Lei wrote:
> On Mon, Jan 21, 2019 at 11:02:04AM -0500, Mike Snitzer wrote:
> > On Sun, Jan 20 2019 at 10:21P -0500,
> > Ming Lei wrote:
> >
> > > On Sat, Jan 19, 2019 at 01:05:05PM -0500, Mike Snitzer wrote:
> > > > Use the same BIO_QUEUE_ENTERED patte
Hello
On 1/21/19 11:22 PM, Marc Gonzalez wrote:
> Well, now we know for sure that the clk_scaling_lock is a red herring.
> I applied the patch below, and still the system locked up:
>
> # dd if=/dev/sde of=/dev/null bs=1M status=progress
> 3892314112 bytes (3.9 GB, 3.6 GiB) copied, 50.0042 s, 77.
On Mon, Jan 21, 2019 at 11:02:04AM -0500, Mike Snitzer wrote:
> On Sun, Jan 20 2019 at 10:21P -0500,
> Ming Lei wrote:
>
> > On Sat, Jan 19, 2019 at 01:05:05PM -0500, Mike Snitzer wrote:
> > > Use the same BIO_QUEUE_ENTERED pattern that was established by commit
> > > cd4a4ae4683dc ("block: don't
On Mon, Jan 21, 2019 at 01:43:21AM -0800, Sagi Grimberg wrote:
>
> > V14:
> > - drop patch (patch 4 in V13) for renaming bvec helpers, as suggested by
> > Jens
> > - use mp_bvec_* as multi-page bvec helper name
> > - fix one build issue, which is caused by a missing conversion of
> >
On Sat, Jan 19, 2019 at 11:08:27AM +0100, Andrea Righi wrote:
[..]
> Alright, let's skip the root cgroup for now. I think the point here is
> if we want to provide sync() isolation among cgroups or not.
>
> According to the manpage:
>
> sync() causes all pending modifications to file
Hi everyone,
on my laptop, I am experiencing occasional hangs of applications during
fsync(), which are sometimes up to 30 seconds long. I'm using a BTRFS
which spans two partitions on the same SSD (one of them used to contain
a Windows, but I removed it and added the partition to the BTRFS vo
On 2019-01-21 17:23, Jens Axboe wrote:
On 1/21/19 8:58 AM, Roman Penyaev wrote:
On 2019-01-21 16:30, Jens Axboe wrote:
On 1/21/19 2:13 AM, Roman Penyaev wrote:
On 2019-01-18 17:12, Jens Axboe wrote:
[...]
+
+static int io_uring_create(unsigned entries, struct io_uring_params *p,
+
On 1/21/19 8:58 AM, Roman Penyaev wrote:
> On 2019-01-21 16:30, Jens Axboe wrote:
>> On 1/21/19 2:13 AM, Roman Penyaev wrote:
>>> On 2019-01-18 17:12, Jens Axboe wrote:
>>>
>>> [...]
>>>
+
+static int io_uring_create(unsigned entries, struct io_uring_params *p,
+
On Sun, Jan 20 2019 at 10:21P -0500,
Ming Lei wrote:
> On Sat, Jan 19, 2019 at 01:05:05PM -0500, Mike Snitzer wrote:
> > Use the same BIO_QUEUE_ENTERED pattern that was established by commit
> > cd4a4ae4683dc ("block: don't use blocking queue entered for recursive
> > bio submits") by setting BIO
On 2019-01-21 16:30, Jens Axboe wrote:
On 1/21/19 2:13 AM, Roman Penyaev wrote:
On 2019-01-18 17:12, Jens Axboe wrote:
[...]
+
+static int io_uring_create(unsigned entries, struct io_uring_params *p,
+ bool compat)
+{
+ struct user_struct *user = NULL;
+ s
On 1/21/19 2:13 AM, Roman Penyaev wrote:
> On 2019-01-18 17:12, Jens Axboe wrote:
>
> [...]
>
>> +
>> +static int io_uring_create(unsigned entries, struct io_uring_params *p,
>> + bool compat)
>> +{
>> +struct user_struct *user = NULL;
>> +struct io_ring_ctx *ctx
On 19/01/2019 20:47, Marc Gonzalez wrote:
> On 19/01/2019 10:56, Christoph Hellwig wrote:
>
>> On Jan 18, 2019 at 10:48:15AM -0700, Jens Axboe wrote:
>>
>>> It's UFS that's totally buggy, if you look at its queuecommand, it does:
>>>
>>> if (!down_read_trylock(&hba->clk_scaling_lock))
When something lets __find_get_block_slow() hit the all_mapped path, it calls
printk() 100+ times per second. But there is no need to print the same
message at such a high frequency; it is just asking for a stall warning, or
at least bloating the log files.
[ 399.866302][T15342] __find_get_block_slow()
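A minimal sketch of the sort of fix this is asking for, using the stock printk_ratelimited() helper rather than the exact patch; report_all_mapped() is an invented wrapper name used only for illustration.

#include <linux/printk.h>
#include <linux/types.h>

/* Sketch only: emit the all_mapped diagnostic at most a few times per
 * ratelimit interval instead of on every __find_get_block_slow() miss. */
static void report_all_mapped(sector_t block, unsigned long size)
{
	printk_ratelimited(KERN_ERR
		"__find_get_block_slow() failed: block=%llu, size=%lu\n",
		(unsigned long long)block, size);
}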
Heyas,
I've discovered an issue when upgrading my sister's computer where
kernels 4.19.1 and above (up to 4.20.3, the latest I could test with)
result in intermittent stalling of the machine as each process gets
"stuck" waiting on disk I/O, even though there is no actual disk
activity occurring (no
4.20-stable review patch. If anyone has any objections, please let me know.
--
From: Jaegeuk Kim
commit 5db470e229e22b7eda6e23b5566e532c96fb5bc3 upstream.
If we don't drop the caches used with the old offset or block_size, we can get old data
from the new offset/block_size, which gives unex
4.19-stable review patch. If anyone has any objections, please let me know.
--
From: Jaegeuk Kim
commit 5db470e229e22b7eda6e23b5566e532c96fb5bc3 upstream.
If we don't drop the caches used with the old offset or block_size, we can get old data
from the new offset/block_size, which gives unex
4.14-stable review patch. If anyone has any objections, please let me know.
--
From: Jaegeuk Kim
commit 5db470e229e22b7eda6e23b5566e532c96fb5bc3 upstream.
If we don't drop the caches used with the old offset or block_size, we can get old data
from the new offset/block_size, which gives unex
Add test for changing capacity of a loop device when a filesystem with
non-default block size is mounted on it. This is a regression test for
"blockdev: Fix livelocks on loop device".
Signed-off-by: Jan Kara
---
tests/loop/007 | 39 +++
tests/loop/007.out
V14:
- drop patch (patch 4 in V13) for renaming bvec helpers, as suggested by
Jens
- use mp_bvec_* as multi-page bvec helper name
- fix one build issue, which is caused by a missing conversion of
bio_for_each_segment_all() in fs/gfs2
- fix one 32bit ARCH s
On 2019-01-18 17:12, Jens Axboe wrote:
[...]
+
+static int io_uring_create(unsigned entries, struct io_uring_params *p,
+ bool compat)
+{
+ struct user_struct *user = NULL;
+ struct io_ring_ctx *ctx;
+ int ret;
+
+ if (entries > IORING_MAX_ENTR
On Thu, Jan 10, 2019 at 04:30:51PM +0200, Andy Shevchenko wrote:
> There are new types and helpers that are supposed to be used in new code.
>
> As a preparation to get rid of legacy types and API functions, do
> the conversion here.
This seems to miss a "lightnvm" in the subject line.
> static
On Mon, Jan 21, 2019 at 09:38:10AM +0100, Christoph Hellwig wrote:
> On Mon, Jan 21, 2019 at 04:37:12PM +0800, Ming Lei wrote:
> > On Mon, Jan 21, 2019 at 09:22:46AM +0100, Christoph Hellwig wrote:
> > > On Mon, Jan 21, 2019 at 04:17:47PM +0800, Ming Lei wrote:
> > > > V14:
> > > > - drop p
On Mon, Jan 21, 2019 at 04:37:12PM +0800, Ming Lei wrote:
> On Mon, Jan 21, 2019 at 09:22:46AM +0100, Christoph Hellwig wrote:
> > On Mon, Jan 21, 2019 at 04:17:47PM +0800, Ming Lei wrote:
> > > V14:
> > > - drop patch (patch 4 in V13) for renaming bvec helpers, as suggested by
> > > Jens
> > >
On Mon, Jan 21, 2019 at 09:22:46AM +0100, Christoph Hellwig wrote:
> On Mon, Jan 21, 2019 at 04:17:47PM +0800, Ming Lei wrote:
> > V14:
> > - drop patch (patch 4 in V13) for renaming bvec helpers, as suggested by
> > Jens
> > - use mp_bvec_* as multi-page bvec helper name
>
> WTF? Where i
On Sat, Jan 19, 2019 at 08:47:13PM +0100, Marc Gonzalez wrote:
> On 19/01/2019 10:56, Christoph Hellwig wrote:
>
> > On Jan 18, 2019 at 10:48:15AM -0700, Jens Axboe wrote:
> >
>> It's UFS that's totally buggy, if you look at its queuecommand, it does:
> >>
> >> if (!down_read_trylock(&hba-
On Sat, Jan 19, 2019 at 08:09:18AM -0800, Bart Van Assche wrote:
> Which patch are you referring to?
a3cd5ec55f6 ("scsi: ufs: add load based scaling of UFS gear")
That being said it doesn't seem entirely trivial to revert due to
later additions, so this might require a little more work.
Once multi-page bvecs are enabled, the last bvec may include more than one
page, so this patch uses mp_bvec_last_segment() to truncate the bio.
Reviewed-by: Omar Sandoval
Reviewed-by: Christoph Hellwig
Signed-off-by: Ming Lei
---
fs/buffer.c | 5 -
1 file changed, 4 insertions(+), 1 deletion(-)
Since bdced438acd83ad83a6c ("block: setup bi_phys_segments after splitting"),
the physical segment number is mainly figured out in blk_queue_split() for
the fast path, and the BIO_SEG_VALID flag is set there too.
Now only blk_recount_segments() and blk_recalc_rq_segments() use this
flag.
Basically blk
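For context, the pattern that BIO_SEG_VALID guards looks roughly like this on the caller side; ensure_seg_count() is an invented wrapper used only for illustration.

#include <linux/bio.h>
#include <linux/blkdev.h>

/* Sketch: recount physical segments only if blk_queue_split() has not
 * already done it and marked the bio with BIO_SEG_VALID. */
static void ensure_seg_count(struct request_queue *q, struct bio *bio)
{
	if (!bio_flagged(bio, BIO_SEG_VALID))
		blk_recount_segments(q, bio);
}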
Now that multi-page bvecs are supported, some helpers may return results page by
page, while others return them segment by segment; this patch
documents the usage.
Reviewed-by: Christoph Hellwig
Reviewed-by: Omar Sandoval
Signed-off-by: Ming Lei
---
Documentation/block/biovecs.txt | 25 ++
QUEUE_FLAG_NO_SG_MERGE has been killed, so kill BLK_MQ_F_SG_MERGE too.
Reviewed-by: Christoph Hellwig
Reviewed-by: Omar Sandoval
Signed-off-by: Ming Lei
---
block/blk-mq-debugfs.c | 1 -
drivers/block/loop.c | 2 +-
drivers/block/nbd.c | 2 +-
drivers/block/rbd.c
On Mon, Jan 21, 2019 at 04:17:47PM +0800, Ming Lei wrote:
> V14:
> - drop patch (patch 4 in V13) for renaming bvec helpers, as suggested by
> Jens
> - use mp_bvec_* as multi-page bvec helper name
WTF? Where is this coming from? mp is just a nightmare of a name,
and I also didn't see
This patch introduces one extra iterator variable to bio_for_each_segment_all(),
so that bio_for_each_segment_all() can iterate over multi-page bvecs.
Given it is just one mechanical & simple change on all
bio_for_each_segment_all()
users, this patch does the tree-wide change in one single p
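The converted calling convention described above looks roughly like the sketch below; the on-stack struct bvec_iter_all and the four-argument macro follow this v14 posting, and the final upstream form may differ slightly. flush_bio_pages() is an invented example caller.

#include <linux/bio.h>
#include <linux/highmem.h>

/* Sketch of a converted caller: the extra iter_all argument lets the macro
 * walk a multi-page bvec one page at a time, so bvec still describes a
 * single page inside the loop body. */
static void flush_bio_pages(struct bio *bio)
{
	struct bio_vec *bvec;
	struct bvec_iter_all iter_all;
	int i;

	bio_for_each_segment_all(bvec, bio, i, iter_all)
		flush_dcache_page(bvec->bv_page);
}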
Now multi-page bvec can cover CONFIG_THP_SWAP, so we don't need to
increase BIO_MAX_PAGES for it.
CONFIG_THP_SWAP needs to split one THP into normal pages and add
them all to one bio. With multi-page bvecs, it just takes one bvec to
hold them all.
Reviewed-by: Omar Sandoval
Reviewed-by: Christoph
iov_iter is implemented on bvec iterator helpers, so it is safe to pass a
multi-page bvec to it, and this way is much more efficient than passing one
page in each bvec.
Reviewed-by: Christoph Hellwig
Reviewed-by: Omar Sandoval
Signed-off-by: Ming Lei
---
drivers/block/loop.c | 20 ++---
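As a rough illustration of the point: the whole bvec table can be handed to the iterator in one call, and with multi-page bvecs each entry may now describe several physically contiguous pages. The iov_iter_bvec() signature below is the 4.20-era one; init_iter_from_bvecs() is an invented wrapper.

#include <linux/bvec.h>
#include <linux/uio.h>

/* Sketch: build an iov_iter directly on top of a bio's bvec table. */
static void init_iter_from_bvecs(struct iov_iter *iter,
				 struct bio_vec *bvecs,
				 unsigned int nr_bvecs, size_t len)
{
	iov_iter_bvec(iter, READ, bvecs, nr_bvecs, len);
}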
This patch pulls the trigger for multi-page bvecs.
Reviewed-by: Omar Sandoval
Signed-off-by: Ming Lei
---
block/bio.c | 22 +++---
fs/iomap.c | 4 ++--
fs/xfs/xfs_aops.c | 4 ++--
include/linux/bio.h | 2 +-
4 files changed, 20 insertions(+), 12 deletions(-
bch_bio_alloc_pages() is always called on a new bio, so it is safe
to access the bvec table directly. Given it is the only case of this
kind, open code the bvec table access since bio_for_each_segment_all()
will be changed to support iterating over multi-page bvecs.
Acked-by: Coly Li
Reviewed
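The open-coded form being described is roughly the following; this is a sketch reconstructed from the commit description, not the verbatim bcache patch, and alloc_bio_pages() is an invented name.

#include <linux/bio.h>
#include <linux/gfp.h>

/* Sketch: allocate one page per bvec slot of a brand-new bio, walking
 * bi_io_vec/bi_vcnt directly because nothing has split or advanced it. */
static int alloc_bio_pages(struct bio *bio, gfp_t gfp_mask)
{
	struct bio_vec *bv;
	int i;

	for (i = 0, bv = bio->bi_io_vec; i < bio->bi_vcnt; bv++, i++) {
		bv->bv_page = alloc_page(gfp_mask);
		if (!bv->bv_page) {
			/* unwind the pages allocated so far */
			while (--bv >= bio->bi_io_vec)
				__free_page(bv->bv_page);
			return -ENOMEM;
		}
	}
	return 0;
}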
Preparing for supporting multi-page bvec.
Reviewed-by: Omar Sandoval
Signed-off-by: Ming Lei
---
fs/btrfs/extent_io.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index dc8ba3ee515d..986ef49b0269 100644
--- a/fs/btrfs/exten
It is more efficient to use bio_for_each_mp_bvec() to map sg; meanwhile
we have to consider splitting the multi-page bvec as done in blk_bio_segment_split().
Reviewed-by: Omar Sandoval
Reviewed-by: Christoph Hellwig
Signed-off-by: Ming Lei
---
block/blk-merge.c | 70 +
First, it is more efficient to use bio_for_each_mp_bvec() in both
blk_bio_segment_split() and __blk_recalc_rq_segments() to compute how
many multi-page bvecs there are in the bio.
Secondly, once bio_for_each_mp_bvec() is used, the bvec may need to be
split because its length can be much longer th
BTRFS and guard_bio_eod() need to get the last single-page segment
from one multi-page bvec, so introduce this helper to make them happy.
Reviewed-by: Omar Sandoval
Signed-off-by: Ming Lei
---
include/linux/bvec.h | 22 ++
1 file changed, 22 insertions(+)
diff --git a/include
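A helper with this job can be pictured as the sketch below, reconstructed from the description rather than copied from the patch; last_segment_of_bvec() is an invented name modelled on the mp_bvec_last_segment() mentioned above.

#include <linux/bvec.h>
#include <linux/mm.h>

/* Sketch: return the last single-page segment covered by a (possibly
 * multi-page) bvec. */
static inline void last_segment_of_bvec(const struct bio_vec *bvec,
					struct bio_vec *seg)
{
	unsigned int total = bvec->bv_offset + bvec->bv_len;
	unsigned int last_page = (total - 1) / PAGE_SIZE;

	seg->bv_page = nth_page(bvec->bv_page, last_page);

	if (bvec->bv_offset >= last_page * PAGE_SIZE) {
		/* the whole bvec fits inside its last page */
		seg->bv_offset = bvec->bv_offset % PAGE_SIZE;
		seg->bv_len = bvec->bv_len;
	} else {
		/* only the tail of the bvec is in the last page */
		seg->bv_offset = 0;
		seg->bv_len = total - last_page * PAGE_SIZE;
	}
}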
bio_for_each_mp_bvec() is used for iterating over multi-page bvecs in the bio
split & merge code.
rq_for_each_mp_bvec() can be used for drivers which may handle the
multi-page bvec directly; so far the loop driver is one perfect use case.
Reviewed-by: Christoph Hellwig
Reviewed-by: Omar Sandoval
Signed-off-by
Commit 7759eb23fd980 ("block: remove bio_rewind_iter()") removed
bio_rewind_iter(); since then no one uses bvec_iter_rewind() any more,
so remove it.
Reviewed-by: Omar Sandoval
Reviewed-by: Christoph Hellwig
Signed-off-by: Ming Lei
---
include/linux/bvec.h | 24
1 file chang
This patch introduces the 'mp_bvec_iter_*' helpers for multi-page bvec
support.
The introduced helpers treat one bvec as a real multi-page segment,
which may include more than one page.
The existing bvec_iter_* helpers are interfaces for supporting the current
bvec iterator, which is thought of as singl
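To make the contrast concrete, the relationship between the two views can be paraphrased with the illustrative macros below; the sp_/mp_ names here are invented and the real helpers in the patch are named differently, but the idea is the same: the multi-page view exposes the whole physically contiguous segment, while the single-page view additionally stops at each page boundary.

#include <linux/bvec.h>
#include <linux/kernel.h>
#include <linux/mm.h>

/* Illustrative only: offset of the iterator position within its page. */
#define sp_iter_offset(bvec, iter)					\
	((__bvec_iter_bvec((bvec), (iter))->bv_offset +			\
	  (iter).bi_bvec_done) % PAGE_SIZE)

/* Multi-page view: the whole physically contiguous segment at once. */
#define mp_iter_len(bvec, iter)						\
	min((iter).bi_size,						\
	    __bvec_iter_bvec((bvec), (iter))->bv_len - (iter).bi_bvec_done)

/* Single-page view: additionally stop at the next page boundary. */
#define sp_iter_len(bvec, iter)						\
	min_t(unsigned int, mp_iter_len((bvec), (iter)),		\
	      PAGE_SIZE - sp_iter_offset((bvec), (iter)))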
It is wrong to use bio->bi_vcnt to figure out how many segments
there are in the bio even though the CLONED flag isn't set on this bio,
because this bio may have been split or advanced.
So always use bio_segments() in blk_recount_segments(), and it shouldn't
cause any performance loss now because the phys
Hi,
This patchset brings multi-page bvecs into the block layer:
1) what is a multi-page bvec?
Multi-page bvecs mean that one 'struct bio_vec' can hold multiple pages
which are physically contiguous, instead of the single page the Linux
kernel has used for a long time.
2) why is the multi-page bvec introduced?
K
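The key point is that the bio_vec layout itself does not change, only the interpretation of bv_len does; the structure (as in include/linux/bvec.h, comments paraphrased) is simply:

struct bio_vec {
	struct page	*bv_page;	/* first page of the segment */
	unsigned int	bv_len;		/* with multi-page bvecs, may now span
					 * several physically contiguous pages */
	unsigned int	bv_offset;	/* offset into bv_page */
};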
From: Christoph Hellwig
bio_readpage_error currently uses bi_vcnt to decide if it is worth
retrying an I/O. But the vector count is mostly an implementation
artifact - it really should figure out if there is more than a
single sector worth retrying. Use bi_size for that and shift by
PAGE_SHIFT.
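In other words, the check boils down to something like the sketch below; worth_sector_retry() is an invented name, not the btrfs function.

#include <linux/bio.h>
#include <linux/mm.h>

/* Sketch: is this failed read large enough to be worth retrying in
 * smaller pieces? Based on I/O size rather than the bvec count. */
static bool worth_sector_retry(const struct bio *failed_bio)
{
	return (failed_bio->bi_iter.bi_size >> PAGE_SHIFT) > 1;
}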
On Sat, Jan 19, 2019 at 01:05:03PM -0500, Mike Snitzer wrote:
> DM's clone_bio() now benefits from using bio_trim() by fixing the fact
> that clone_bio() wasn't clearing BIO_SEG_VALID like bio_trim() does,
> which triggers blk_recount_segments() via bio_phys_segments().
>
> Signed-off-by: Mike Sni
On Sat, Jan 19, 2019 at 01:05:05PM -0500, Mike Snitzer wrote:
> Use the same BIO_QUEUE_ENTERED pattern that was established by commit
> cd4a4ae4683dc ("block: don't use blocking queue entered for recursive
> bio submits") by setting BIO_QUEUE_ENTERED after bio_split() and before
> recursing via gen
On Sat, 19 Jan 2019, Scott Bauer wrote:
On Thu, Jan 17, 2019 at 09:31:55PM +, David Kozub wrote:
-	for (state = 0; !error && state < n_steps; state++) {
-		step = &steps[state];
-
-		error = step->fn(dev, step->data);
-		if (error) {
-
On Sat, Jan 19 2019, Mike Snitzer wrote:
> Use the same BIO_QUEUE_ENTERED pattern that was established by commit
> cd4a4ae4683dc ("block: don't use blocking queue entered for recursive
> bio submits") by setting BIO_QUEUE_ENTERED after bio_split() and before
> recursing via generic_make_request().
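For reference, the pattern established by cd4a4ae4683dc looks roughly like the sketch below; this is a paraphrase of the blk_queue_split() idea rather than the DM patch itself, and split_and_requeue() is an invented wrapper.

#include <linux/bio.h>
#include <linux/blkdev.h>

/* Sketch (illustrative helper, not the DM patch): split a bio and resubmit
 * the remainder without blocking in blk_queue_enter() again. */
static struct bio *split_and_requeue(struct request_queue *q,
				     struct bio *bio, int max_sectors)
{
	struct bio *split;

	split = bio_split(bio, max_sectors, GFP_NOIO, &q->bio_split);
	if (!split)
		return bio;

	bio_chain(split, bio);

	/*
	 * We already hold a queue reference, so mark the remainder as having
	 * entered the queue before recursing into generic_make_request();
	 * otherwise it could block forever in blk_queue_enter() while the
	 * queue is going away.
	 */
	bio_set_flag(bio, BIO_QUEUE_ENTERED);
	generic_make_request(bio);

	return split;
}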