On Wed, Jun 13, 2018 at 10:54:09AM -0400, Kent Overstreet wrote:
> On Wed, Jun 13, 2018 at 03:56:32PM +0200, Christoph Hellwig wrote:
> > On Wed, Jun 13, 2018 at 07:06:41PM +0800, Ming Lei wrote:
> > > > before bio_alloc_pages) that can be switched to something that just
> > > > creates a
> > > >
Setting up zoned disks in a generic way is not trivial. There
is also quite a bit of tribal knowledge around these devices that is not
easy to find.
The currently supplied demo script works, but it is not generic enough to be
practical for Linux distributions or even developers, who often mov
On 6/13/2018 10:41 AM, Keith Busch wrote:
> Thanks for the feedback!
>
> This test does indeed toggle the Link Control Link Disable bit to simulate
> the link failure. The PCIe specification specifically covers this case
> in Section 3.2.1, Data Link Control and Management State Machine Rules:
>
>
On 12.06.2018 10:09, Matias Bjørling wrote:
On 06/12/2018 04:59 PM, Javier Gonzalez wrote:
On 11 Jun 2018, at 22.53, Heiner Litz wrote:
In the read path, partial reads are currently performed synchronously
which affects performance for workloads that generate many partial
reads. This patch
On Wed, Jun 13, 2018 at 03:36:42PM +, Bart Van Assche wrote:
> Hello stable kernel maintainers,
>
> Please backport patch 327ea4adcfa3 ("blkdev_report_zones_ioctl():
> Use vmalloc() to allocate large buffers") to at least the v4.17.x and
> v4.14.y stable kernel series. That patch fixes a bug i
On Tue, Jun 12, 2018 at 04:41:54PM -0700, austin.bo...@dell.com wrote:
> It looks like the test is setting the Link Disable bit. But this is not
> a good simulation for hot-plug surprise removal testing or surprise link
> down (SLD) testing, if that is the intent. One reason is that Link
> Disabl
Hello stable kernel maintainers,
Please backport patch 327ea4adcfa3 ("blkdev_report_zones_ioctl():
Use vmalloc() to allocate large buffers") to at least the v4.17.x and
v4.14.y stable kernel series. That patch fixes a bug introduced in
kernel v4.10. The entire patch is shown below.
Thanks,
Bart.
On 6/13/18 9:20 AM, Bart Van Assche wrote:
> On 05/22/18 10:58, Jens Axboe wrote:
>> On 5/22/18 9:27 AM, Bart Van Assche wrote:
>>> Avoid that complaints similar to the following appear in the kernel log
>>> if the number of zones is sufficiently large:
>>>
>>>fio: page allocation failure: orde
On 05/22/18 10:58, Jens Axboe wrote:
On 5/22/18 9:27 AM, Bart Van Assche wrote:
Avoid that complaints similar to the following appear in the kernel log
if the number of zones is sufficiently large:
fio: page allocation failure: order:9,
mode:0x140c0c0(GFP_KERNEL|__GFP_COMP|__GFP_ZERO), node
On Wed, Jun 13, 2018 at 03:56:32PM +0200, Christoph Hellwig wrote:
> On Wed, Jun 13, 2018 at 07:06:41PM +0800, Ming Lei wrote:
> > > before bio_alloc_pages) that can be switched to something that just
> > > creates a
> > > single bvec.
> >
> > Yes, multipage bvec shouldn't break any driver or fs.
On Wed, Jun 13, 2018 at 03:59:15PM +0200, Christoph Hellwig wrote:
> On Wed, Jun 13, 2018 at 04:54:41AM -0400, Kent Overstreet wrote:
> > bi_size is not immutable though, it will usually be modified by drivers
> > when you
> > submit a bio.
> >
> > I see what you're trying to do, but your approac
This abstracts out a way to reuse a bio without destroying the
bio vectors containing the data.
Signed-off-by: Christoph Hellwig
---
block/bio.c | 19 +++
include/linux/bio.h | 1 +
2 files changed, 20 insertions(+)
diff --git a/block/bio.c b/block/bio.c
index 70c4e1b6d
Instead of reinitializing the bio every time, we can call bio_reuse when
reusing it. This also moves the private data initialization out of
dirty_init, which is renamed to suit its remaining functionality.
Signed-off-by: Christoph Hellwig
---
drivers/md/bcache/writeback.c | 26 +---
We immediately overwrite the biovec array, so instead just allocate
a new bio and copy over the disk, sector and size.
Signed-off-by: Christoph Hellwig
---
drivers/md/bcache/debug.c | 6 +-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/drivers/md/bcache/debug.c b/drivers/md/bca
Let bch_bio_alloc_pages and bch_bio_map set up the bio vec information
and bi_size. This also means no additional bch_bio_map call with
a NULL argument is needed before bch_bio_alloc_pages.
Signed-off-by: Christoph Hellwig
---
drivers/md/bcache/btree.c | 16 +++-
drivers/md/bcache/debug
Use the bio_reuse helper instead of rebuilding the bio_vecs and
size for bios that get reused for the same data.
Signed-off-by: Christoph Hellwig
---
drivers/md/bcache/request.c | 5 +
drivers/md/bcache/super.c | 6 ++
2 files changed, 3 insertions(+), 8 deletions(-)
diff --git a/driv
Hi all,
this series cleans up various places where bcache is way too intimate
with bio internals. This is intended as a baseline for the multi-page
biovec work, which requires some nasty workarounds for the existing
code.
Note that I do not have a bcache test setup, so this will require
some car
Instead of reinitializing the bio every time, we can call bio_reuse when
reusing it. This also removes the remainder of the moving_init helper
to improve readability.
Signed-off-by: Christoph Hellwig
---
drivers/md/bcache/movinggc.c | 40 +---
1 file changed, 19 inserti
On Wed, Jun 13, 2018 at 04:54:41AM -0400, Kent Overstreet wrote:
> bi_size is not immutable though, it will usually be modified by drivers when
> you
> submit a bio.
>
> I see what you're trying to do, but your approach is busted given the way the
> block layer works today. You'd have to save bio
On Wed, Jun 13, 2018 at 07:06:41PM +0800, Ming Lei wrote:
> > before bio_alloc_pages) that can be switched to something that just creates
> > a
> > single bvec.
>
> Yes, multipage bvec shouldn't break any driver or fs.
It probably isn't broken, at least I didn't see assumptions of the same
numbe
On Wed, Jun 13, 2018 at 05:58:01AM -0400, Kent Overstreet wrote:
> On Mon, Jun 11, 2018 at 09:48:00PM +0200, Christoph Hellwig wrote:
> > Hi all,
> >
> > this series cleans up various places where bcache is way too intimate
> > with bio internals. This is intended as a baseline for the multi-page
On Mon, Jun 11, 2018 at 09:48:00PM +0200, Christoph Hellwig wrote:
> Hi all,
>
> this series cleans up various places where bcache is way too intimate
> with bio internals. This is intended as a baseline for the multi-page
> biovec work, which requires some nasty workarounds for the existing
> co
On Wed, Jun 13, 2018 at 09:32:04AM +0200, Christoph Hellwig wrote:
> On Tue, Jun 12, 2018 at 02:16:30AM -0400, Kent Overstreet wrote:
> > On Mon, Jun 11, 2018 at 09:48:01PM +0200, Christoph Hellwig wrote:
> > > This abstracts out a way to reuse a bio without destroying the
> > > data pointers.
> >
On Tue, Jun 12, 2018 at 02:16:30AM -0400, Kent Overstreet wrote:
> On Mon, Jun 11, 2018 at 09:48:01PM +0200, Christoph Hellwig wrote:
> > This abstracts out a way to reuse a bio without destroying the
> > data pointers.
>
> What is the point of this? What "data pointers" does it not destroy?
It k