Currently pblk assumes that the size of the OOB metadata on the drive is
always equal to the size of the pblk_sec_meta struct. This commit adds
helpers which allow handling different sizes of OOB metadata on the drive.
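A minimal user-space sketch of what such accessor helpers could look like (the names, the single-field struct, and the parameters are illustrative assumptions, not the actual pblk API): the per-sector entry size comes from the drive geometry instead of being hard-coded to sizeof(struct pblk_sec_meta).

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Assumed stand-in for pblk_sec_meta; the real layout lives in pblk.h. */
struct sec_meta {
	uint64_t lba;
};

/* Return a pointer to the i-th per-sector OOB entry in meta_buf,
 * where the entry size is taken from the drive geometry. */
static inline void *get_meta(void *meta_buf, size_t sec_meta_size, int i)
{
	return (char *)meta_buf + (size_t)i * sec_meta_size;
}

static inline uint64_t get_meta_lba(void *meta_buf, size_t sec_meta_size, int i)
{
	return ((struct sec_meta *)get_meta(meta_buf, sec_meta_size, i))->lba;
}
```

The same buffer then works unchanged whether the drive exposes 16 B or 128 B of OOB meta per sector; only the stride passed to the helper differs.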
Signed-off-by: Igor Konopko
---
drivers/lightnvm/pblk-core.c | 10 +
drivers/lightnv
Currently the whole of lightnvm and pblk uses a single DMA pool,
for which the entry size is always equal to PAGE_SIZE.
The PPA list always needs 8b * 64, so there is only 56b * 64
of space left for OOB meta. Since NVMe OOB meta can be bigger,
such as 128b, this solution is not robust.
This patch adds the possibility to sup
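The size constraint above can be checked directly. This is a sketch of the arithmetic only, not kernel code; the constants are the usual values (4 KiB pages, 64 PPAs per command, 8 B per PPA entry).

```c
#include <assert.h>

/* With a single PAGE_SIZE DMA pool entry shared by the PPA list
 * and the OOB metadata, only a fixed remainder is left per sector. */
enum {
	POOL_ENTRY_SIZE = 4096,          /* PAGE_SIZE on most systems */
	NR_PPAS         = 64,            /* max PPAs per command */
	PPA_LIST_BYTES  = 8 * NR_PPAS,   /* 8 B per PPA = 512 B */
};

static int max_oob_meta_per_sector(void)
{
	return (POOL_ENTRY_SIZE - PPA_LIST_BYTES) / NR_PPAS;
}
```

(4096 - 512) / 64 = 56, so a drive reporting 128 B of OOB meta per sector cannot be served from that shared pool entry.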
In the current pblk implementation, the l2p mapping for not-yet-closed
lines is always stored only in OOB metadata and recovered from it.
Such a solution does not provide data integrity when drives do
not have such OOB metadata space.
The goal of this patch is to add support for so-called packed
metadata
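A rough user-space sketch of the packing idea (not the pblk implementation; names, the sector size, and the layout are assumptions): when the drive has no OOB area, the per-sector LBA list is stored "packed" in an extra data sector appended to the write, instead of in OOB metadata.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define SEC_SIZE 4096  /* assumed data sector size */

/* Serialize the per-sector LBA list into a dedicated metadata sector. */
static void pack_lba_list(uint8_t *extra_sec, const uint64_t *lbas, int nr_secs)
{
	memcpy(extra_sec, lbas, (size_t)nr_secs * sizeof(uint64_t));
}

/* Recover the i-th LBA from the packed sector during line recovery. */
static uint64_t packed_lba(const uint8_t *extra_sec, int i)
{
	uint64_t lba;

	memcpy(&lba, extra_sec + (size_t)i * sizeof(uint64_t), sizeof(lba));
	return lba;
}
```

The trade-off is one extra sector of payload per write unit in exchange for working on drives with no OOB space at all.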
Currently pblk and lightnvm only check the size
of the OOB metadata and do not care whether this meta
is located in a separate buffer or is interleaved with
the data in a single buffer.
In reality only the first scenario is supported; the
second mode will break pblk functionality during any
IO opera
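A sketch of the kind of check described above (the flag name and return values are illustrative, not the lightnvm API): accept OOB meta only when it is transferred in a separate buffer, and reject the interleaved mode, which in NVMe terms corresponds to an extended data LBA format.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Reject geometries where metadata is interleaved with the data. */
static int check_meta_mode(bool meta_interleaved, size_t meta_size)
{
	if (meta_size == 0)
		return 0;   /* no OOB meta at all: handled elsewhere */
	if (meta_interleaved)
		return -1;  /* extended/interleaved LBA format: unsupported */
	return 0;           /* separate metadata buffer: supported */
}
```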
This series of patches introduces some more flexibility in pblk
related to OOB meta:
-ability to use different sizes of metadata (previously fixed at 16b)
-ability to use pblk on drives without metadata
-ensuring that extended (interleaved) metadata is not in use
I believe that most of these patches, ma
Since we have a flexible size of pblk_sec_meta,
which depends on the drive metadata size, we can
remove the no-longer-needed reserved field from that
structure.
Signed-off-by: Igor Konopko
---
drivers/lightnvm/pblk.h | 1 -
1 file changed, 1 deletion(-)
diff --git a/drivers/lightnvm/pblk.h b/drivers/lightnvm/pblk
Remove this function since it has no callers. This function was
introduced in commit 6cc77e9cb080 ("block: introduce zoned block
devices zone write locking").
Signed-off-by: Bart Van Assche
Reviewed-by: Damien Le Moal
Cc: Christoph Hellwig
Cc: Matias Bjorling
---
include/linux/blkdev.h | 9 --
Since the implementation of blk_queue_nr_zones() is trivial and since
it only has a single caller, inline this function.
Signed-off-by: Bart Van Assche
Reviewed-by: Damien Le Moal
Cc: Matias Bjorling
Cc: Christoph Hellwig
---
block/blk-mq-debugfs.c | 2 +-
include/linux/blkdev.h | 5 -
2
Using the __packed directive for a structure that does not need
it is wrong because it makes gcc generate suboptimal code on some
architectures. Hence remove the __packed directive from the
blk_zone_report structure definition. See also
http://digitalvampire.org/blog/index.php/2006/07/31/why-you-sh
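A small illustration of the point (field names and sizes are illustrative, not the real struct blk_zone_report): when every member is already naturally aligned, __packed removes no padding, so the layout is identical; it only lowers the struct's alignment to 1, which forces byte-wise access on strict-alignment architectures.

```c
#include <assert.h>
#include <stdint.h>

/* Naturally aligned layout: 8 + 4 + 52 = 64 bytes, no padding. */
struct report {
	uint64_t sector;
	uint32_t nr_zones;
	uint8_t  reserved[52];
};

/* Same members with the packed attribute: same size, alignment 1. */
struct report_packed {
	uint64_t sector;
	uint32_t nr_zones;
	uint8_t  reserved[52];
} __attribute__((packed));
```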
Exclude zoned block device members from struct request_queue for
CONFIG_BLK_DEV_ZONED == n. Avoid breaking the build by only building
the code that uses these struct request_queue members if
CONFIG_BLK_DEV_ZONED != n.
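A compile-time sketch of the pattern described above (member names are illustrative stand-ins for the real request_queue fields): the zoned members exist only when the config option is set, and every accessor that touches them is guarded the same way so non-zoned builds still compile.

```c
#include <assert.h>

#define CONFIG_BLK_DEV_ZONED 1  /* comment out to mimic a =n build */

struct request_queue {
	unsigned int queue_flags;
#ifdef CONFIG_BLK_DEV_ZONED
	unsigned int nr_zones;      /* compiled out when zoned support is off */
#endif
};

static unsigned int queue_nr_zones(const struct request_queue *q)
{
#ifdef CONFIG_BLK_DEV_ZONED
	return q->nr_zones;
#else
	return 0;                   /* non-zoned builds report zero zones */
#endif
}
```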
Signed-off-by: Bart Van Assche
Reviewed-by: Damien Le Moal
Cc: Matias Bjorling
No cast is necessary when assigning a non-void pointer to a void
pointer.
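A one-line illustration of the C rule behind the cleanup (the struct here is a minimal stand-in): any object pointer converts to void * implicitly, so spelling out the cast is redundant.

```c
#include <assert.h>

struct blk_zone { unsigned long start; };

static void *zone_data(struct blk_zone *zone)
{
	void *p = zone;  /* no cast necessary; "(void *)zone" is redundant */
	return p;
}
```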
Signed-off-by: Bart Van Assche
Reviewed-by: Damien Le Moal
Cc: Matias Bjorling
Cc: Christoph Hellwig
---
block/blk-zoned.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/block/blk-zoned.c b/block/bl
Hello Jens,
In this patch series there are five patches with small improvements for the
zoned block device code. Please consider these patches for the upstream kernel.
Thanks,
Bart.
Bart Van Assche (5):
block: Remove a superfluous cast from blkdev_report_zones()
include/uapi/linux/blkzoned.
If NBD_DISCONNECT_ON_CLOSE is set on a device, then the driver will
issue a disconnect from nbd_release if the device has no remaining
bdev->bd_openers.
Fix the return value so that a reconfigure that only sets the flag succeeds.
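A sketch of the release-path behaviour described above (the struct members and the flag's bit value are stand-ins, not the real nbd driver structures): when the flag is set and the last opener is gone, the device disconnects itself.

```c
#include <assert.h>
#include <stdbool.h>

#define NBD_DISCONNECT_ON_CLOSE (1UL << 0)  /* illustrative bit value */

struct nbd_dev {
	unsigned long flags;
	int bd_openers;     /* remaining open handles on the bdev */
	bool disconnected;
};

/* Issue a disconnect on last close when the flag is set. */
static void nbd_release(struct nbd_dev *nbd)
{
	if ((nbd->flags & NBD_DISCONNECT_ON_CLOSE) && nbd->bd_openers == 0)
		nbd->disconnected = true;
}
```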
Reviewed-by: Josef Bacik
Signed-off-by: Doron Roberts-Kedes
---
drivers/block
Hi Igor,
thanks for testing. You are correct about the goto fail_pages.
I will fix it, rebase on top of 4.19, and resend the patch.
Heiner
On Wed, Jun 13, 2018 at 10:49 AM Igor Konopko wrote:
>
>
>
> On 12.06.2018 10:09, Matias Bjørling wrote:
> > On 06/12/2018 04:59 PM, Javier Gonzalez wrote:
> >>> On
On Fri, 2018-06-15 at 18:55 +0200, Hannes Reinecke wrote:
> On 06/15/2018 04:07 PM, Bart Van Assche wrote:
> > On Thu, 2018-06-14 at 15:38 +0200, Hannes Reinecke wrote:
> > > For performance reasons we should be able to allocate all memory
> > > from a given NUMA node, so this patch adds a new para
On 06/15/2018 04:07 PM, Bart Van Assche wrote:
On Thu, 2018-06-14 at 15:38 +0200, Hannes Reinecke wrote:
For performance reasons we should be able to allocate all memory
from a given NUMA node, so this patch adds a new parameter
'rd_numa_node' to allow the user to specify the NUMA node id.
When
On Fri, Jun 15 2018 at 5:59am -0400,
Damien Le Moal wrote:
> Mike,
>
> On 6/15/18 02:58, Mike Snitzer wrote:
> > On Thu, Jun 14 2018 at 1:37pm -0400,
> > Luis R. Rodriguez wrote:
> >
> >> On Thu, Jun 14, 2018 at 08:38:06AM -0400, Mike Snitzer wrote:
> >>> On Wed, Jun 13 2018 at 8:11pm -0400
On 6/15/18 3:23 AM, Mel Gorman wrote:
> On Thu, Jun 14, 2018 at 02:47:39PM -0600, Jens Axboe wrote:
> > >>> Will numactl ... modprobe brd ... solve this problem?
> > >> It won't, pages are allocated as needed.
> > > Then how about a numactl ... dd /dev/ram ... after the modprobe.
> > Yes
On 6/15/18 7:13 AM, Christoph Hellwig wrote:
> Fix various little regressions introduced in this merge window, plus
> a rework of the fibre channel connect and reconnect path to share the
> code instead of having separate sets of bugs. Last but not least a
> trivial trace point addition from Hanne
On 6/15/18 5:55 AM, Christoph Hellwig wrote:
> This function is entirely unused, so remove it and the tag_queue_busy
> member of struct request_queue.
Applied, thanks.
--
Jens Axboe
On 6/15/18 1:30 AM, Christoph Hellwig wrote:
> On Thu, Jun 14, 2018 at 09:33:35AM -0600, Jens Axboe wrote:
>> Next question - what does the memory allocator do if we run out of
>> memory on the given node? Should we punt to a different node if that
>> happens? Slower, but functional, seems preferab
On Thu, 2018-06-14 at 15:38 +0200, Hannes Reinecke wrote:
> For performance reasons we should be able to allocate all memory
> from a given NUMA node, so this patch adds a new parameter
> 'rd_numa_node' to allow the user to specify the NUMA node id.
> When restricing fio to use the same NUMA node I
Fix various little regressions introduced in this merge window, plus
a rework of the fibre channel connect and reconnect path to share the
code instead of having separate sets of bugs. Last but not least a
trivial trace point addition from Hannes.
The following changes since commit 190b02ed79e08
This function is entirely unused, so remove it and the tag_queue_busy
member of struct request_queue.
Signed-off-by: Christoph Hellwig
---
Documentation/block/biodoc.txt | 15 +--
block/blk-tag.c                | 22 --
include/linux/blkdev.h         |  2 --
3 fi
On Thu, 2018-06-14 at 06:42 -0700, Christoph Hellwig wrote:
> On Thu, Jun 14, 2018 at 01:39:50PM +, Bart Van Assche wrote:
> > On Thu, 2018-06-14 at 10:01 +, Damien Le Moal wrote:
> > > Applied. Thanks Luis !
> >
> > Hello Damien,
> >
> > Can this still be undone? I agree with Mike that i
Mike,
On 6/15/18 02:58, Mike Snitzer wrote:
> On Thu, Jun 14 2018 at 1:37pm -0400,
> Luis R. Rodriguez wrote:
>
>> On Thu, Jun 14, 2018 at 08:38:06AM -0400, Mike Snitzer wrote:
>>> On Wed, Jun 13 2018 at 8:11pm -0400,
>>> Luis R. Rodriguez wrote:
>>>
Setting up a zoned disk in a generic
On Thu, Jun 14, 2018 at 02:47:39PM -0600, Jens Axboe wrote:
> >>> Will numactl ... modprobe brd ... solve this problem?
> >>
> >> It won't, pages are allocated as needed.
> >>
> >
> > Then how about a numactl ... dd /dev/ram ... after the modprobe.
>
> Yes of course, or you could do that for ever
Mike,
On 6/14/18 21:38, Mike Snitzer wrote:
> On Wed, Jun 13 2018 at 8:11pm -0400,
> Luis R. Rodriguez wrote:
>
>> Setting up a zoned disk in a generic form is not so trivial. There
>> is also quite a bit of tribal knowledge with these devices which is not
>> easy to find.
>>
>> The currently
On Thu, Jun 14, 2018 at 09:33:35AM -0600, Jens Axboe wrote:
> Next question - what does the memory allocator do if we run out of
> memory on the given node? Should we punt to a different node if that
> happens? Slower, but functional, seems preferable to not being able
> to get memory.
When using