On Mon, Jun 05, 2023 at 07:23:34AM -0600, Jonathan Corbet wrote:
> Bagas Sanjaya writes:
>
> > On Sun, Jun 04, 2023 at 12:06:03PM -0700, Russell Harmon wrote:
> >> Specifically:
> >> interleave_sectors = 32768
> >> buffer_sectors = 128
> >> block_size = 512
> >> journal_watermark = 50
> >
On Sun, Jun 04, 2023 at 10:08:51PM -0700, Russell Harmon wrote:
> +Accesses to the on-disk metadata area containing checksums (aka tags) are
> +buffered using dm-bufio. When an access to any given metadata area
> +occurs, each unique metadata area gets its own buffer(s). The buffer size
> +is cappe
On 6/3/23 22:22, Chang S. Bae wrote:
> +==============
> +x86 Key Locker
> +==============
> +
> +Introduction
> +============
> +
> +Key Locker is a CPU feature to reduce key exfiltration opportunities
> +while maintaining a programming interface similar to AES-NI. It
> +converts the AES key into
On Sun, Jun 04, 2023 at 10:08:50PM -0700, Russell Harmon wrote:
> -There's an alternate mode of operation where dm-integrity uses bitmap
> +There's an alternate mode of operation where dm-integrity uses a bitmap
LGTM, thanks!
Reviewed-by: Bagas Sanjaya
On 1/13/22 04:12, Chang S. Bae wrote:
> +==============
> +x86 Key Locker
> +==============
> +
> +Introduction
> +============
> +
> +Key Locker is a CPU feature to reduce key exfiltration
> +opportunities while maintaining a programming interface similar to AES-NI.
> +It converts the AES
On Mon, Jun 05, 2023 at 02:08:07PM -0500, Benjamin Marzinski wrote:
> On Wed, May 31, 2023 at 04:27:30PM +, Martin Wilck wrote:
> > On Wed, 2023-05-24 at 18:21 -0500, Benjamin Marzinski wrote:
> > > need_switch_pathgroup() only checks if the currently used pathgroup
> > > is
> > > not the highe
On 6/5/2023 7:17 PM, Randy Dunlap wrote:
On 6/3/23 08:22, Chang S. Bae wrote:
+
+* AES-KL implements support for 128-bit and 256-bit keys, but there is no
+ AES-KL instruction to process a 192-bit key. The AES-KL cipher
+ implementation logs a warning message with a 192-bit key and then falls
On 6/3/23 08:22, Chang S. Bae wrote:
> Document the overview of the feature along with relevant consideration
> when provisioning dm-crypt volumes with AES-KL instead of AES-NI.
>
> ---
> ---
> Documentation/arch/x86/index.rst | 1 +
> Documentation/arch/x86/keylocker.rst | 97 +++
On Sat, Jun 3, 2023 at 8:57 AM Mike Snitzer wrote:
>
> On Fri, Jun 02 2023 at 8:52P -0400,
> Dave Chinner wrote:
>
> > On Fri, Jun 02, 2023 at 11:44:27AM -0700, Sarthak Kukreti wrote:
> > > On Tue, May 30, 2023 at 8:28 AM Mike Snitzer wrote:
> > > >
> > > > On Tue, May 30 2023 at 10:55P -0400,
On Wed, May 31, 2023 at 04:27:30PM +, Martin Wilck wrote:
> On Wed, 2023-05-24 at 18:21 -0500, Benjamin Marzinski wrote:
> > need_switch_pathgroup() only checks if the currently used pathgroup
> > is
> > not the highest priority pathgroup. If it isn't, all multipathd does
> > is
> > instruct th
On Wed, May 31, 2023 at 04:27:25PM +, Martin Wilck wrote:
> On Wed, 2023-05-24 at 18:21 -0500, Benjamin Marzinski wrote:
> > For multipath devices with path group policies other than
> > group_by_prio,
> > multipathd wasn't updating all the paths' priorities when calling
> > need_switch_pathgro
On Wed, 31 May 2023 14:55:12 +0200, Christoph Hellwig wrote:
> bool is the most sensible return value for a yes/no return. Also
> add __init as this function is only called from the early boot code.
>
>
Applied, thanks!
[01/24] driver core: return bool from driver_probe_done
commit: a
> break;
> case REQ_OP_READ:
> - ret = nvme_setup_rw(ns, req, cmd, nvme_cmd_read);
> + if (unlikely(req->cmd_flags & REQ_COPY))
> + nvme_setup_copy_read(ns, req);
> + else
> + ret = nvme_setup_rw(ns, req
On Fri, Jun 02, 2023 at 06:16:08PM +0200, Martin Wilck wrote:
> On Thu, 2023-06-01 at 13:17 -0500, Benjamin Marzinski wrote:
> > On Wed, May 31, 2023 at 03:44:58PM +, Martin Wilck wrote:
> > > On Fri, 2023-05-19 at 18:02 -0500, Benjamin Marzinski wrote:
> > > > This allows configurations to use
Bagas Sanjaya writes:
> On Sun, Jun 04, 2023 at 12:06:03PM -0700, Russell Harmon wrote:
>> Specifically:
>> interleave_sectors = 32768
>> buffer_sectors = 128
>> block_size = 512
>> journal_watermark = 50
>> commit_time = 1
>
> Your patch description duplicates the diff content belo
Implementation is based on the existing read and write infrastructure.
copy_max_bytes: A new configfs and module parameter is introduced, which
can be used to set the hardware/driver-supported maximum copy limit.
Suggested-by: Damien Le Moal
Signed-off-by: Anuj Gupta
Signed-off-by: Nitesh Shetty
Signed-
Add support for handling nvme_cmd_copy command on target.
For bdev-ns we call into blkdev_issue_copy, which the block layer
completes by an offloaded copy request to the backend bdev or by
emulating the request.
For file-ns we call vfs_copy_file_range to service our request.
Currently target always sh
Set the copy_offload_supported flag to enable offload.
Signed-off-by: Nitesh Shetty
---
drivers/md/dm-linear.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/md/dm-linear.c b/drivers/md/dm-linear.c
index f4448d520ee9..1d1ee30bbefb 100644
--- a/drivers/md/dm-linear.c
+++ b/drivers/md
Before enabling copy for a dm target, check if the underlying devices
and the dm target support copy. Avoid a split happening inside the dm
target. Fail early if the request needs a split; splitting a copy
request is currently not supported.
Signed-off-by: Nitesh Shetty
---
drivers/md/dm-table.c | 41 ++
For direct block devices opened with O_DIRECT, use copy_file_range to
issue device copy offload, and fall back to generic_copy_file_range in
case the device copy offload capability is absent.
Modify checks to allow bdevs to use copy_file_range.
Suggested-by: Ming Lei
Signed-off-by: Anuj Gupta
Signed-of
For devices supporting native copy, the nvme driver receives read and
write requests with the BLK_COPY op flag.
For a read request the nvme driver populates the payload with source
information.
For a write request the driver converts it to an nvme copy command using
the source information in the payload and submit
For devices which do not support copy, copy emulation is added.
It is required for in-kernel users like fabrics, where a file descriptor
is not available and hence they cannot use copy_file_range.
Copy-emulation is implemented by reading from source into memory and
writing to the corresponding d
Add device limits as sysfs entries:
- copy_offload (RW)
- copy_max_bytes (RW)
- copy_max_bytes_hw (RO)
The above limits help split the copy payload in the block layer.
copy_offload: used for setting copy offload (1) or emulation (0).
copy_max_bytes: maximum total length of copy in
Introduce blkdev_issue_copy which takes similar arguments as
copy_file_range and performs copy offload between two bdevs.
Introduce the REQ_COPY copy offload operation flag. Create a read-write
bio pair with a token as payload and submit them to the device in order.
Read request populates token with sour
The patch series covers the points discussed in the past and most
recently at LSFMM'23[0].
We have covered the initially agreed requirements in this patchset and
further additional features suggested by the community.
The patchset borrows Mikulas's token-based approach for the 2-bdev
implementation.
This is next it
On Sat, 2023-06-03 at 13:12 +0200, Xose Vazquez Perez wrote:
> On 5/31/23 17:49, Martin Wilck wrote:
>
> > On Wed, 2023-05-31 at 15:57 +0200, Xose Vazquez Perez wrote:
> > > ALUA is needed by Hitachi Global-Active Device (GAD):
> > > https://knowledge.hitachivantara.com/Documents/Management_Softwa
On Wed, 2023-05-31 at 15:57 +0200, Xose Vazquez Perez wrote:
> ALUA is needed by Hitachi Global-Active Device (GAD):
> https://knowledge.hitachivantara.com/Documents/Management_Software/SVOS/8.1/Global-Active_Device/Overview_of_global-active_device
>
> Cc: Matthias Rudolph
> Cc: Martin Wilck
> C
On 6/5/23 04:31, Coiby Xu wrote:
Hi Eric and Milan,
On Sat, Jun 03, 2023 at 11:22:52AM +0200, Milan Broz wrote:
On 6/2/23 23:34, Eric Biggers wrote:
On Thu, Jun 01, 2023 at 03:24:39PM +0800, Coiby Xu wrote:
[PATCH 0/5] Support kdump with LUKS encryption by reusing LUKS volume key
The kernel
As described in commit 8111964f1b85 ("dm thin: Fix ABBA deadlock between
shrink_slab and dm_pool_abort_metadata"), an ABBA deadlock will be
triggered since shrinker_rwsem needs to be held when operations fail on
dm pool metadata.
We have noticed the following three problem scenarios:
1) Described by