On 2018-12-10 at 16:10:31 -0800, Dan Williams wrote:
> On Tue, Aug 21, 2018 at 12:38 AM Yi Zhang wrote:
> >
> > On 2018-08-20 at 12:50:31 -0700, Dave Jiang wrote:
> > >
> > >
> > > On 08/20/2018 10:53 AM, Verma, Vishal L wrote:
> > > >
> > > > On Mon, 2018-08-13 at 20:02 +0800, Zhang Yi wrote:
From: Dan Williams
[ Upstream commit b5fd2e00a60248902315fb32210550ac3cb9f44c ]
A "short" ARS (address range scrub) instructs the platform firmware to
return known errors. In contrast, a "long" ARS instructs platform
firmware to arrange for every data address on the DIMM to be read / checked
for
Force the device registration for nvdimm devices to be closer to the actual
device. This is achieved by using either the NUMA node ID of the region, or
of the parent. By doing this we can have everything above the region based
on the region, and everything below the region based on the nvdimm bus.
The current async_probe test code is only testing one device allocated
prior to driver load and only loading one device afterwards. Instead of
doing things this way it makes much more sense to load one device per CPU
in order to actually stress the async infrastructure. By doing this we
should see
Use the device specific version of the async_schedule commands to defer
various tasks related to power management. By doing this we should see a
slight improvement in performance as any device that is sensitive to
latency/locality in the setup will now be initializing on the node closest
to the
Introduce four new variants of the async_schedule_ functions that allow
scheduling on a specific NUMA node.
The first two functions, async_schedule_near and
async_schedule_near_domain, end up mapping to async_schedule and
async_schedule_domain, but provide NUMA node specific functionality. They
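A minimal sketch of the interface this excerpt describes, roughly as it
reads in include/linux/async.h after the series landed (shown for
illustration only; consult the tree for the authoritative declarations):

```c
/* NUMA-aware variants: run @func from a CPU close to @node. */
async_cookie_t async_schedule_near(async_func_t func, void *data, int node);
async_cookie_t async_schedule_near_domain(async_func_t func, void *data,
					  int node,
					  struct async_domain *domain);

/* Device-aware wrappers derive the node from the device itself. */
static inline async_cookie_t async_schedule_dev(async_func_t func,
						struct device *dev)
{
	return async_schedule_near(func, dev, dev_to_node(dev));
}

static inline async_cookie_t
async_schedule_dev_domain(async_func_t func, struct device *dev,
			  struct async_domain *domain)
{
	return async_schedule_near_domain(func, dev, dev_to_node(dev), domain);
}
```

The pre-existing async_schedule(func, data) then becomes the degenerate
case that passes NUMA_NO_NODE.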
Call the asynchronous probe routines on a CPU local to the device node. By
doing this we should be able to improve our initialization time
significantly as we can avoid having to access the device from a remote
node which may introduce higher latency.
For example, in the case of initializing
This patch set provides functionality that will help to improve the
locality of the async_schedule calls used to provide deferred
initialization.
This patch set originally started out focused on just the one call to
async_schedule_domain in the nvdimm tree that was being used to defer the
Try to consolidate all of the locking and unlocking of both the parent and
device when attaching or removing a driver from a given device.
To do that I first consolidated the lock pattern into two functions
__device_driver_lock and __device_driver_unlock. After doing that I then
created functions
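A sketch of the consolidated lock pattern described above (simplified for
illustration; the merged version also consults the bus's need_parent_lock
before taking the parent lock):

```c
static void __device_driver_lock(struct device *dev, struct device *parent)
{
	/* Lock the parent first, then the device, to keep ordering
	 * consistent across attach and detach paths. */
	if (parent)
		device_lock(parent);
	device_lock(dev);
}

static void __device_driver_unlock(struct device *dev, struct device *parent)
{
	/* Unlock in the reverse order of __device_driver_lock(). */
	device_unlock(dev);
	if (parent)
		device_unlock(parent);
}
```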
Provide a new function, queue_work_node, which is meant to schedule work on
a "random" CPU of the requested NUMA node. The main motivation for this is
to help asynchronous init improve boot times for devices that are local to
a specific node.
For now we just default to the first
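The resulting interface, roughly as merged (a sketch; the fallback
behavior for nodes with no online CPUs lives in kernel/workqueue.c):

```c
/* Queue @work on @wq, preferring some CPU of @node; fall back to a
 * default CPU when the node has no online CPUs.  Returns false if
 * @work was already pending on a queue. */
bool queue_work_node(int node, struct workqueue_struct *wq,
		     struct work_struct *work);
```

Async init can then pass dev_to_node(dev) so the probe work runs
memory-local to the device being initialized.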
Add an additional bit flag to the device struct named "dead".
This additional flag provides a guarantee that when a device_del is
executed on a given interface an async worker will not attempt to attach
the driver following the earlier device_del call. Previously this
guarantee was not present
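A sketch of how the guarantee works (simplified and hedged: in the merged
code the flag ended up in struct device_private, reached as dev->p->dead,
and the worker is __device_attach_async_helper):

```c
/* device_del() path: mark the device dead under its lock so no async
 * probe scheduled earlier can attach a driver after this point. */
device_lock(dev);
dev->p->dead = true;
device_unlock(dev);

/* Async probe worker: re-check the flag under the same lock before
 * attempting to bind a driver. */
device_lock(dev);
if (!dev->p->dead && !dev->driver)
	driver_probe_device(drv, dev);
device_unlock(dev);
```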
Probe devices asynchronously instead of the driver. This results in us
seeing the same behavior if the device is registered before the driver or
after. This way we can avoid serializing the initialization should the
driver not be loaded until after the devices have already been added.
The
On Wed, Dec 12 2018 at 4:15pm -0500,
Theodore Y. Ts'o wrote:
> On Wed, Dec 12, 2018 at 12:50:47PM -0500, Mike Snitzer wrote:
> > On Wed, Dec 12 2018 at 11:12am -0500,
> > Christoph Hellwig wrote:
> >
> > > Does it really make sense to enhance dm-snapshot? I thought all serious
> > > users of
On Wed, Dec 12, 2018 at 12:50:47PM -0500, Mike Snitzer wrote:
> On Wed, Dec 12 2018 at 11:12am -0500,
> Christoph Hellwig wrote:
>
> > Does it really make sense to enhance dm-snapshot? I thought all serious
> > users of snapshots had moved on to dm-thinp?
>
> There are cases where dm-snapshot
On Wed, 2018-12-12 at 12:50 -0500, Mike Snitzer wrote:
> On Wed, Dec 12 2018 at 11:12am -0500,
> Christoph Hellwig wrote:
>
> > Does it really make sense to enhance dm-snapshot? I thought all serious
> > users of snapshots had moved on to dm-thinp?
>
> There are cases where dm-snapshot is
On Wed, Dec 12 2018 at 11:12am -0500,
Christoph Hellwig wrote:
> Does it really make sense to enhance dm-snapshot? I thought all serious
> users of snapshots had moved on to dm-thinp?
There are cases where dm-snapshot is still useful for people. But those
are very niche users. I'm not
Does it really make sense to enhance dm-snapshot? I thought all serious
users of snapshots had moved on to dm-thinp?
_______________________________________________
Linux-nvdimm mailing list
Linux-nvdimm@lists.01.org
https://lists.01.org/mailman/listinfo/linux-nvdimm
On Fri, 31 Aug 2018 17:42:55 +0800 Jan Kara wrote:
> On Fri 31-08-18 09:38:09, Dave Chinner wrote:
> > On Thu, Aug 30, 2018 at 03:47:32PM -0400, Mikulas Patocka wrote:
> > >
> > >
> > > On Thu, 30 Aug 2018, Jeff Moyer wrote:
> > >
> > > > Mike Snitzer writes:
> > > >
On Tue, 2018-12-11 at 13:25 -0700, Dave Jiang wrote:
> Adding an nvdimm key format type to encrypted keys in order to limit the
> size of the key to 32 bytes.
>
> Signed-off-by: Dave Jiang
> Signed-off-by: Dan Williams
Acked-by: Mimi Zohar
> ---
>