On Mon, Jan 23, 2017 at 10:03 AM, Christoph Hellwig wrote:
> On Mon, Jan 23, 2017 at 09:14:04AM -0800, Dan Williams wrote:
>> The use case that we have now is distinguishing volatile vs persistent
>> memory (brd vs pmem).
>
> brd is a development tool, so until we have other reasons for this
>
On Mon, Jan 23, 2017 at 09:14:04AM -0800, Dan Williams wrote:
> The use case that we have now is distinguishing volatile vs persistent
> memory (brd vs pmem).
brd is a development tool, so until we have other reasons for this
abstraction (which I'm pretty sure will show up rather sooner than
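For illustration, the distinction Dan mentions (volatile brd vs. persistent pmem) could be carried by the DAX abstraction itself rather than inferred from the driver. A minimal sketch, assuming a hypothetical struct dax_device; none of these names come from a posted patch:

/*
 * Hypothetical sketch: let the dax_device advertise whether its memory is
 * persistent, so callers can distinguish pmem from a volatile backend such
 * as brd without knowing which driver they are talking to.
 */
#include <linux/types.h>

#define DAXDEV_F_PERSISTENT	(1UL << 0)	/* contents survive power loss */

struct dax_device {
	unsigned long flags;
	void *private;				/* driver context */
};

static inline bool dax_is_persistent(const struct dax_device *dax_dev)
{
	return dax_dev->flags & DAXDEV_F_PERSISTENT;
}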
On Mon, Jan 23, 2017 at 8:00 AM, Christoph Hellwig wrote:
> On Sun, Jan 22, 2017 at 11:10:04PM -0800, Dan Williams wrote:
>> How about we solve the copy_from_user() abuse first before we hijack
>> this thread for some future feature that afaics has no patches posted
>> yet.
>
> Solving
On Sun, Jan 22, 2017 at 09:30:23AM -0800, Dan Williams wrote:
> So are you saying we need a way to go from a block_device inode to a
> dax_device inode and then look up the dax_operations from there?
>
> A filesystem, if it so chooses, could mount on top of the dax_device
> inode directly?
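As a rough illustration of the lookup Dan is asking about, a filesystem could resolve its block_device to the companion dax_device once at mount time and keep that handle for the data path. The helper name below is invented for the sketch; no such interface had been posted at this point in the thread:

/*
 * Hypothetical sketch: go from the block_device a filesystem mounted on to
 * the dax_device fronting the same memory, then use that handle (and its
 * dax_operations) directly in the I/O path.  dax_get_by_bdev() is a made-up
 * name for illustration.
 */
#include <linux/blkdev.h>
#include <linux/errno.h>
#include <linux/fs.h>

struct dax_device;					/* opaque here */
struct dax_device *dax_get_by_bdev(struct block_device *bdev);

static int example_fs_attach_dax(struct super_block *sb)
{
	struct dax_device *dax_dev = dax_get_by_bdev(sb->s_bdev);

	if (!dax_dev)
		return -EOPNOTSUPP;	/* no DAX capability behind this bdev */

	sb->s_fs_info = dax_dev;	/* stash for the read/write/fault paths */
	return 0;
}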
On Sun, Jan 22, 2017 at 11:10:04PM -0800, Dan Williams wrote:
> How about we solve the copy_from_user() abuse first before we hijack
> this thread for some future feature that afaics has no patches posted
> yet.
Solving copy_from_user abuse first sounds perfectly fine to me. But
please do so
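For context, the "copy_from_user() abuse" is the generic DAX write path open-coding a cache-bypassing user copy that only makes sense for pmem. A driver-supplied copy hook is the general shape of the fix being discussed; a minimal sketch, with an illustrative function name and a deliberately simplified signature:

/*
 * Illustrative sketch of a pmem-owned copy routine.  The driver, not the
 * generic DAX code, decides how user data reaches its media: here a
 * cache-bypassing copy; a real implementation would also ensure any
 * partially written cache lines are flushed so the data is durable.
 */
#include <linux/uio.h>

static size_t pmem_dax_copy_from_iter(void *kaddr, size_t bytes,
				      struct iov_iter *i)
{
	return copy_from_iter_nocache(kaddr, bytes, i);
}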
On Mon, Jan 23, 2017 at 06:37:18AM +, Matthew Wilcox wrote:
> Wow, DAX devices look painful and awful. I certainly don't want to be
> exposing the memory fronted by my network filesystem to userspace to
> access. That just seems like a world of pain and bad experiences.
So what is your
On Sun, Jan 22, 2017 at 10:37 PM, Matthew Wilcox wrote:
> From: Christoph Hellwig [mailto:h...@lst.de]
>> On Sun, Jan 22, 2017 at 06:39:28PM +, Matthew Wilcox wrote:
>> > Two guests on the same physical machine (or a guest and a host) have access
>> > to the same set of physical addresses.
From: Christoph Hellwig [mailto:h...@lst.de]
> On Sun, Jan 22, 2017 at 06:39:28PM +, Matthew Wilcox wrote:
> > Two guests on the same physical machine (or a guest and a host) have access
> > to the same set of physical addresses. This might be an NV-DIMM, or it
> > might just be DRAM
On Sun, Jan 22, 2017 at 06:39:28PM +, Matthew Wilcox wrote:
> Two guests on the same physical machine (or a guest and a host) have access
> to the same set of physical addresses. This might be an NV-DIMM, or it might
> just be DRAM (for the purposes of reducing guest overhead). The network
From: Christoph Hellwig [mailto:h...@lst.de]
> On Sun, Jan 22, 2017 at 06:19:24PM +, Matthew Wilcox wrote:
> > No, I mean a network filesystem like 9p or cifs or nfs. If the memcpy
> > is supposed to be performed by the backing device
>
> struct backing_dev has no relation to the DAX code.
On Sun, Jan 22, 2017 at 06:19:24PM +, Matthew Wilcox wrote:
> No, I mean a network filesystem like 9p or cifs or nfs. If the memcpy
> is supposed to be performed by the backing device
struct backing_dev has no relation to the DAX code. Even more so what's
the point of doing a DAXish memcpy
From: Christoph Hellwig [mailto:h...@lst.de]
> On Sun, Jan 22, 2017 at 03:43:09PM +, Matthew Wilcox wrote:
> > In the case of a network filesystem being used to communicate with
> > a different VM on the same physical machine, there is no backing
> > device, just a network protocol.
>
>
On Sat, Jan 21, 2017 at 9:52 AM, Christoph Hellwig wrote:
> On Sat, Jan 21, 2017 at 04:28:52PM +, Matthew Wilcox wrote:
>> Of course, there may not be a backing device either!
>
> s/backing device/block device/ ? If so fully agreed. I like the dax_ops
> scheme, but we should go all the way
On Sun, Jan 22, 2017 at 03:43:09PM +, Matthew Wilcox wrote:
> In the case of a network filesystem being used to communicate with
> a different VM on the same physical machine, there is no backing
> device, just a network protocol.
Again, do you mean block device? For a filesystem that does
From: Christoph Hellwig [mailto:h...@lst.de]
> On Sat, Jan 21, 2017 at 04:28:52PM +, Matthew Wilcox wrote:
> > Of course, there may not be a backing device either!
>
> s/backing device/block device/ ? If so fully agreed. I like the dax_ops
> scheme, but we should go all the way and detangle
On Sat, Jan 21, 2017 at 04:28:52PM +, Matthew Wilcox wrote:
> Of course, there may not be a backing device either!
s/backing device/block device/ ? If so fully agreed. I like the dax_ops
scheme, but we should go all the way and detangle it from the block
device. I already brought up this
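To make the "go all the way" point concrete: the driver would register a dax_device of its own, with dax_operations attached to that object, instead of DAX being reached through block_device_operations. A sketch under those assumptions, with all names illustrative:

/*
 * Hypothetical sketch of a dax_device decoupled from the block device: the
 * driver allocates and registers it directly, and the operations hang off
 * the dax_device itself rather than off block_device_operations.
 */
#include <linux/pfn_t.h>
#include <linux/types.h>

struct dax_device;

struct dax_operations {
	/* map a device-relative page offset to a kernel address and pfn */
	long (*direct_access)(struct dax_device *dax_dev, pgoff_t pgoff,
			      long nr_pages, void **kaddr, pfn_t *pfn);
};

/* illustrative registration interface, independent of any gendisk/bdev */
struct dax_device *alloc_dax(void *private, const struct dax_operations *ops);
void put_dax(struct dax_device *dax_dev);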
From: Dan Williams [mailto:dan.j.willi...@intel.com]
> A couple weeks back, in the course of reviewing the memcpy_nocache()
> proposal from Brian, Linus subtly suggested that the pmem specific
> memcpy_to_pmem() routine be moved to be implemented at the driver
> level [1]:
Of course, there may
A couple weeks back, in the course of reviewing the memcpy_nocache()
proposal from Brian, Linus subtly suggested that the pmem specific
memcpy_to_pmem() routine be moved to be implemented at the driver
level [1]:
"Quite frankly, the whole 'memcpy_nocache()' idea or (ab-)using